<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Observer Magazine</title>
    <link>https://observermagazine.github.io</link>
    <description>A free, open-source Blazor WebAssembly showcase on .NET 10</description>
    <language>en-us</language>
    <lastBuildDate>Sun, 05 Apr 2026 07:12:28 GMT</lastBuildDate>
    <item>
      <title>The Dependency Inversion Principle: A Comprehensive Guide for .NET Developers</title>
      <link>https://observermagazine.github.io/blog/dependency-injection</link>
      <description>A deep dive into the Dependency Inversion Principle — the 'D' in SOLID — covering its history, formal definition, practical C# implementations, ASP.NET Core's built-in DI container, keyed services, testing strategies, common pitfalls, and real-world architecture patterns.</description>
      <pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/dependency-injection</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<p>Picture this. It is a Tuesday afternoon. You have inherited a ten-year-old ASP.NET application. The previous developer left three months ago and there is no documentation. You open the main order processing class and find this:</p>
<pre><code class="language-csharp">public class OrderProcessor
{
    public void ProcessOrder(Order order)
    {
        var db = new SqlConnection(&quot;Server=prod-db;Database=Orders;...&quot;);
        db.Open();

        var cmd = new SqlCommand(&quot;INSERT INTO Orders ...&quot;, db);
        cmd.ExecuteNonQuery();

        var smtp = new SmtpClient(&quot;smtp.company.com&quot;);
        smtp.Send(&quot;orders@company.com&quot;, order.CustomerEmail,
            &quot;Order Confirmation&quot;, $&quot;Your order {order.Id} is confirmed.&quot;);

        var logger = new StreamWriter(&quot;C:\\Logs\\orders.log&quot;, append: true);
        logger.WriteLine($&quot;{DateTime.Now}: Order {order.Id} processed.&quot;);
        logger.Close();

        db.Close();
    }
}
</code></pre>
<p>You need to add a feature. The business wants to send SMS notifications in addition to email. You also need to write a unit test for the existing logic. You stare at the code and realize that you cannot test <code>ProcessOrder</code> without a live SQL Server, a live SMTP server, and write access to <code>C:\Logs\</code>. You cannot swap the email notification for an SMS notification without rewriting the method. You cannot change the database without changing this class. Every single dependency is hardcoded. Every change requires modifying this class. Every test requires the entire production infrastructure.</p>
<p>This is the problem that the Dependency Inversion Principle exists to solve. Not just as an academic exercise, not just as a bullet point on a job interview whiteboard, but as a practical engineering tool that determines whether your code is a flexible asset or a brittle liability.</p>
<h2 id="part-1-the-origins-where-the-dependency-inversion-principle-came-from">Part 1: The Origins — Where the Dependency Inversion Principle Came From</h2>
<p>The Dependency Inversion Principle — universally abbreviated as DIP — is the &quot;D&quot; in SOLID. Before we can appreciate what it means, we need to understand where it came from and why Robert C. Martin felt it was important enough to formalize.</p>
<h3 id="robert-c.martin-and-the-c-report">Robert C. Martin and the C++ Report</h3>
<p>Robert Cecil Martin, better known as &quot;Uncle Bob,&quot; first articulated the Dependency Inversion Principle in a paper published in the C++ Report in May 1996. The paper was titled simply &quot;The Dependency Inversion Principle,&quot; and it was the third in a series of columns Martin wrote on object-oriented design principles for that magazine. The earlier columns covered the Open-Closed Principle and the Liskov Substitution Principle.</p>
<p>Martin opened the paper by observing that most software does not start out with bad design. Developers do not intentionally create rigid, fragile, immobile code. Instead, software degrades over time as requirements change and modifications accumulate. He identified three symptoms of degraded design: rigidity (difficulty making changes because every change cascades through the system), fragility (changes cause unexpected breakages in seemingly unrelated parts), and immobility (inability to reuse modules in other contexts because they are entangled with their dependencies).</p>
<p>Martin argued that the root cause of all three symptoms is the same: high-level modules depend on low-level modules. In traditional structured programming — the kind taught in computer science programs throughout the 1970s and 1980s — the natural design approach is top-down decomposition. You start with the high-level policy (&quot;process an order&quot;) and decompose it into lower-level details (&quot;write to database,&quot; &quot;send email,&quot; &quot;log to file&quot;). The result is a dependency graph where high-level modules import and call low-level modules directly. When the low-level details change — a new database, a different email provider, a different logging framework — the high-level policy must change too. The important stuff depends on the unimportant stuff.</p>
<p>Martin's insight was that this dependency direction should be inverted.</p>
<h3 id="the-solid-acronym">The SOLID Acronym</h3>
<p>Martin collected the Dependency Inversion Principle together with four other design principles — Single Responsibility, Open-Closed, Liskov Substitution, and Interface Segregation — in his 2000 paper &quot;Design Principles and Design Patterns.&quot; Around 2004, software engineer Michael Feathers noticed that the initials of these five principles spelled SOLID and coined the acronym. The name stuck. Today, SOLID is one of the most recognized concepts in software engineering, and DIP sits as its capstone.</p>
<p>Martin himself noted that DIP is not truly an independent principle. It is, in many ways, the structural consequence of rigorously applying the Open-Closed Principle and the Liskov Substitution Principle together. If your code is open for extension but closed for modification (OCP), and if your abstractions are substitutable (LSP), then your dependency arrows will naturally point toward abstractions rather than concrete details. DIP formalizes and names this pattern so that developers can reason about it explicitly.</p>
<h3 id="intellectual-ancestors">Intellectual Ancestors</h3>
<p>Martin did not invent the idea of depending on abstractions in a vacuum. The concept has roots in several earlier ideas. Bertrand Meyer's 1988 book &quot;Object-Oriented Software Construction&quot; introduced the Open-Closed Principle. Barbara Liskov's 1987 keynote at the OOPSLA conference (later formalized in a 1994 paper with Jeannette Wing) established the substitutability principle that bears her name. The Gang of Four's &quot;Design Patterns&quot; book (1994) showed dozens of patterns — Strategy, Observer, Factory, Template Method — that rely on programming to interfaces rather than implementations.</p>
<p>What Martin did was distill these ideas into a crisp, two-part formal statement and give it a name that made it memorable and teachable. That formal statement is what we will examine next.</p>
<h2 id="part-2-the-formal-definition-two-rules-that-change-everything">Part 2: The Formal Definition — Two Rules That Change Everything</h2>
<p>The Dependency Inversion Principle, as stated by Robert C. Martin in his 1996 paper, consists of two parts:</p>
<p><strong>A.</strong> High-level modules should not depend on low-level modules. Both should depend on abstractions.</p>
<p><strong>B.</strong> Abstractions should not depend on details. Details should depend on abstractions.</p>
<p>These two sentences are deceptively simple. Every word matters. Let us unpack them carefully.</p>
<h3 id="what-are-high-level-and-low-level-modules">What Are High-Level and Low-Level Modules?</h3>
<p>A &quot;module&quot; in Martin's original C++ context is roughly equivalent to a class or a namespace in C#. The distinction between &quot;high-level&quot; and &quot;low-level&quot; is about proximity to business policy versus proximity to implementation detail.</p>
<p>High-level modules contain the business rules, the policy decisions, the orchestration logic — the stuff that makes your application uniquely valuable. In an e-commerce system, the high-level module is the order processing logic that decides when to charge a customer, when to send a confirmation, and when to initiate shipping. In a blog engine, the high-level module is the content pipeline that reads markdown, resolves front matter, and assembles the output.</p>
<p>Low-level modules contain the implementation details — the stuff that can be swapped out without changing the business policy. The specific database you write to. The specific email provider you use. The specific file system path where logs are written. The specific HTTP client that calls an external API.</p>
<p>The critical insight of Part A is that the direction of dependency should not follow the direction of the call. Just because the order processor <em>calls</em> the database does not mean the order processor should <em>depend on</em> the database. Both should depend on an abstraction — an interface or abstract class — that represents the concept of &quot;storing orders&quot; without specifying how.</p>
<h3 id="what-are-abstractions-and-details">What Are Abstractions and Details?</h3>
<p>Part B makes a subtler point. It is not enough to introduce an abstraction. The abstraction itself must not be contaminated by details of any particular implementation.</p>
<p>Consider this interface:</p>
<pre><code class="language-csharp">public interface IOrderRepository
{
    Task&lt;Order?&gt; GetByIdAsync(Guid id);
    Task SaveAsync(Order order);
    SqlConnection GetConnection(); // Leaking detail!
}
</code></pre>
<p>The first two methods are proper abstractions — they describe what the repository does without revealing how. The third method violates Part B. It exposes <code>SqlConnection</code>, which is a detail of the SQL Server implementation. Any code that depends on <code>IOrderRepository</code> now transitively depends on <code>Microsoft.Data.SqlClient</code>. If you later want to implement the repository with PostgreSQL, MongoDB, or an in-memory store, every consumer of <code>IOrderRepository</code> must change.</p>
<p>A clean abstraction looks like this:</p>
<pre><code class="language-csharp">public interface IOrderRepository
{
    Task&lt;Order?&gt; GetByIdAsync(Guid id);
    Task SaveAsync(Order order);
    Task&lt;IReadOnlyList&lt;Order&gt;&gt; GetRecentAsync(int count);
}
</code></pre>
<p>Every method describes a business-level operation. No method reveals anything about the storage mechanism. The abstraction depends on the domain model (<code>Order</code>), not on infrastructure types (<code>SqlConnection</code>, <code>DbContext</code>, <code>MongoCollection&lt;T&gt;</code>).</p>
<h3 id="why-inversion">Why &quot;Inversion&quot;?</h3>
<p>Martin himself addressed this question directly in his paper. He explained that in traditional structured programming — the procedural, top-down decomposition approach that dominated software engineering through the 1970s and 1980s — the natural dependency direction is from high-level to low-level. You start with <code>main()</code>, which calls <code>processOrders()</code>, which calls <code>writeToDatabase()</code>. Each layer depends on the layer beneath it.</p>
<p>Object-oriented programming with DIP inverts this relationship. The high-level module defines the abstraction (the interface). The low-level module implements it. Both depend on the abstraction, but the abstraction lives with the high-level module, not the low-level one. The dependency arrow between the high-level module and the low-level module has been reversed — inverted — compared to what you would get from naive top-down design.</p>
<p>This is the &quot;inversion.&quot; It is not about inverting the call direction (the high-level module still calls the low-level module at runtime). It is about inverting the compile-time dependency direction.</p>
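<p>A minimal sketch of that inversion, using the order example (the namespace layout is illustrative, and <code>Order</code> is the domain type from the introduction):</p>
<pre><code class="language-csharp">namespace Ordering.Core
{
    // The high-level module owns the abstraction.
    public interface IOrderStore
    {
        void Save(Order order);
    }

    public sealed class OrderService
    {
        private readonly IOrderStore _store;

        public OrderService(IOrderStore store) =&gt; _store = store;

        // At runtime the call still flows &quot;down&quot; into the store...
        public void Place(Order order) =&gt; _store.Save(order);
    }
}

namespace Ordering.Infrastructure
{
    // ...but at compile time the dependency arrow points &quot;up&quot;:
    // the detail references the abstraction, never the reverse.
    public sealed class SqlOrderStore : Ordering.Core.IOrderStore
    {
        public void Save(Order order) { /* SQL details live here */ }
    }
}
</code></pre>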
<h2 id="part-3-dip-in-plain-english-the-wall-outlet-analogy">Part 3: DIP in Plain English — The Wall Outlet Analogy</h2>
<p>If the formal definition feels abstract, here is an analogy that makes it concrete.</p>
<p>Think about the electrical outlet in your wall. Your laptop charger, your phone charger, your desk lamp, and your coffee maker all plug into the same outlet. The outlet does not know or care what is plugged into it. The coffee maker does not know or care whether the outlet is connected to a coal power plant, a solar panel, a wind turbine, or a nuclear reactor. Both the devices and the power sources depend on a shared abstraction: the electrical outlet standard (in the United States, NEMA 5-15).</p>
<p>Now imagine a world without this abstraction. Every appliance is hardwired directly to a specific power source. Your coffee maker has a copper wire that runs all the way to a specific coal plant in West Virginia. If that plant shuts down, your coffee maker stops working. If you want to switch to solar power, you need to buy a new coffee maker — one that is hardwired to a solar panel.</p>
<p>That hardwired world is what your code looks like when high-level modules depend directly on low-level modules. The electrical outlet standard is the interface. DIP says: make your code work like the real world works, with standardized outlets (interfaces) that decouple producers from consumers.</p>
<p>Another analogy that Martin himself used in his 1996 paper involves a button and a lamp. A <code>Button</code> object senses the external environment (whether a user pressed it). A <code>Lamp</code> object controls a light. Without DIP, the <code>Button</code> depends directly on the <code>Lamp</code> — it calls <code>lamp.TurnOn()</code> and <code>lamp.TurnOff()</code>. If you later want the same button to control a motor, a heater, or an alarm, you have to modify the <code>Button</code> class. With DIP, the <code>Button</code> depends on an abstraction — perhaps <code>ISwitchableDevice</code> — and the <code>Lamp</code>, <code>Motor</code>, <code>Heater</code>, and <code>Alarm</code> all implement that abstraction. The <code>Button</code> never changes.</p>
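<p>A minimal C# sketch of the button-and-lamp example (the interface name <code>ISwitchableDevice</code> follows the text; the rest of the API is illustrative):</p>
<pre><code class="language-csharp">public interface ISwitchableDevice
{
    void TurnOn();
    void TurnOff();
}

public sealed class Lamp : ISwitchableDevice
{
    public void TurnOn() =&gt; Console.WriteLine(&quot;Lamp on&quot;);
    public void TurnOff() =&gt; Console.WriteLine(&quot;Lamp off&quot;);
}

public sealed class Button
{
    private readonly ISwitchableDevice _device;
    private bool _isOn;

    public Button(ISwitchableDevice device) =&gt; _device = device;

    // Button toggles whatever it was given. Adding a Motor or Heater
    // later means implementing ISwitchableDevice, not editing Button.
    public void Press()
    {
        _isOn = !_isOn;
        if (_isOn) _device.TurnOn();
        else _device.TurnOff();
    }
}
</code></pre>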
<h2 id="part-4-dip-is-not-dependency-injection-but-they-are-friends">Part 4: DIP Is Not Dependency Injection (But They Are Friends)</h2>
<p>This is the single most common source of confusion, so let us address it directly.</p>
<p><strong>Dependency Inversion Principle</strong> (DIP) is a design principle. It tells you how to structure the relationships between your modules. It says: depend on abstractions, not on concrete implementations. It is a rule about the direction of your dependency arrows.</p>
<p><strong>Dependency Injection</strong> (DI) is a technique — a specific mechanism for providing dependencies to a class from the outside rather than having the class create them internally. Constructor injection, property injection, and method injection are all forms of DI.</p>
<p><strong>Inversion of Control</strong> (IoC) is a broader design principle in which the flow of control is inverted compared to traditional programming. Instead of your code calling library code, library code calls your code (the &quot;Hollywood Principle: don't call us, we'll call you&quot;). DI is one implementation of IoC.</p>
<p><strong>IoC Container</strong> (also called a DI Container) is a framework that automates dependency injection. In .NET, the built-in <code>Microsoft.Extensions.DependencyInjection</code> is an IoC container. Third-party containers such as Autofac and Lamar (the successor to the now-retired StructureMap) are also available.</p>
<p>Here is how they relate:</p>
<ul>
<li>DIP is the <strong>principle</strong> (depend on abstractions).</li>
<li>DI is the <strong>technique</strong> (pass dependencies in from outside).</li>
<li>IoC is the <strong>architectural pattern</strong> (invert who controls the flow).</li>
<li>IoC Container is the <strong>tool</strong> (automate the wiring).</li>
</ul>
<p>You can follow DIP without using DI. For example, you could use the Factory pattern or the Service Locator pattern (though the latter is widely regarded as an anti-pattern, because it hides a class's dependencies) to provide abstractions to your high-level modules. You can use DI without following DIP — you can inject concrete classes directly without any interfaces. But in practice, DIP and DI work together beautifully. DIP tells you to program against interfaces. DI gives you a clean mechanism for providing the implementations at runtime. And an IoC container automates the plumbing so you do not have to wire everything up by hand.</p>
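<p>To make the distinction concrete, here is a sketch of DIP <em>without</em> DI. All the types (<code>IClock</code>, <code>SystemClock</code>, <code>ClockFactory</code>, <code>ReportGenerator</code>) are hypothetical, invented purely for illustration: the consumer depends only on an abstraction, but obtains it from a factory rather than having it injected.</p>
<pre><code class="language-csharp">public interface IClock
{
    DateTime UtcNow { get; }
}

public sealed class SystemClock : IClock
{
    public DateTime UtcNow =&gt; DateTime.UtcNow;
}

public static class ClockFactory
{
    // Swap the returned implementation here without touching any consumer.
    public static IClock Create() =&gt; new SystemClock();
}

public sealed class ReportGenerator
{
    // Satisfies DIP (depends on IClock, not SystemClock),
    // yet uses no dependency injection at all.
    private readonly IClock _clock = ClockFactory.Create();

    public string Stamp() =&gt; $&quot;Report generated at {_clock.UtcNow:O}&quot;;
}
</code></pre>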
<p>Martin Fowler published an influential article in January 2004 titled &quot;Inversion of Control Containers and the Dependency Injection pattern,&quot; which helped clarify the distinction between these concepts. In that article, Fowler actually coined the term &quot;Dependency Injection&quot; because he felt &quot;Inversion of Control&quot; was too generic — many things in software involve inverted control (event handlers, template methods, etc.), and he wanted a more specific name for the pattern of passing dependencies to a class.</p>
<h2 id="part-5-dip-in-c-from-theory-to-code">Part 5: DIP in C# — From Theory to Code</h2>
<p>Let us return to the order processing example from the introduction and refactor it step by step.</p>
<h3 id="step-1-identify-the-dependencies">Step 1: Identify the Dependencies</h3>
<p>The original <code>OrderProcessor</code> depends on three concrete things:</p>
<ol>
<li><code>SqlConnection</code> — for persisting orders to a database.</li>
<li><code>SmtpClient</code> — for sending email notifications.</li>
<li><code>StreamWriter</code> to a specific file path — for logging.</li>
</ol>
<p>Each of these is a low-level implementation detail. The high-level policy — &quot;when an order is placed, persist it, notify the customer, and log the event&quot; — should not depend on any of them.</p>
<h3 id="step-2-define-abstractions">Step 2: Define Abstractions</h3>
<p>We create interfaces that capture the business-level concepts without revealing implementation details:</p>
<pre><code class="language-csharp">public interface IOrderRepository
{
    Task SaveAsync(Order order, CancellationToken cancellationToken = default);
    Task&lt;Order?&gt; GetByIdAsync(Guid id, CancellationToken cancellationToken = default);
}

public interface INotificationService
{
    Task SendOrderConfirmationAsync(
        Order order,
        CancellationToken cancellationToken = default);
}

public interface IOrderLogger
{
    void LogOrderProcessed(Order order);
    void LogOrderFailed(Order order, Exception exception);
}
</code></pre>
<p>Notice several things about these interfaces:</p>
<ul>
<li>They use domain language (&quot;order confirmation,&quot; &quot;order processed&quot;) rather than infrastructure language (&quot;SMTP,&quot; &quot;SQL,&quot; &quot;file&quot;).</li>
<li>They include <code>CancellationToken</code> parameters where appropriate, because cancellation is a concept that belongs at the abstraction level.</li>
<li>They are small and focused. <code>INotificationService</code> does not also handle logging. <code>IOrderRepository</code> does not also handle notifications. This is the Interface Segregation Principle (the &quot;I&quot; in SOLID) working alongside DIP.</li>
<li>They return and accept domain types (<code>Order</code>), not infrastructure types (<code>SqlDataReader</code>, <code>MailMessage</code>).</li>
</ul>
<h3 id="step-3-implement-the-abstractions">Step 3: Implement the Abstractions</h3>
<p>Now we write concrete implementations for each interface:</p>
<pre><code class="language-csharp">public sealed class SqlOrderRepository : IOrderRepository
{
    private readonly string _connectionString;

    public SqlOrderRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task SaveAsync(Order order, CancellationToken cancellationToken = default)
    {
        await using var connection = new SqlConnection(_connectionString);
        await connection.OpenAsync(cancellationToken);

        await using var command = new SqlCommand(
            &quot;INSERT INTO Orders (Id, CustomerId, Total, CreatedAt) &quot; +
            &quot;VALUES (@Id, @CustomerId, @Total, @CreatedAt)&quot;, connection);

        command.Parameters.AddWithValue(&quot;@Id&quot;, order.Id);
        command.Parameters.AddWithValue(&quot;@CustomerId&quot;, order.CustomerId);
        command.Parameters.AddWithValue(&quot;@Total&quot;, order.Total);
        command.Parameters.AddWithValue(&quot;@CreatedAt&quot;, order.CreatedAt);

        await command.ExecuteNonQueryAsync(cancellationToken);
    }

    public async Task&lt;Order?&gt; GetByIdAsync(
        Guid id, CancellationToken cancellationToken = default)
    {
        await using var connection = new SqlConnection(_connectionString);
        await connection.OpenAsync(cancellationToken);

        await using var command = new SqlCommand(
            &quot;SELECT Id, CustomerId, Total, CreatedAt FROM Orders WHERE Id = @Id&quot;,
            connection);
        command.Parameters.AddWithValue(&quot;@Id&quot;, id);

        await using var reader = await command.ExecuteReaderAsync(cancellationToken);
        if (await reader.ReadAsync(cancellationToken))
        {
            return new Order
            {
                Id = reader.GetGuid(0),
                CustomerId = reader.GetGuid(1),
                Total = reader.GetDecimal(2),
                CreatedAt = reader.GetDateTime(3)
            };
        }

        return null;
    }
}
</code></pre>
<pre><code class="language-csharp">public sealed class EmailNotificationService : INotificationService
{
    private readonly SmtpClient _smtpClient;
    private readonly string _fromAddress;

    public EmailNotificationService(SmtpClient smtpClient, string fromAddress)
    {
        _smtpClient = smtpClient;
        _fromAddress = fromAddress;
    }

    public async Task SendOrderConfirmationAsync(
        Order order, CancellationToken cancellationToken = default)
    {
        var message = new MailMessage(
            _fromAddress,
            order.CustomerEmail,
            &quot;Order Confirmation&quot;,
            $&quot;Your order {order.Id} for ${order.Total:F2} is confirmed.&quot;);

        await _smtpClient.SendMailAsync(message, cancellationToken);
    }
}
</code></pre>
<pre><code class="language-csharp">public sealed class SerilogOrderLogger : IOrderLogger
{
    private readonly ILogger _logger;

    public SerilogOrderLogger(ILogger logger)
    {
        _logger = logger;
    }

    public void LogOrderProcessed(Order order)
    {
        _logger.Information(
            &quot;Order {OrderId} processed for customer {CustomerId}, total {Total}&quot;,
            order.Id, order.CustomerId, order.Total);
    }

    public void LogOrderFailed(Order order, Exception exception)
    {
        _logger.Error(
            exception,
            &quot;Order {OrderId} failed for customer {CustomerId}&quot;,
            order.Id, order.CustomerId);
    }
}
</code></pre>
<h3 id="step-4-refactor-the-high-level-module">Step 4: Refactor the High-Level Module</h3>
<p>Now the <code>OrderProcessor</code> depends only on abstractions:</p>
<pre><code class="language-csharp">public sealed class OrderProcessor
{
    private readonly IOrderRepository _repository;
    private readonly INotificationService _notificationService;
    private readonly IOrderLogger _logger;

    public OrderProcessor(
        IOrderRepository repository,
        INotificationService notificationService,
        IOrderLogger logger)
    {
        _repository = repository;
        _notificationService = notificationService;
        _logger = logger;
    }

    public async Task ProcessOrderAsync(
        Order order, CancellationToken cancellationToken = default)
    {
        try
        {
            await _repository.SaveAsync(order, cancellationToken);
            await _notificationService.SendOrderConfirmationAsync(
                order, cancellationToken);
            _logger.LogOrderProcessed(order);
        }
        catch (Exception ex)
        {
            _logger.LogOrderFailed(order, ex);
            throw;
        }
    }
}
</code></pre>
<p>Compare this to the original. The <code>OrderProcessor</code> no longer knows about SQL Server, SMTP, or the file system. It expresses the business policy: save the order, notify the customer, log the result. That is all it does. That is all it should do.</p>
<h3 id="step-5-wire-it-up">Step 5: Wire It Up</h3>
<p>In an ASP.NET Core application, you register your services in <code>Program.cs</code>:</p>
<pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);

// Register abstractions with their implementations
builder.Services.AddScoped&lt;IOrderRepository&gt;(sp =&gt;
    new SqlOrderRepository(
        builder.Configuration.GetConnectionString(&quot;Orders&quot;)!));

builder.Services.AddScoped&lt;INotificationService&gt;(sp =&gt;
    new EmailNotificationService(
        new SmtpClient(builder.Configuration[&quot;Smtp:Host&quot;]),
        builder.Configuration[&quot;Smtp:FromAddress&quot;]!));

builder.Services.AddSingleton&lt;IOrderLogger&gt;(sp =&gt;
    new SerilogOrderLogger(Log.Logger));

// Register the high-level module
builder.Services.AddScoped&lt;OrderProcessor&gt;();

var app = builder.Build();
</code></pre>
<p>When ASP.NET Core needs to create an <code>OrderProcessor</code>, the DI container automatically resolves <code>IOrderRepository</code>, <code>INotificationService</code>, and <code>IOrderLogger</code> and passes the registered implementations to the constructor. The <code>OrderProcessor</code> never knows — and never needs to know — which implementations it receives.</p>
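<p>The same constructor the container satisfies in production can be satisfied by hand in a test. Below is a hand-rolled fake for <code>IOrderRepository</code> (a mocking library such as Moq or NSubstitute would work equally well; <code>fakeNotifier</code> and <code>fakeLogger</code> stand for analogous fakes of the other two interfaces):</p>
<pre><code class="language-csharp">public sealed class FakeOrderRepository : IOrderRepository
{
    public List&lt;Order&gt; Saved { get; } = new();

    public Task SaveAsync(Order order, CancellationToken cancellationToken = default)
    {
        Saved.Add(order);
        return Task.CompletedTask;
    }

    public Task&lt;Order?&gt; GetByIdAsync(Guid id, CancellationToken cancellationToken = default)
        =&gt; Task.FromResult(Saved.FirstOrDefault(o =&gt; o.Id == id));
}

// In a test, wire everything by hand: no SQL Server, no SMTP, no file system.
var repository = new FakeOrderRepository();
var processor = new OrderProcessor(repository, fakeNotifier, fakeLogger);
await processor.ProcessOrderAsync(order);
// The order now sits in repository.Saved, ready to assert against.
</code></pre>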
<h2 id="part-6-dip-and-asp.net-cores-built-in-di-container">Part 6: DIP and ASP.NET Core's Built-In DI Container</h2>
<p>ASP.NET Core was designed from the ground up with dependency injection as a first-class citizen. The entire framework follows DIP. When you register middleware, configure authentication, add logging, or set up Entity Framework Core, you are registering implementations against abstractions that the framework resolves at runtime.</p>
<h3 id="service-lifetimes">Service Lifetimes</h3>
<p>The built-in DI container in <code>Microsoft.Extensions.DependencyInjection</code> supports three service lifetimes:</p>
<p><strong>Transient</strong> — a new instance is created every time the service is requested. Use this for lightweight, stateless services where creating a new instance is cheap. Register with <code>AddTransient&lt;TService, TImplementation&gt;()</code>.</p>
<pre><code class="language-csharp">builder.Services.AddTransient&lt;IEmailSender, SmtpEmailSender&gt;();
</code></pre>
<p><strong>Scoped</strong> — one instance is created per scope. In ASP.NET Core, a scope corresponds to a single HTTP request. Every service resolved within the same request gets the same instance. Use this for services that hold per-request state, like an Entity Framework <code>DbContext</code>. Register with <code>AddScoped&lt;TService, TImplementation&gt;()</code>.</p>
<pre><code class="language-csharp">builder.Services.AddScoped&lt;IOrderRepository, EfOrderRepository&gt;();
</code></pre>
<p><strong>Singleton</strong> — one instance for the entire lifetime of the application. The first time the service is requested, an instance is created; every subsequent request gets the same instance. Use this for expensive-to-create objects, configuration wrappers, and services that maintain application-wide state. Register with <code>AddSingleton&lt;TService, TImplementation&gt;()</code>.</p>
<pre><code class="language-csharp">builder.Services.AddSingleton&lt;ICacheService, MemoryCacheService&gt;();
</code></pre>
<p>A common pitfall is injecting a scoped service into a singleton. The scoped service will be captured by the singleton and effectively become a singleton itself (a &quot;captive dependency&quot;), which can cause data leakage between requests. ASP.NET Core will throw an <code>InvalidOperationException</code> at startup if you enable scope validation (which is on by default in the Development environment).</p>
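<p>A sketch of the pitfall (the <code>ReportCache</code> class is invented for illustration):</p>
<pre><code class="language-csharp">builder.Services.AddScoped&lt;IOrderRepository, EfOrderRepository&gt;();

// BAD: a singleton that takes a scoped dependency. The repository
// resolved for the first request would be captured for the app's lifetime.
builder.Services.AddSingleton&lt;ReportCache&gt;();

public sealed class ReportCache
{
    private readonly IOrderRepository _repository; // captive dependency

    public ReportCache(IOrderRepository repository) =&gt; _repository = repository;
}
</code></pre>
<p>With scope validation enabled, <code>builder.Build()</code> fails fast instead of letting the captive dependency slip into production.</p>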
<h3 id="keyed-services.net-8">Keyed Services (.NET 8+)</h3>
<p>Starting with .NET 8, the built-in DI container supports keyed services. This solves a long-standing problem: what if you have multiple implementations of the same interface and you need to resolve a specific one in different places?</p>
<p>Before keyed services, you had three unappealing options: inject <code>IEnumerable&lt;INotificationService&gt;</code> and filter manually, write a custom factory, or use the service locator anti-pattern. Keyed services provide a clean, built-in solution.</p>
<pre><code class="language-csharp">// Register multiple implementations with different keys
builder.Services.AddKeyedScoped&lt;INotificationService, EmailNotificationService&gt;(&quot;email&quot;);
builder.Services.AddKeyedScoped&lt;INotificationService, SmsNotificationService&gt;(&quot;sms&quot;);
builder.Services.AddKeyedScoped&lt;INotificationService, PushNotificationService&gt;(&quot;push&quot;);
</code></pre>
<p>Resolve a specific implementation using the <code>[FromKeyedServices]</code> attribute:</p>
<pre><code class="language-csharp">public class OrderProcessor
{
    private readonly INotificationService _emailSender;
    private readonly INotificationService _smsSender;

    public OrderProcessor(
        [FromKeyedServices(&quot;email&quot;)] INotificationService emailSender,
        [FromKeyedServices(&quot;sms&quot;)] INotificationService smsSender)
    {
        _emailSender = emailSender;
        _smsSender = smsSender;
    }

    public async Task ProcessOrderAsync(Order order, CancellationToken ct = default)
    {
        // Send both email and SMS
        await _emailSender.SendOrderConfirmationAsync(order, ct);
        await _smsSender.SendOrderConfirmationAsync(order, ct);
    }
}
</code></pre>
<p>In Blazor components, you can use keyed services with the <code>[Inject]</code> attribute:</p>
<pre><code class="language-razor">@code {
    [Inject(Key = &quot;email&quot;)]
    public INotificationService? EmailService { get; set; }
}
</code></pre>
<p>A notable change in .NET 10 is that calling <code>GetKeyedService()</code> (singular) with <code>KeyedService.AnyKey</code> now throws an <code>InvalidOperationException</code>, because <code>AnyKey</code> is intended for resolving collections of services, not a single service. This is a correction that prevents ambiguous resolution bugs.</p>
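<p>Assuming the keyed registrations above, the difference looks like this (a sketch; <code>provider</code> stands for any <code>IServiceProvider</code>):</p>
<pre><code class="language-csharp">// Plural: AnyKey enumerates every keyed registration of the service.
IEnumerable&lt;INotificationService&gt; all =
    provider.GetKeyedServices&lt;INotificationService&gt;(KeyedService.AnyKey);

// Singular: on .NET 10 this throws InvalidOperationException,
// because AnyKey cannot identify a single registration.
INotificationService? one =
    provider.GetKeyedService&lt;INotificationService&gt;(KeyedService.AnyKey);
</code></pre>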
<h3 id="open-generics">Open Generics</h3>
<p>The DI container supports open generic registrations, which is a powerful way to apply DIP across an entire category of services:</p>
<pre><code class="language-csharp">// Register a generic repository for any entity type
builder.Services.AddScoped(typeof(IRepository&lt;&gt;), typeof(EfRepository&lt;&gt;));
</code></pre>
<p>Now whenever the container encounters a request for <code>IRepository&lt;Customer&gt;</code>, <code>IRepository&lt;Order&gt;</code>, or <code>IRepository&lt;Product&gt;</code>, it automatically creates the corresponding <code>EfRepository&lt;Customer&gt;</code>, <code>EfRepository&lt;Order&gt;</code>, or <code>EfRepository&lt;Product&gt;</code>. You write the interface once, the implementation once, and the container handles all the concrete generic types.</p>
<pre><code class="language-csharp">public interface IRepository&lt;T&gt; where T : class
{
    Task&lt;T?&gt; GetByIdAsync(Guid id, CancellationToken ct = default);
    Task&lt;IReadOnlyList&lt;T&gt;&gt; GetAllAsync(CancellationToken ct = default);
    Task AddAsync(T entity, CancellationToken ct = default);
    Task UpdateAsync(T entity, CancellationToken ct = default);
    Task DeleteAsync(Guid id, CancellationToken ct = default);
}

public class EfRepository&lt;T&gt; : IRepository&lt;T&gt; where T : class
{
    private readonly AppDbContext _context;

    public EfRepository(AppDbContext context)
    {
        _context = context;
    }

    public async Task&lt;T?&gt; GetByIdAsync(Guid id, CancellationToken ct = default)
        =&gt; await _context.Set&lt;T&gt;().FindAsync([id], ct);

    public async Task&lt;IReadOnlyList&lt;T&gt;&gt; GetAllAsync(CancellationToken ct = default)
        =&gt; await _context.Set&lt;T&gt;().ToListAsync(ct);

    public async Task AddAsync(T entity, CancellationToken ct = default)
    {
        await _context.Set&lt;T&gt;().AddAsync(entity, ct);
        await _context.SaveChangesAsync(ct);
    }

    public async Task UpdateAsync(T entity, CancellationToken ct = default)
    {
        _context.Set&lt;T&gt;().Update(entity);
        await _context.SaveChangesAsync(ct);
    }

    public async Task DeleteAsync(Guid id, CancellationToken ct = default)
    {
        var entity = await _context.Set&lt;T&gt;().FindAsync([id], ct);
        if (entity is not null)
        {
            _context.Set&lt;T&gt;().Remove(entity);
            await _context.SaveChangesAsync(ct);
        }
    }
}
</code></pre>
<h3 id="factory-registrations">Factory Registrations</h3>
<p>Sometimes you need more control over how a service is created. Factory registrations let you provide a delegate that constructs the service:</p>
<pre><code class="language-csharp">builder.Services.AddScoped&lt;IOrderRepository&gt;(sp =&gt;
{
    var config = sp.GetRequiredService&lt;IConfiguration&gt;();
    var connectionString = config.GetConnectionString(&quot;Orders&quot;)
        ?? throw new InvalidOperationException(&quot;Missing connection string.&quot;);

    var logger = sp.GetRequiredService&lt;ILogger&lt;NpgsqlOrderRepository&gt;&gt;();

    return new NpgsqlOrderRepository(connectionString, logger);
});
</code></pre>
<p>This is useful when the implementation's constructor requires values that are not themselves registered services (like a connection string), or when you need conditional logic to decide which implementation to create.</p>
<h2 id="part-7-dip-enables-testing-the-practical-payoff">Part 7: DIP Enables Testing — The Practical Payoff</h2>
<p>If there is one argument that convinces skeptical developers to adopt DIP, it is testability. When your high-level modules depend on abstractions, you can substitute test doubles — mocks, stubs, fakes — for the real implementations. This means you can write fast, isolated unit tests that do not require a database, a network connection, an SMTP server, or any other external infrastructure.</p>
<h3 id="testing-without-dip">Testing Without DIP</h3>
<p>Without DIP, testing the original <code>OrderProcessor</code> requires all of its infrastructure to be available:</p>
<pre><code class="language-csharp">// This is NOT a unit test. This is an integration test that requires:
// - A running SQL Server instance
// - A running SMTP server
// - Write access to C:\Logs\
// - Network connectivity
// It is slow, flaky, and expensive to maintain.
[Fact]
public void ProcessOrder_ShouldNotThrow()
{
    var processor = new OrderProcessor();
    var order = new Order
    {
        Id = Guid.NewGuid(),
        CustomerEmail = &quot;test@example.com&quot;,
        Total = 99.99m
    };

    // This will actually try to connect to a database and send an email
    processor.ProcessOrder(order);
}
</code></pre>
<p>This test will fail in CI/CD unless you have a full infrastructure stack running. It is slow because it makes real network calls. It is flaky because SMTP servers sometimes time out. It tests too many things at once — a failure could be in the business logic, the database, the email server, or the logging system.</p>
<h3 id="testing-with-dip">Testing With DIP</h3>
<p>With DIP, you substitute lightweight test doubles and test the business logic in isolation:</p>
<pre><code class="language-csharp">public class OrderProcessorTests
{
    [Fact]
    public async Task ProcessOrderAsync_ShouldSaveAndNotifyAndLog()
    {
        // Arrange
        var savedOrders = new List&lt;Order&gt;();
        var notifiedOrders = new List&lt;Order&gt;();
        var loggedOrders = new List&lt;Order&gt;();

        var mockRepository = new FakeOrderRepository(savedOrders);
        var mockNotification = new FakeNotificationService(notifiedOrders);
        var mockLogger = new FakeOrderLogger(loggedOrders);

        var processor = new OrderProcessor(
            mockRepository, mockNotification, mockLogger);

        var order = new Order
        {
            Id = Guid.NewGuid(),
            CustomerId = Guid.NewGuid(),
            CustomerEmail = &quot;test@example.com&quot;,
            Total = 99.99m,
            CreatedAt = DateTime.UtcNow
        };

        // Act
        await processor.ProcessOrderAsync(order);

        // Assert
        Assert.Single(savedOrders);
        Assert.Equal(order.Id, savedOrders[0].Id);

        Assert.Single(notifiedOrders);
        Assert.Equal(order.Id, notifiedOrders[0].Id);

        Assert.Single(loggedOrders);
        Assert.Equal(order.Id, loggedOrders[0].Id);
    }

    [Fact]
    public async Task ProcessOrderAsync_WhenSaveFails_ShouldLogAndRethrow()
    {
        // Arrange
        var failingRepository = new FailingOrderRepository();
        var mockNotification = new FakeNotificationService(new List&lt;Order&gt;());
        var loggedFailures = new List&lt;(Order, Exception)&gt;();
        var mockLogger = new FakeOrderLogger(failedOrders: loggedFailures);

        var processor = new OrderProcessor(
            failingRepository, mockNotification, mockLogger);

        var order = new Order
        {
            Id = Guid.NewGuid(),
            CustomerId = Guid.NewGuid(),
            CustomerEmail = &quot;test@example.com&quot;,
            Total = 50.00m,
            CreatedAt = DateTime.UtcNow
        };

        // Act &amp; Assert
        await Assert.ThrowsAsync&lt;InvalidOperationException&gt;(
            () =&gt; processor.ProcessOrderAsync(order));

        Assert.Single(loggedFailures);
        Assert.Equal(order.Id, loggedFailures[0].Item1.Id);
    }
}
</code></pre>
<p>Here are the simple fakes used in those tests:</p>
<pre><code class="language-csharp">public class FakeOrderRepository : IOrderRepository
{
    private readonly List&lt;Order&gt; _savedOrders;

    public FakeOrderRepository(List&lt;Order&gt; savedOrders)
    {
        _savedOrders = savedOrders;
    }

    public Task SaveAsync(Order order, CancellationToken ct = default)
    {
        _savedOrders.Add(order);
        return Task.CompletedTask;
    }

    public Task&lt;Order?&gt; GetByIdAsync(Guid id, CancellationToken ct = default)
        =&gt; Task.FromResult(_savedOrders.FirstOrDefault(o =&gt; o.Id == id));
}

public class FailingOrderRepository : IOrderRepository
{
    public Task SaveAsync(Order order, CancellationToken ct = default)
        =&gt; throw new InvalidOperationException(&quot;Database is unavailable.&quot;);

    public Task&lt;Order?&gt; GetByIdAsync(Guid id, CancellationToken ct = default)
        =&gt; throw new InvalidOperationException(&quot;Database is unavailable.&quot;);
}

public class FakeNotificationService : INotificationService
{
    private readonly List&lt;Order&gt; _notifiedOrders;

    public FakeNotificationService(List&lt;Order&gt; notifiedOrders)
    {
        _notifiedOrders = notifiedOrders;
    }

    public Task SendOrderConfirmationAsync(
        Order order, CancellationToken ct = default)
    {
        _notifiedOrders.Add(order);
        return Task.CompletedTask;
    }
}

public class FakeOrderLogger : IOrderLogger
{
    private readonly List&lt;Order&gt;? _processedOrders;
    private readonly List&lt;(Order, Exception)&gt;? _failedOrders;

    public FakeOrderLogger(
        List&lt;Order&gt;? processedOrders = null,
        List&lt;(Order, Exception)&gt;? failedOrders = null)
    {
        _processedOrders = processedOrders;
        _failedOrders = failedOrders;
    }

    public void LogOrderProcessed(Order order)
        =&gt; _processedOrders?.Add(order);

    public void LogOrderFailed(Order order, Exception exception)
        =&gt; _failedOrders?.Add((order, exception));
}
</code></pre>
<p>These tests run in milliseconds. They require no infrastructure. They fail only when the business logic is wrong, not when the database is down. They can run in CI/CD, on a developer's laptop, on a plane without internet. This is the practical payoff of DIP.</p>
<h3 id="using-mocking-libraries">Using Mocking Libraries</h3>
<p>Hand-written fakes are simple and transparent, but for larger codebases, mocking libraries reduce boilerplate. Here is the same test using NSubstitute (a popular, free .NET mocking library):</p>
<pre><code class="language-csharp">using NSubstitute;

public class OrderProcessorNSubstituteTests
{
    [Fact]
    public async Task ProcessOrderAsync_ShouldCallAllDependencies()
    {
        // Arrange
        var repository = Substitute.For&lt;IOrderRepository&gt;();
        var notification = Substitute.For&lt;INotificationService&gt;();
        var logger = Substitute.For&lt;IOrderLogger&gt;();

        var processor = new OrderProcessor(repository, notification, logger);

        var order = new Order
        {
            Id = Guid.NewGuid(),
            CustomerId = Guid.NewGuid(),
            CustomerEmail = &quot;test@example.com&quot;,
            Total = 75.00m,
            CreatedAt = DateTime.UtcNow
        };

        // Act
        await processor.ProcessOrderAsync(order);

        // Assert
        await repository.Received(1).SaveAsync(order, Arg.Any&lt;CancellationToken&gt;());
        await notification.Received(1)
            .SendOrderConfirmationAsync(order, Arg.Any&lt;CancellationToken&gt;());
        logger.Received(1).LogOrderProcessed(order);
    }
}
</code></pre>
<p>NSubstitute creates a proxy object that implements the interface and records all calls made to it. The <code>Received(1)</code> assertion verifies that each method was called exactly once. This works because <code>OrderProcessor</code> depends on interfaces, not on concrete classes. Without DIP, NSubstitute (or Moq, or FakeItEasy, or any other mocking library) has nothing clean to proxy: these libraries can substitute a concrete class only when its members are virtual, and they cannot intercept sealed, static, or non-virtual members at all.</p>
<h2 id="part-8-architectural-patterns-that-rely-on-dip">Part 8: Architectural Patterns That Rely on DIP</h2>
<p>DIP is not just a class-level concern. Several well-known architectural patterns are built on DIP as a foundation.</p>
<h3 id="clean-architecture">Clean Architecture</h3>
<p>Robert C. Martin's Clean Architecture (described in his 2017 book of the same name) is, at its core, an application of DIP at the architectural level. The architecture is organized in concentric rings:</p>
<ol>
<li><strong>Entities</strong> (innermost) — enterprise-wide business rules.</li>
<li><strong>Use Cases</strong> — application-specific business rules.</li>
<li><strong>Interface Adapters</strong> — controllers, presenters, gateways.</li>
<li><strong>Frameworks and Drivers</strong> (outermost) — the web framework, the database, the UI.</li>
</ol>
<p>The &quot;Dependency Rule&quot; of Clean Architecture states that dependencies can only point inward. The inner rings know nothing about the outer rings. The use case layer defines the repository interface; the infrastructure layer implements it. This is DIP applied at the package and project level.</p>
<p>In a .NET solution, this typically looks like:</p>
<pre><code>MyApp.Domain/           (entities, value objects, domain events)
MyApp.Application/      (use cases, interfaces like IOrderRepository)
MyApp.Infrastructure/   (EF Core DbContext, email service, file system)
MyApp.Web/              (ASP.NET Core controllers, Blazor pages, Program.cs)
</code></pre>
<p><code>MyApp.Application</code> has a project reference to <code>MyApp.Domain</code> (inward). <code>MyApp.Infrastructure</code> has project references to both <code>MyApp.Application</code> and <code>MyApp.Domain</code> (inward). <code>MyApp.Web</code> references everything and is responsible for wiring up the DI container. The dependency arrows always point inward, toward the domain.</p>
<h3 id="hexagonal-architecture-ports-and-adapters">Hexagonal Architecture (Ports and Adapters)</h3>
<p>Alistair Cockburn's Hexagonal Architecture (2005) predates Clean Architecture and expresses a very similar idea using different terminology. The &quot;ports&quot; are the interfaces (abstractions) that the core application defines. The &quot;adapters&quot; are the concrete implementations that connect the core to the outside world — a database adapter, an HTTP adapter, a messaging adapter. The core depends only on the ports. The adapters depend on the ports and implement them.</p>
<p>In DIP terms: the ports are the abstractions that the high-level module (the core application) defines. The adapters are the low-level modules (the infrastructure) that implement those abstractions.</p>
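<p>A minimal sketch of the vocabulary (the names <code>IPaymentPort</code> and <code>StripePaymentAdapter</code> are hypothetical, chosen only to illustrate the pattern):</p>
<pre><code class="language-csharp">// Core application: defines the port. This interface lives in the core project.
public interface IPaymentPort
{
    Task&lt;bool&gt; ChargeAsync(Guid orderId, decimal amount, CancellationToken ct = default);
}

// Infrastructure: an adapter implements the port and lives outside the core.
public sealed class StripePaymentAdapter : IPaymentPort
{
    private readonly HttpClient _http;

    public StripePaymentAdapter(HttpClient http) =&gt; _http = http;

    public async Task&lt;bool&gt; ChargeAsync(
        Guid orderId, decimal amount, CancellationToken ct = default)
    {
        // Translate the core's request into the external provider's protocol
        var response = await _http.PostAsJsonAsync(
            &quot;/v1/charges&quot;, new { orderId, amount }, ct);
        return response.IsSuccessStatusCode;
    }
}
</code></pre>
<p>The core compiles against <code>IPaymentPort</code> alone; swapping Stripe for another processor means writing a new adapter, not touching the core.</p>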
<h3 id="the-strategy-pattern">The Strategy Pattern</h3>
<p>The Strategy pattern from the Gang of Four is perhaps the simplest manifestation of DIP. A class delegates part of its behavior to an interchangeable strategy object, accessed through an interface:</p>
<pre><code class="language-csharp">public interface IDiscountStrategy
{
    decimal CalculateDiscount(Order order);
}

public class NoDiscount : IDiscountStrategy
{
    public decimal CalculateDiscount(Order order) =&gt; 0m;
}

public class PercentageDiscount : IDiscountStrategy
{
    private readonly decimal _percentage;

    public PercentageDiscount(decimal percentage)
    {
        _percentage = percentage;
    }

    public decimal CalculateDiscount(Order order)
        =&gt; order.Total * _percentage / 100m;
}

public class LoyaltyDiscount : IDiscountStrategy
{
    private readonly ICustomerRepository _customerRepository;

    public LoyaltyDiscount(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public decimal CalculateDiscount(Order order)
    {
        var customer = _customerRepository.GetById(order.CustomerId);
        if (customer is null) return 0m;

        return customer.OrderCount switch
        {
            &gt;= 100 =&gt; order.Total * 0.15m,
            &gt;= 50 =&gt; order.Total * 0.10m,
            &gt;= 10 =&gt; order.Total * 0.05m,
            _ =&gt; 0m
        };
    }
}

public class OrderPricingService
{
    private readonly IDiscountStrategy _discountStrategy;

    public OrderPricingService(IDiscountStrategy discountStrategy)
    {
        _discountStrategy = discountStrategy;
    }

    public decimal CalculateFinalPrice(Order order)
    {
        var discount = _discountStrategy.CalculateDiscount(order);
        return order.Total - discount;
    }
}
</code></pre>
<p>The <code>OrderPricingService</code> (high-level) depends on <code>IDiscountStrategy</code> (abstraction), not on any concrete discount implementation (detail). You can swap discount strategies without modifying the pricing service. You can test the pricing service with a mock discount strategy. You can add new discount strategies without touching any existing code. That is DIP, OCP, and LSP all working together.</p>
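<p>Wiring this up is ordinary DI registration — the strategy is chosen once at the composition root (the registration shown is one possible choice, not the only one):</p>
<pre><code class="language-csharp">// Composition root: pick the active strategy
builder.Services.AddScoped&lt;IDiscountStrategy, LoyaltyDiscount&gt;();
builder.Services.AddScoped&lt;OrderPricingService&gt;();

// Or construct directly, e.g. in a test:
var pricing = new OrderPricingService(new PercentageDiscount(10m));
var finalPrice = pricing.CalculateFinalPrice(
    new Order { Total = 200m }); // 200 - 20 discount = 180
</code></pre>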
<h3 id="the-repository-pattern">The Repository Pattern</h3>
<p>The Repository pattern, popularized by Martin Fowler's &quot;Patterns of Enterprise Application Architecture&quot; (2002) and widely used in .NET, is another direct application of DIP:</p>
<pre><code class="language-csharp">public interface IProductRepository
{
    Task&lt;Product?&gt; GetByIdAsync(int id, CancellationToken ct = default);
    Task&lt;IReadOnlyList&lt;Product&gt;&gt; SearchAsync(
        string query, CancellationToken ct = default);
    Task AddAsync(Product product, CancellationToken ct = default);
    Task UpdateAsync(Product product, CancellationToken ct = default);
}
</code></pre>
<p>Your business logic depends on <code>IProductRepository</code>. Whether the implementation uses Entity Framework Core with SQL Server, Dapper with PostgreSQL, an in-memory list for testing, or a REST API call to a microservice — the business logic does not know and does not care. The abstraction (the interface) lives in the domain or application layer. The implementation (the concrete class) lives in the infrastructure layer. Dependencies point inward.</p>
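<p>The &quot;in-memory list for testing&quot; variant is only a few lines. A sketch of what such an implementation might look like (it assumes <code>Product</code> exposes <code>Id</code> and <code>Name</code> properties):</p>
<pre><code class="language-csharp">public sealed class InMemoryProductRepository : IProductRepository
{
    private readonly Dictionary&lt;int, Product&gt; _products = new();

    public Task&lt;Product?&gt; GetByIdAsync(int id, CancellationToken ct = default)
        =&gt; Task.FromResult(_products.GetValueOrDefault(id));

    public Task&lt;IReadOnlyList&lt;Product&gt;&gt; SearchAsync(
        string query, CancellationToken ct = default)
        =&gt; Task.FromResult&lt;IReadOnlyList&lt;Product&gt;&gt;(
            _products.Values
                .Where(p =&gt; p.Name.Contains(query, StringComparison.OrdinalIgnoreCase))
                .ToList());

    public Task AddAsync(Product product, CancellationToken ct = default)
    {
        _products[product.Id] = product;
        return Task.CompletedTask;
    }

    public Task UpdateAsync(Product product, CancellationToken ct = default)
    {
        _products[product.Id] = product;
        return Task.CompletedTask;
    }
}
</code></pre>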
<h2 id="part-9-common-pitfalls-and-anti-patterns">Part 9: Common Pitfalls and Anti-Patterns</h2>
<p>DIP is widely taught but frequently misapplied. Here are the most common mistakes, with explanations of why they are mistakes and how to fix them.</p>
<h3 id="pitfall-1-interface-per-class-the-ifoo-for-every-foo-problem">Pitfall 1: Interface Per Class — The &quot;IFoo for Every Foo&quot; Problem</h3>
<p>Some developers learn that DIP means &quot;always program against interfaces&quot; and conclude that every single class needs a corresponding interface. The result is a codebase littered with interfaces like <code>IUserService</code>, <code>IUserServiceImpl</code>, <code>IOrderHelper</code>, <code>IOrderHelperImpl</code> — where each interface has exactly one implementation that will never be swapped out.</p>
<p>This is cargo cult programming. DIP says to depend on abstractions <em>when the dependency direction matters</em>. If a class is a simple data-transfer object, a value object, or a utility with no side effects, wrapping it in an interface adds ceremony without benefit.</p>
<p>The guideline: introduce an interface when at least one of these is true:</p>
<ul>
<li>The dependency crosses an architectural boundary (e.g., between your application layer and your infrastructure layer).</li>
<li>You need to substitute the dependency in tests (typically because it has side effects like I/O, network calls, or database access).</li>
<li>You realistically expect multiple implementations (different database backends, different notification channels, different caching strategies).</li>
<li>The dependency is expensive or slow and you need to mock it for fast unit tests.</li>
</ul>
<p>If none of these apply, it is perfectly fine for one class to depend on another class directly. DIP is about managing the dependencies that matter, not about wrapping everything in interfaces as a ritual.</p>
<h3 id="pitfall-2-leaky-abstractions">Pitfall 2: Leaky Abstractions</h3>
<p>An abstraction that reveals implementation details defeats the purpose of DIP. We saw an example earlier with <code>GetConnection()</code> on a repository interface. Here are more subtle examples:</p>
<pre><code class="language-csharp">// Bad: The interface knows about Entity Framework
public interface IProductRepository
{
    IQueryable&lt;Product&gt; GetQueryable(); // Leaks EF's IQueryable
    Task SaveChangesAsync(); // Leaks EF's unit-of-work pattern
}

// Bad: The interface knows about HTTP
public interface IWeatherService
{
    Task&lt;HttpResponseMessage&gt; GetForecastAsync(string city);
    // Returns HttpResponseMessage — what if we switch to gRPC?
}

// Good: The interface speaks domain language
public interface IProductRepository
{
    Task&lt;IReadOnlyList&lt;Product&gt;&gt; SearchAsync(
        string query, int page, int pageSize, CancellationToken ct = default);
    Task&lt;Product?&gt; GetByIdAsync(int id, CancellationToken ct = default);
}

// Good: The interface returns domain objects
public interface IWeatherService
{
    Task&lt;WeatherForecast?&gt; GetForecastAsync(
        string city, CancellationToken ct = default);
}
</code></pre>
<p>The test for a clean abstraction: could you implement this interface with a completely different technology without changing any consumer code? If <code>IProductRepository</code> returns <code>IQueryable&lt;Product&gt;</code>, consumers will write LINQ queries that only work with Entity Framework. If <code>IWeatherService</code> returns <code>HttpResponseMessage</code>, consumers must parse HTTP. The abstraction has been contaminated by the detail.</p>
<h3 id="pitfall-3-constructor-over-injection">Pitfall 3: Constructor Over-Injection</h3>
<p>When a class accepts seven or eight dependencies through its constructor, it is often a sign that the class has too many responsibilities — a Single Responsibility Principle violation, not a DIP problem. But the symptom appears at the DIP boundary (the constructor).</p>
<pre><code class="language-csharp">// This class probably does too much
public class OrderService(
    IOrderRepository orderRepository,
    ICustomerRepository customerRepository,
    IInventoryService inventoryService,
    IPaymentGateway paymentGateway,
    INotificationService notificationService,
    IDiscountService discountService,
    ITaxCalculator taxCalculator,
    IShippingService shippingService,
    IAuditLogger auditLogger)
{
    // ...
}
</code></pre>
<p>The fix is not to reduce the number of interfaces. The fix is to decompose the class into smaller, focused classes, each with two or three dependencies. Perhaps <code>OrderService</code> delegates pricing to a <code>PricingService</code> (which takes <code>IDiscountService</code> and <code>ITaxCalculator</code>), fulfillment to a <code>FulfillmentService</code> (which takes <code>IInventoryService</code> and <code>IShippingService</code>), and notification to the <code>INotificationService</code> directly.</p>
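<p>A sketch of that decomposition, using the same primary-constructor syntax as the bad example (the class names are illustrative):</p>
<pre><code class="language-csharp">// Each focused service takes only the two or three dependencies it needs
public class PricingService(
    IDiscountService discountService,
    ITaxCalculator taxCalculator)
{
    // Pricing logic only...
}

public class FulfillmentService(
    IInventoryService inventoryService,
    IShippingService shippingService)
{
    // Inventory and shipping logic only...
}

// The coordinator shrinks to a manageable set of collaborators
public class OrderService(
    IOrderRepository orderRepository,
    PricingService pricingService,
    FulfillmentService fulfillmentService,
    INotificationService notificationService)
{
    // Orchestration only...
}
</code></pre>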
<h3 id="pitfall-4-the-service-locator-anti-pattern">Pitfall 4: The Service Locator Anti-Pattern</h3>
<p>The Service Locator pattern uses a central registry to resolve dependencies at runtime. Instead of receiving dependencies through the constructor, a class asks the service locator for what it needs:</p>
<pre><code class="language-csharp">// Anti-pattern: Service Locator
public class OrderProcessor
{
    public async Task ProcessOrderAsync(Order order)
    {
        // Asking for dependencies at runtime
        var repository = ServiceLocator.Get&lt;IOrderRepository&gt;();
        var notification = ServiceLocator.Get&lt;INotificationService&gt;();

        await repository.SaveAsync(order);
        await notification.SendOrderConfirmationAsync(order);
    }
}
</code></pre>
<p>This superficially follows DIP — the class depends on interfaces, not concrete types. But it violates the spirit of DIP in several important ways:</p>
<ul>
<li><strong>Hidden dependencies.</strong> You cannot tell what <code>OrderProcessor</code> needs by looking at its constructor. The dependencies are buried in the method bodies. A developer must read every line of code to understand what the class depends on.</li>
<li><strong>Untestable without infrastructure.</strong> To test <code>OrderProcessor</code>, you must set up a <code>ServiceLocator</code> with the right registrations. This is more complex and fragile than simple constructor injection.</li>
<li><strong>Tight coupling to the locator.</strong> The class depends on <code>ServiceLocator</code>, which is itself a concrete implementation detail. You have replaced concrete dependencies with a single, global concrete dependency.</li>
</ul>
<p>The fix is straightforward: use constructor injection instead. Let the DI container do the locating. Your classes should receive their dependencies, not go looking for them.</p>
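<p>The same class with constructor injection — the dependencies become visible at a glance:</p>
<pre><code class="language-csharp">public class OrderProcessor
{
    private readonly IOrderRepository _repository;
    private readonly INotificationService _notification;

    // Dependencies arrive through the constructor, supplied by the container
    public OrderProcessor(
        IOrderRepository repository, INotificationService notification)
    {
        _repository = repository;
        _notification = notification;
    }

    public async Task ProcessOrderAsync(Order order)
    {
        await _repository.SaveAsync(order);
        await _notification.SendOrderConfirmationAsync(order);
    }
}
</code></pre>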
<h3 id="pitfall-5-applying-dip-where-it-does-not-belong">Pitfall 5: Applying DIP Where It Does Not Belong</h3>
<p>Not every dependency needs to be inverted. Consider:</p>
<pre><code class="language-csharp">public class FullName
{
    public string First { get; }
    public string Last { get; }

    public FullName(string first, string last)
    {
        First = first;
        Last = last;
    }

    public override string ToString() =&gt; $&quot;{First} {Last}&quot;;
}
</code></pre>
<p>Should <code>FullName</code> have an <code>IFullName</code> interface? No. It is a value object with no side effects, no I/O, no external dependencies. It is trivially testable as-is. Wrapping it in an interface would add complexity for zero benefit.</p>
<p>Similarly, <code>System.Math</code>, <code>System.Guid</code>, <code>string</code> manipulation methods, and pure computation functions generally do not need abstraction. The exception is anything that is difficult to control in tests — like <code>DateTime.UtcNow</code>, which is exactly why .NET 8 introduced the <code>TimeProvider</code> abstract class.</p>
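<p>Time is the classic case. A sketch using <code>TimeProvider</code> — the <code>SubscriptionService</code> name is hypothetical, and <code>FakeTimeProvider</code> ships in the <code>Microsoft.Extensions.Time.Testing</code> package:</p>
<pre><code class="language-csharp">public class SubscriptionService
{
    private readonly TimeProvider _timeProvider;

    public SubscriptionService(TimeProvider timeProvider)
        =&gt; _timeProvider = timeProvider;

    public bool IsExpired(DateTimeOffset expiresAt)
        =&gt; _timeProvider.GetUtcNow() &gt; expiresAt;
}

// Production: builder.Services.AddSingleton(TimeProvider.System);

// Test: control the clock explicitly
var fakeTime = new FakeTimeProvider(
    new DateTimeOffset(2026, 1, 1, 0, 0, 0, TimeSpan.Zero));
var service = new SubscriptionService(fakeTime);
var deadline = new DateTimeOffset(2026, 6, 1, 0, 0, 0, TimeSpan.Zero);

Assert.False(service.IsExpired(deadline));

fakeTime.Advance(TimeSpan.FromDays(200)); // now past June 1
Assert.True(service.IsExpired(deadline));
</code></pre>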
<h3 id="pitfall-6-ignoring-the-ownership-question">Pitfall 6: Ignoring the Ownership Question</h3>
<p>DIP says that both high-level and low-level modules should depend on abstractions. But who <em>owns</em> the abstraction?</p>
<p>If the low-level module defines the interface, you have not actually achieved inversion. You have just added an interface that still lives in the infrastructure layer. The high-level module still has a project reference to the infrastructure project. If you swap the infrastructure, you must change the high-level project's references.</p>
<p>The correct ownership: the interface lives with the code that <em>uses</em> it (the high-level module), not the code that <em>implements</em> it (the low-level module). In a Clean Architecture solution:</p>
<pre><code>MyApp.Application/
    Interfaces/
        IOrderRepository.cs     &lt;-- The interface lives HERE
        INotificationService.cs

MyApp.Infrastructure/
    Repositories/
        EfOrderRepository.cs    &lt;-- The implementation lives HERE
    Services/
        SmtpNotificationService.cs
</code></pre>
<p><code>MyApp.Infrastructure</code> has a project reference to <code>MyApp.Application</code> so it can implement the interfaces. <code>MyApp.Application</code> has no reference to <code>MyApp.Infrastructure</code>. The dependency arrow points inward. This is the inversion.</p>
<h2 id="part-10-dip-in-real-world.net-applications-beyond-the-textbook">Part 10: DIP in Real-World .NET Applications — Beyond the Textbook</h2>
<h3 id="example-1-swapping-database-providers">Example 1: Swapping Database Providers</h3>
<p>One of the most powerful demonstrations of DIP is swapping database providers without changing business logic. Imagine you started with SQL Server and need to migrate to PostgreSQL:</p>
<pre><code class="language-csharp">// Application layer: the interface (unchanged)
public interface IOrderRepository
{
    Task&lt;Order?&gt; GetByIdAsync(Guid id, CancellationToken ct = default);
    Task&lt;IReadOnlyList&lt;Order&gt;&gt; GetRecentAsync(int count, CancellationToken ct = default);
    Task SaveAsync(Order order, CancellationToken ct = default);
}

// Infrastructure layer: SQL Server implementation
public sealed class SqlServerOrderRepository : IOrderRepository
{
    private readonly string _connectionString;

    public SqlServerOrderRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task&lt;Order?&gt; GetByIdAsync(Guid id, CancellationToken ct = default)
    {
        await using var conn = new SqlConnection(_connectionString);
        return await conn.QueryFirstOrDefaultAsync&lt;Order&gt;(
            &quot;SELECT * FROM Orders WHERE Id = @Id&quot;, new { Id = id });
    }

    public async Task&lt;IReadOnlyList&lt;Order&gt;&gt; GetRecentAsync(
        int count, CancellationToken ct = default)
    {
        await using var conn = new SqlConnection(_connectionString);
        var results = await conn.QueryAsync&lt;Order&gt;(
            &quot;SELECT TOP (@Count) * FROM Orders ORDER BY CreatedAt DESC&quot;,
            new { Count = count });
        return results.ToList();
    }

    public async Task SaveAsync(Order order, CancellationToken ct = default)
    {
        await using var conn = new SqlConnection(_connectionString);
        await conn.ExecuteAsync(
            &quot;INSERT INTO Orders (Id, CustomerId, Total, CreatedAt) &quot; +
            &quot;VALUES (@Id, @CustomerId, @Total, @CreatedAt)&quot;, order);
    }
}

// Infrastructure layer: PostgreSQL implementation (new)
public sealed class NpgsqlOrderRepository : IOrderRepository
{
    private readonly string _connectionString;

    public NpgsqlOrderRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task&lt;Order?&gt; GetByIdAsync(Guid id, CancellationToken ct = default)
    {
        await using var conn = new NpgsqlConnection(_connectionString);
        return await conn.QueryFirstOrDefaultAsync&lt;Order&gt;(
            &quot;SELECT * FROM orders WHERE id = @Id&quot;, new { Id = id });
    }

    public async Task&lt;IReadOnlyList&lt;Order&gt;&gt; GetRecentAsync(
        int count, CancellationToken ct = default)
    {
        await using var conn = new NpgsqlConnection(_connectionString);
        var results = await conn.QueryAsync&lt;Order&gt;(
            &quot;SELECT * FROM orders ORDER BY created_at DESC LIMIT @Count&quot;,
            new { Count = count });
        return results.ToList();
    }

    public async Task SaveAsync(Order order, CancellationToken ct = default)
    {
        await using var conn = new NpgsqlConnection(_connectionString);
        await conn.ExecuteAsync(
            &quot;INSERT INTO orders (id, customer_id, total, created_at) &quot; +
            &quot;VALUES (@Id, @CustomerId, @Total, @CreatedAt)&quot;, order);
    }
}
</code></pre>
<p>The migration happens entirely in the infrastructure layer and the DI registration:</p>
<pre><code class="language-csharp">// Before: SQL Server
builder.Services.AddScoped&lt;IOrderRepository&gt;(sp =&gt;
    new SqlServerOrderRepository(
        builder.Configuration.GetConnectionString(&quot;Orders&quot;)!));

// After: PostgreSQL
builder.Services.AddScoped&lt;IOrderRepository&gt;(sp =&gt;
    new NpgsqlOrderRepository(
        builder.Configuration.GetConnectionString(&quot;Orders&quot;)!));
</code></pre>
<p>One line changes in <code>Program.cs</code>. Zero lines change in the application layer. Zero lines change in the domain layer. Zero tests break (assuming the PostgreSQL implementation passes the same integration test suite as the SQL Server one). This is the promise of DIP fulfilled.</p>
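<p>That &quot;same integration test suite&quot; assumption can be made explicit with a contract test: an abstract test class that every provider implementation must pass. A common xUnit sketch (the class names and the <code>TestConfig</code> helper are illustrative):</p>
<pre><code class="language-csharp">public abstract class OrderRepositoryContractTests
{
    // Each provider's test class supplies its own repository instance
    protected abstract IOrderRepository CreateRepository();

    [Fact]
    public async Task SaveAsync_ThenGetByIdAsync_RoundTrips()
    {
        var repository = CreateRepository();
        var order = new Order
        {
            Id = Guid.NewGuid(),
            CustomerId = Guid.NewGuid(),
            Total = 10m,
            CreatedAt = DateTime.UtcNow
        };

        await repository.SaveAsync(order);
        var loaded = await repository.GetByIdAsync(order.Id);

        Assert.NotNull(loaded);
        Assert.Equal(order.Id, loaded!.Id);
    }
}

public class SqlServerOrderRepositoryTests : OrderRepositoryContractTests
{
    protected override IOrderRepository CreateRepository()
        =&gt; new SqlServerOrderRepository(TestConfig.SqlServerConnectionString);
}

public class NpgsqlOrderRepositoryTests : OrderRepositoryContractTests
{
    protected override IOrderRepository CreateRepository()
        =&gt; new NpgsqlOrderRepository(TestConfig.PostgresConnectionString);
}
</code></pre>
<p>If both concrete test classes pass, the swap is safe by construction, not by hope.</p>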
<h3 id="example-2-feature-flags-and-branch-by-abstraction">Example 2: Feature Flags and Branch by Abstraction</h3>
<p>DIP enables branch by abstraction, a technique for making large-scale changes to a codebase without long-lived branches. You define an interface for the behavior you want to change, implement both the old and new versions behind it, and use a feature flag to switch between them at runtime:</p>
<pre><code class="language-csharp">public interface IPricingEngine
{
    decimal CalculatePrice(Product product, Customer customer);
}

public class LegacyPricingEngine : IPricingEngine
{
    public decimal CalculatePrice(Product product, Customer customer)
    {
        // The old pricing logic
        return product.BasePrice * 1.08m; // Simple 8% markup
    }
}

public class NewPricingEngine : IPricingEngine
{
    private readonly IDiscountStrategy _discountStrategy;

    public NewPricingEngine(IDiscountStrategy discountStrategy)
    {
        _discountStrategy = discountStrategy;
    }

    public decimal CalculatePrice(Product product, Customer customer)
    {
        // The new, more sophisticated pricing logic
        var basePrice = product.BasePrice;
        var discount = _discountStrategy.CalculateDiscount(
            new Order { Total = basePrice, CustomerId = customer.Id });
        var markup = customer.Tier switch
        {
            CustomerTier.Wholesale =&gt; 1.03m,
            CustomerTier.Retail =&gt; 1.08m,
            CustomerTier.Premium =&gt; 1.05m,
            _ =&gt; 1.10m
        };
        return (basePrice - discount) * markup;
    }
}

// In Program.cs: use a feature flag to choose the implementation
builder.Services.AddScoped&lt;IPricingEngine&gt;(sp =&gt;
{
    var featureFlags = sp.GetRequiredService&lt;IOptions&lt;FeatureFlags&gt;&gt;().Value;
    if (featureFlags.UseNewPricingEngine)
    {
        var discountStrategy = sp.GetRequiredService&lt;IDiscountStrategy&gt;();
        return new NewPricingEngine(discountStrategy);
    }

    return new LegacyPricingEngine();
});
</code></pre>
<p>You can deploy the new pricing engine to production behind a disabled feature flag, enable it for 1% of traffic, monitor the results, ramp up gradually, and roll back instantly if anything goes wrong. All of this is possible because the consuming code depends on <code>IPricingEngine</code>, not on either concrete implementation. Without DIP, you would be doing code surgery in the consuming classes to switch between pricing strategies.</p>
<h3 id="example-3-resilient-multi-provider-services">Example 3: Resilient Multi-Provider Services</h3>
<p>DIP makes it natural to build resilience patterns where you fail over from one implementation to another:</p>
<pre><code class="language-csharp">public sealed class ResilientNotificationService : INotificationService
{
    private readonly INotificationService _primary;
    private readonly INotificationService _fallback;
    private readonly ILogger&lt;ResilientNotificationService&gt; _logger;

    public ResilientNotificationService(
        [FromKeyedServices(&quot;email&quot;)] INotificationService primary,
        [FromKeyedServices(&quot;sms&quot;)] INotificationService fallback,
        ILogger&lt;ResilientNotificationService&gt; logger)
    {
        _primary = primary;
        _fallback = fallback;
        _logger = logger;
    }

    public async Task SendOrderConfirmationAsync(
        Order order, CancellationToken ct = default)
    {
        try
        {
            await _primary.SendOrderConfirmationAsync(order, ct);
        }
        catch (Exception ex)
        {
            _logger.LogWarning(ex,
                &quot;Primary notification failed for order {OrderId}, &quot; +
                &quot;falling back to secondary&quot;, order.Id);

            await _fallback.SendOrderConfirmationAsync(order, ct);
        }
    }
}
</code></pre>
<p>The <code>ResilientNotificationService</code> is itself an <code>INotificationService</code>. It is a decorator — a pattern that relies entirely on DIP. The consuming code sees <code>INotificationService</code> and knows nothing about the resilience logic. You could stack decorators: add retry logic, add circuit breaking, add telemetry — all as decorators that implement the same interface.</p>
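<p>To make the stacking concrete, here is a minimal sketch of a retry decorator composed around the same kind of interface. The <code>Order</code> record, the retry policy, and the flaky fake are all illustrative stand-ins, not part of the article's codebase:</p>
```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Demo: wrap a flaky service in a retry decorator; it succeeds on the third try.
var flaky = new FlakyNotificationService();
INotificationService service = new RetryingNotificationService(flaky, maxAttempts: 3);
await service.SendOrderConfirmationAsync(new Order(42));
Console.WriteLine($"Attempts: {flaky.Calls}"); // prints "Attempts: 3"

public record Order(int Id);

public interface INotificationService
{
    Task SendOrderConfirmationAsync(Order order, CancellationToken ct = default);
}

// A retry decorator: same interface in, same interface out, stackable with others.
public sealed class RetryingNotificationService : INotificationService
{
    private readonly INotificationService _inner;
    private readonly int _maxAttempts;

    public RetryingNotificationService(INotificationService inner, int maxAttempts = 3)
    {
        _inner = inner;
        _maxAttempts = maxAttempts;
    }

    public async Task SendOrderConfirmationAsync(Order order, CancellationToken ct = default)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                await _inner.SendOrderConfirmationAsync(order, ct);
                return;
            }
            catch (Exception) when (attempt < _maxAttempts)
            {
                // Transient failure: loop around and try again.
            }
        }
    }
}

// A fake that fails twice, then succeeds: handy for exercising the decorator.
public sealed class FlakyNotificationService : INotificationService
{
    public int Calls { get; private set; }

    public Task SendOrderConfirmationAsync(Order order, CancellationToken ct = default)
    {
        Calls++;
        if (Calls < 3) throw new InvalidOperationException("transient failure");
        return Task.CompletedTask;
    }
}
```
<p>Because every layer implements the same interface, a retry decorator like this could wrap a fallback decorator (or vice versa) without either knowing the other exists.</p>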
<h2 id="part-11-dip-and-the-other-solid-principles">Part 11: DIP and the Other SOLID Principles</h2>
<p>DIP does not exist in isolation. It works in concert with the other four SOLID principles, and understanding these relationships deepens your understanding of all five.</p>
<h3 id="single-responsibility-principle-srp">Single Responsibility Principle (SRP)</h3>
<p>SRP says a class should have one reason to change. DIP enforces this by making dependencies explicit. When you see a constructor with eight interface parameters, it is a signal that the class may have too many responsibilities. DIP does not cause this problem, but it makes it visible, which is the first step toward fixing it.</p>
<h3 id="open-closed-principle-ocp">Open-Closed Principle (OCP)</h3>
<p>OCP says a module should be open for extension but closed for modification. DIP makes this possible. If your <code>OrderProcessor</code> depends on <code>INotificationService</code>, you can extend it to support push notifications by creating a new <code>PushNotificationService</code> class and registering it — without modifying <code>OrderProcessor</code>. The class is open for extension (new notification channels) and closed for modification (existing code does not change).</p>
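<p>As a minimal sketch, with hypothetical types standing in for the article's real ones, the extension looks like this: a new channel is a new class, and the processor is untouched:</p>
```csharp
using System;
using System.Collections.Generic;

// Demo: the processor is closed for modification but open for extension.
var log = new List<string>();

var withEmail = new OrderProcessor(new EmailNotificationService(log));
withEmail.Process(orderId: 7);

var withPush = new OrderProcessor(new PushNotificationService(log));
withPush.Process(orderId: 7);

Console.WriteLine(string.Join(", ", log)); // prints "email:7, push:7"

public interface INotificationService
{
    void Notify(int orderId);
}

public sealed class EmailNotificationService : INotificationService
{
    private readonly List<string> _log;
    public EmailNotificationService(List<string> log) => _log = log;
    public void Notify(int orderId) => _log.Add($"email:{orderId}");
}

// The extension: a brand-new channel added without editing OrderProcessor.
public sealed class PushNotificationService : INotificationService
{
    private readonly List<string> _log;
    public PushNotificationService(List<string> log) => _log = log;
    public void Notify(int orderId) => _log.Add($"push:{orderId}");
}

public sealed class OrderProcessor
{
    private readonly INotificationService _notifications;
    public OrderProcessor(INotificationService notifications) => _notifications = notifications;
    public void Process(int orderId) => _notifications.Notify(orderId);
}
```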
<h3 id="liskov-substitution-principle-lsp">Liskov Substitution Principle (LSP)</h3>
<p>LSP says that objects of a superclass should be replaceable with objects of any subclass without breaking the program. DIP relies on LSP. When the DI container hands your <code>OrderProcessor</code> an <code>EmailNotificationService</code>, the <code>OrderProcessor</code> assumes it behaves according to the <code>INotificationService</code> contract. If <code>EmailNotificationService</code> violates that contract — for example, by throwing an unexpected exception type or by having side effects not implied by the interface — then the substitution breaks. DIP provides the mechanism for substitution; LSP ensures the substitution is safe.</p>
<h3 id="interface-segregation-principle-isp">Interface Segregation Principle (ISP)</h3>
<p>ISP says that no client should be forced to depend on methods it does not use. ISP directly improves DIP by encouraging smaller, more focused interfaces. If <code>IOrderRepository</code> has twenty methods but a particular consumer only needs <code>GetByIdAsync</code>, ISP suggests splitting the interface. This makes DIP more effective because the abstraction more precisely matches what the consumer actually needs, reducing coupling further.</p>
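<p>One way the split might look, sketched with hypothetical <code>IOrderReader</code> / <code>IOrderRepository</code> names:</p>
```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Demo: the viewer compiles against the read-only slice alone.
var repo = new InMemoryOrderRepository();
var viewer = new OrderViewer(repo);
Console.WriteLine(await viewer.DescribeAsync(1)); // prints "Order 1: 100"

public record Order(int Id, decimal Total);

// The narrow, client-shaped slice that ISP asks for.
public interface IOrderReader
{
    Task<Order?> GetByIdAsync(int id);
}

// The full contract still exists for clients that genuinely need writes.
public interface IOrderRepository : IOrderReader
{
    Task SaveAsync(Order order);
}

public sealed class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<int, Order> _orders = new() { [1] = new Order(1, 100m) };

    public Task<Order?> GetByIdAsync(int id) =>
        Task.FromResult<Order?>(_orders.TryGetValue(id, out var order) ? order : null);

    public Task SaveAsync(Order order)
    {
        _orders[order.Id] = order;
        return Task.CompletedTask;
    }
}

// This consumer depends only on IOrderReader, so write-side changes cannot reach it.
public sealed class OrderViewer
{
    private readonly IOrderReader _reader;
    public OrderViewer(IOrderReader reader) => _reader = reader;

    public async Task<string> DescribeAsync(int id) =>
        (await _reader.GetByIdAsync(id)) is { } order
            ? $"Order {order.Id}: {order.Total}"
            : "not found";
}
```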
<h2 id="part-12-dip-in-blazor-webassembly">Part 12: DIP in Blazor WebAssembly</h2>
<p>Blazor WebAssembly, the framework this very blog is built on, uses DIP extensively. The DI container works the same way as in server-side ASP.NET Core, with a few nuances.</p>
<h3 id="registering-services-in-blazor-wasm">Registering Services in Blazor WASM</h3>
<p>In a Blazor WebAssembly app, you register services in <code>Program.cs</code>:</p>
<pre><code class="language-csharp">var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.RootComponents.Add&lt;App&gt;(&quot;#app&quot;);

// Register abstractions
builder.Services.AddScoped&lt;IBlogService, StaticBlogService&gt;();
builder.Services.AddScoped&lt;IThemeService, LocalStorageThemeService&gt;();
builder.Services.AddSingleton&lt;IAnalyticsService, ConsoleAnalyticsService&gt;();

await builder.Build().RunAsync();
</code></pre>
<h3 id="injecting-in-components">Injecting in Components</h3>
<p>Blazor components receive dependencies through the <code>@inject</code> directive in Razor markup (which compiles down to a property marked with the <code>[Inject]</code> attribute):</p>
<pre><code class="language-razor">@page &quot;/blog&quot;
@inject IBlogService BlogService
@inject IThemeService ThemeService

&lt;h1&gt;Blog&lt;/h1&gt;

@if (posts is not null)
{
    @foreach (var post in posts)
    {
        &lt;article&gt;
            &lt;h2&gt;&lt;a href=&quot;blog/@post.Slug&quot;&gt;@post.Title&lt;/a&gt;&lt;/h2&gt;
            &lt;p&gt;@post.Summary&lt;/p&gt;
        &lt;/article&gt;
    }
}

@code {
    private BlogPostMetadata[]? posts;

    protected override async Task OnInitializedAsync()
    {
        posts = await BlogService.GetAllPostsAsync();
    }
}
</code></pre>
<p>The component depends on <code>IBlogService</code>, not on the specific implementation that fetches JSON from <code>wwwroot/blog-data/</code>. If you later want to fetch blog posts from an API instead of static files, you change the registration in <code>Program.cs</code>. The component does not change.</p>
<h3 id="scoping-in-blazor-wasm">Scoping in Blazor WASM</h3>
<p>There is an important difference in Blazor WebAssembly compared to server-side ASP.NET Core: there is no real &quot;scope&quot; in the HTTP request sense. In Blazor WASM, the app runs in the browser, and scoped services behave like singletons because there is only one &quot;scope&quot; — the app lifetime. If you register <code>DbContext</code> as scoped in Blazor Server, each circuit gets its own <code>DbContext</code>. In Blazor WASM, there is only one <code>DbContext</code> for the entire app session. Keep this in mind when designing your service lifetimes for Blazor WASM applications.</p>
<h3 id="testing-blazor-components-with-bunit">Testing Blazor Components with bUnit</h3>
<p>DIP makes Blazor components testable with bUnit. You replace the real services with fakes:</p>
<pre><code class="language-csharp">using Bunit;

public class BlogPageTests : BunitContext
{
    [Fact]
    public void BlogPage_ShouldRenderPosts()
    {
        // Arrange
        var fakeBlogService = new FakeBlogService(new[]
        {
            new BlogPostMetadata
            {
                Slug = &quot;test-post&quot;,
                Title = &quot;Test Post&quot;,
                Summary = &quot;A test summary&quot;,
                Date = new DateTime(2026, 3, 27)
            }
        });

        Services.AddSingleton&lt;IBlogService&gt;(fakeBlogService);

        // Act
        var cut = Render&lt;Blog&gt;();

        // Assert
        cut.Find(&quot;h2&quot;).MarkupMatches(&quot;&lt;h2&gt;&lt;a href=\&quot;blog/test-post\&quot;&gt;Test Post&lt;/a&gt;&lt;/h2&gt;&quot;);
        cut.Find(&quot;p&quot;).MarkupMatches(&quot;&lt;p&gt;A test summary&lt;/p&gt;&quot;);
    }
}
</code></pre>
<p>Without DIP, the <code>Blog</code> component would be hardwired to fetch JSON from <code>wwwroot/blog-data/</code>, and testing it would require a running HTTP server serving those static files. With DIP, you inject a fake that returns test data immediately.</p>
<h2 id="part-13-when-not-to-use-dip">Part 13: When Not to Use DIP</h2>
<p>DIP is a powerful tool, but like all tools, it can be misapplied. Here are situations where strict adherence to DIP is unnecessary or counterproductive.</p>
<h3 id="small-scripts-and-one-off-tools">Small Scripts and One-Off Tools</h3>
<p>If you are writing a hundred-line console app to migrate data from one format to another, and it will run once and be deleted, introducing interfaces and DI adds complexity without benefit. Write the simplest code that works. DIP is an investment in maintainability and flexibility — investments that only pay off when the code will be maintained and needs to be flexible.</p>
<h3 id="value-objects-and-dtos">Value Objects and DTOs</h3>
<p>As discussed earlier, not every type needs an interface. Value objects (<code>Money</code>, <code>Address</code>, <code>DateRange</code>), data-transfer objects (<code>OrderDto</code>, <code>CreateUserRequest</code>), and records that hold data without behavior are not candidates for DIP. They have no side effects to mock, no I/O to abstract away, and no alternative implementations to swap in.</p>
<h3 id="stable-simple-dependencies">Stable, Simple Dependencies</h3>
<p>If a dependency is stable (it will never be swapped out) and simple (it has no side effects that interfere with testing), an interface may not be necessary. For example, a static helper method that formats a phone number is not something you need to abstract. The key question is always: &quot;Does this dependency make my class hard to test or hard to change?&quot; If the answer is no, you can skip the interface.</p>
<h3 id="over-abstraction-and-abstraction-fatigue">Over-Abstraction and Abstraction Fatigue</h3>
<p>There is a real cost to abstraction. Every interface is a new file to maintain, a new type to navigate in an IDE, and a new indirection for other developers to trace through when debugging. If your codebase has more interfaces than classes, something has gone wrong. Use DIP judiciously, at the boundaries that matter, and leave the internals of each module to use concrete types freely.</p>
<p>Martin Fowler has written about this tradeoff, noting that the correct number of abstractions depends on the cost of change in your specific context. In a rapidly evolving startup codebase, fewer abstractions and more flexibility to refactor may be appropriate. In a long-lived enterprise system with multiple teams, more abstractions at boundary points prevent expensive coordination between teams.</p>
<h2 id="part-14-a-checklist-for-applying-dip-in-your.net-projects">Part 14: A Checklist for Applying DIP in Your .NET Projects</h2>
<p>Here is a practical checklist you can apply to your own codebase, whether you are starting a new project or refactoring an existing one.</p>
<p><strong>Identify your architectural boundaries.</strong> Where does your business logic end and your infrastructure begin? Draw a line. Interfaces go on the business side. Implementations go on the infrastructure side.</p>
<p><strong>Define interfaces at the boundary.</strong> For each piece of infrastructure your business logic uses — databases, APIs, file systems, message queues, caches, email services — define an interface in your application or domain layer.</p>
<p><strong>Use domain language in your interfaces.</strong> The interface should describe what the business needs, not how the infrastructure works. <code>SaveOrderAsync</code>, not <code>ExecuteSqlCommandAsync</code>. <code>SendOrderConfirmationAsync</code>, not <code>SmtpSendAsync</code>.</p>
<p><strong>Register services in one place.</strong> Your DI registrations should live in the composition root — <code>Program.cs</code> in ASP.NET Core. This is the one place that knows about concrete types and wires abstractions to implementations.</p>
<p><strong>Use constructor injection.</strong> Receive dependencies through the constructor. Avoid property injection (which makes dependencies optional and easy to forget) and service locator (which hides dependencies).</p>
<p><strong>Choose the right lifetime.</strong> Use <code>Transient</code> for lightweight, stateless services. Use <code>Scoped</code> for per-request services like <code>DbContext</code>. Use <code>Singleton</code> for expensive, thread-safe services. Never inject a shorter-lived service into a longer-lived one.</p>
<p><strong>Do not abstract what does not need abstracting.</strong> Value objects, DTOs, static helpers, and simple in-memory computations generally do not need interfaces. Abstract the things that have side effects, are expensive, or might change.</p>
<p><strong>Keep interfaces small.</strong> Prefer multiple small interfaces over one large interface. A repository with thirty methods is harder to mock and harder to implement correctly than three focused interfaces with ten methods each.</p>
<p><strong>Verify with tests.</strong> If you cannot write a fast, isolated unit test for your class, you probably have a DIP violation somewhere. The inability to mock a dependency is a signal that the dependency is concrete where it should be abstract.</p>
<p><strong>Watch for constructor bloat.</strong> If a class has more than four or five injected dependencies, it may be doing too much. Consider decomposing it into smaller, more focused classes.</p>
<h2 id="resources">Resources</h2>
<ul>
<li>Martin, Robert C. &quot;The Dependency Inversion Principle.&quot; C++ Report, May 1996. <a href="https://www.cs.utexas.edu/%7Edowning/papers/DIP-1996.pdf">PDF available at cs.utexas.edu</a></li>
<li>Martin, Robert C. &quot;Agile Software Development, Principles, Patterns, and Practices.&quot; Prentice Hall, 2002. The book that brought SOLID to a wide audience.</li>
<li>Martin, Robert C. &quot;Clean Architecture: A Craftsman's Guide to Software Structure and Design.&quot; Prentice Hall, 2017.</li>
<li>Fowler, Martin. &quot;Inversion of Control Containers and the Dependency Injection pattern.&quot; January 2004. <a href="https://martinfowler.com/articles/injection.html">martinfowler.com/articles/injection.html</a></li>
<li>Fowler, Martin. &quot;DIP in the Wild.&quot; <a href="https://martinfowler.com/articles/dipInTheWild.html">martinfowler.com/articles/dipInTheWild.html</a></li>
<li>Microsoft. &quot;Dependency injection in ASP.NET Core.&quot; <a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection">learn.microsoft.com</a></li>
<li>Microsoft. &quot;Dependency injection — .NET.&quot; <a href="https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection">learn.microsoft.com</a></li>
<li>Seemann, Mark. &quot;Dependency Injection Principles, Practices, and Patterns.&quot; Manning Publications, 2019. The definitive book on DI in .NET.</li>
<li>Cockburn, Alistair. &quot;Hexagonal Architecture.&quot; <a href="https://alistair.cockburn.us/hexagonal-architecture/">alistair.cockburn.us</a></li>
</ul>
]]></content:encoded>
      <category>dotnet</category>
      <category>csharp</category>
      <category>solid</category>
      <category>architecture</category>
      <category>dependency-injection</category>
      <category>best-practices</category>
      <category>deep-dive</category>
      <category>testing</category>
      <category>aspnet</category>
    </item>
    <item>
      <title>The Interface Segregation Principle: A Complete Guide for .NET Developers</title>
      <link>https://observermagazine.github.io/blog/interface-segregation</link>
      <description>A deep dive into the Interface Segregation Principle (ISP), the 'I' in SOLID. Covers the origin story at Xerox, what ISP really means (and what it does not mean), how it manifests in the .NET Base Class Library, practical C# refactoring walkthroughs, its relationship to the other SOLID principles, and how to apply it in modern .NET 10 applications.</description>
      <pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/interface-segregation</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
<content:encoded><![CDATA[<p>Picture this. You are six months into building a document management system. The <code>IDocumentService</code> interface started with three methods — <code>Upload</code>, <code>Download</code>, and <code>Delete</code>. Reasonable enough. Then the PM asked for versioning. Then someone needed OCR text extraction. Then the compliance team wanted audit trails. Then the mobile team needed thumbnail generation. Now your interface has fourteen methods, and every class that implements it — the local file store, the Azure Blob adapter, the in-memory test double — must carry the weight of all fourteen, even though each consumer calls only three or four. Every time you add a method, you touch every implementation. Every time you touch every implementation, you risk breaking something that was already working. You are living inside a violation of the Interface Segregation Principle, and you might not even know it yet.</p>
<p>This article will take you from the origin story of the ISP, through the theory, into the .NET Base Class Library where Microsoft themselves struggled with it, through practical C# refactoring examples, and finally into the modern .NET 10 world of default interface methods, minimal APIs, and microservice boundaries. By the end, you will have a mental model for recognizing fat interfaces, a toolkit for breaking them apart, and the judgment to know when to stop splitting.</p>
<h2 id="part-1-the-origin-story-a-printer-a-fat-class-and-an-hour-long-build">Part 1: The Origin Story — A Printer, a Fat Class, and an Hour-Long Build</h2>
<p>The Interface Segregation Principle was not conceived in an ivory tower. It was born out of pain at Xerox in the early 1990s.</p>
<p>Robert C. Martin — universally known as Uncle Bob — was consulting for Xerox on a new multifunction printer system. This printer could print, staple, fax, and collate. The software driving it had been built from scratch. At the heart of the system sat a single <code>Job</code> class. Every task — print jobs, staple jobs, fax jobs — went through this one class. The <code>Job</code> class knew about every operation the printer could perform.</p>
<p>As the system grew, the <code>Job</code> class grew with it. It accumulated methods for every conceivable operation. And here is where the real damage showed up: because every module in the system depended on this single class, even the tiniest change to a fax-related method triggered a recompilation of the stapling module, the printing module, and everything else. The build cycle ballooned to an hour. Development became nearly impossible. A one-line fix to fax retry logic meant every developer on the team had to wait an hour before they could test anything.</p>
<p>Martin's solution was to insert interfaces between the <code>Job</code> class and its clients. Instead of every module depending directly on the monolithic <code>Job</code> class, each module would depend on a narrow interface tailored to its needs. A <code>StapleJob</code> interface exposed only the methods the stapling module needed. A <code>PrintJob</code> interface exposed only the methods the printing module needed. The <code>Job</code> class still implemented all of those interfaces — it still contained the actual logic — but the modules no longer knew about each other's methods. A change to a fax method no longer triggered recompilation of the stapling code, because the stapling code did not depend on the fax interface.</p>
<p>This was the moment the Interface Segregation Principle crystallized. Martin later formulated it as a single sentence:</p>
<p><strong>&quot;Clients should not be forced to depend on methods they do not use.&quot;</strong></p>
<p>He published the principle formally in his 2002 book <em>Agile Software Development: Principles, Patterns, and Practices</em>, and it became the &quot;I&quot; in the SOLID acronym (coined by Michael Feathers around 2004). But the underlying insight predates the book by nearly a decade. It was born on a factory floor, from a real system with real build times that had become real obstacles.</p>
<h2 id="part-2-what-the-isp-actually-says-and-what-it-does-not-say">Part 2: What the ISP Actually Says (and What It Does Not Say)</h2>
<p>The ISP is frequently misunderstood. Let us be precise about what it claims and what it does not.</p>
<h3 id="what-isp-says">What ISP says</h3>
<p>An interface should be designed from the perspective of its clients. If two clients use different subsets of an interface's methods, those subsets should be expressed as separate interfaces. The goal is to prevent a change demanded by one client from rippling through to another client that does not care about that change.</p>
<p>Think of it like a restaurant menu. A vegetarian diner and a meat-loving diner both eat at the same restaurant. If the restaurant hands them a single menu that is 40 pages long, the vegetarian has to flip past 30 pages of steak and pork to find the three salad options. Worse, if the chef changes the steak section, the vegetarian's menu is reprinted too. A better design: give the vegetarian a focused vegetarian menu and the carnivore a focused carnivore menu. The kitchen (the implementing class) still prepares all the dishes, but each diner (client) only sees what is relevant to them.</p>
<h3 id="what-isp-does-not-say">What ISP does not say</h3>
<p><strong>ISP does not say every interface should have one method.</strong> This is a common over-application. An interface with five methods is perfectly fine if every client that depends on it uses all five. The principle is about unused dependencies, not about counting methods. An <code>ILogger</code> with <code>LogDebug</code>, <code>LogInformation</code>, <code>LogWarning</code>, <code>LogError</code>, and <code>LogCritical</code> is not an ISP violation if every consumer of the logger calls all five methods (or at least could reasonably call any of them).</p>
<p><strong>ISP is not the same as the Single Responsibility Principle (SRP).</strong> SRP says a class should have one reason to change. ISP says a client should not depend on methods it does not use. They are related but distinct. You can violate ISP without violating SRP, and vice versa. An interface might have a single responsibility (managing user accounts) but still be too fat for certain clients (a reporting module that only needs to read user names).</p>
<p><strong>ISP is not about <code>NotImplementedException</code>.</strong> If a class implements an interface and throws <code>NotImplementedException</code> for some methods, that is a Liskov Substitution Principle (LSP) violation, not an ISP violation per se. ISP focuses on the client side — what the consuming class is forced to depend on — not the implementing side. Of course, in practice, the two often appear together. A fat interface leads to implementations that cannot fully honor the contract, which is both an ISP smell and an LSP violation. But they are distinct diagnoses.</p>
<p><strong>ISP is not limited to the C# <code>interface</code> keyword.</strong> The principle applies to any abstraction boundary. A class with twenty public methods where different consumers use different subsets is an ISP problem even if no <code>interface</code> keyword is in sight. Abstract classes, base classes, and even module APIs in microservice architectures can all exhibit fat-interface problems.</p>
<h3 id="the-precise-formulation">The precise formulation</h3>
<p>Uncle Bob later refined the principle in his article on the topic: when a client depends on a class that contains methods the client does not use, but that other clients do use, then that client will be affected by the changes those other clients force upon the class. The clients become indirectly coupled to each other through the shared interface, even though they have no direct relationship.</p>
<h2 id="part-3-isp-in-the.net-base-class-library">Part 3: ISP in the .NET Base Class Library</h2>
<p>The .NET BCL is a fascinating study in interface segregation — both its successes and its historical failures. The designers of the framework have been wrestling with ISP since .NET 1.0, and the evolution of collection interfaces tells the story better than any textbook.</p>
<h3 id="the-ilist-problem">The IList problem</h3>
<p>Consider <code>IList&lt;T&gt;</code>. It defines methods for reading (<code>this[int index]</code>, <code>IndexOf</code>), adding (<code>Add</code>, <code>Insert</code>), removing (<code>Remove</code>, <code>RemoveAt</code>), and clearing (<code>Clear</code>). If your code only needs to iterate over a collection, depending on <code>IList&lt;T&gt;</code> forces you to carry the conceptual weight of all those mutation methods. Your class is now coupled to the idea that collections can be modified, even if your code never modifies anything.</p>
<p>Worse, <code>Array</code> in .NET implements <code>IList&lt;T&gt;</code>. But arrays have a fixed size. Calling <code>Add</code> on an array throws <code>NotSupportedException</code>. This is a textbook LSP violation that exists precisely because of an ISP problem: <code>IList&lt;T&gt;</code> bundles reading and writing into a single contract, forcing fixed-size collections to implement methods they cannot meaningfully support.</p>
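<p>You can observe this violation directly; the following snippet compiles without complaint and fails only at runtime:</p>
```csharp
using System;
using System.Collections.Generic;

// Arrays satisfy IList<T>'s read side, but reject its mutation side at runtime.
IList<int> numbers = new[] { 1, 2, 3 };

Console.WriteLine(numbers[1]);    // indexed reads work fine: prints 2
Console.WriteLine(numbers.Count); // prints 3

try
{
    numbers.Add(4); // compiles, but an array cannot grow
}
catch (NotSupportedException)
{
    Console.WriteLine("Add threw NotSupportedException"); // this branch runs
}
```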
<h3 id="the-read-only-interfaces-arrive-in.net-4.5">The read-only interfaces arrive in .NET 4.5</h3>
<p>For years, .NET developers asked Microsoft for read-only collection interfaces. The BCL team initially declined, arguing that the value did not justify the added complexity. Then WinRT arrived. The Windows Runtime exposed <code>IVectorView&lt;T&gt;</code> and <code>IMapView&lt;K, V&gt;</code>, and .NET needed corresponding types for interop. This external pressure finally pushed the team to introduce <code>IReadOnlyCollection&lt;T&gt;</code> and <code>IReadOnlyList&lt;T&gt;</code> in .NET 4.5.</p>
<p>The result is a textbook application of ISP:</p>
<pre><code class="language-csharp">// IEnumerable&lt;T&gt; — forward-only iteration, nothing more
public interface IEnumerable&lt;out T&gt; : IEnumerable
{
    IEnumerator&lt;T&gt; GetEnumerator();
}

// IReadOnlyCollection&lt;T&gt; — iteration plus a count
public interface IReadOnlyCollection&lt;out T&gt; : IEnumerable&lt;T&gt;
{
    int Count { get; }
}

// IReadOnlyList&lt;T&gt; — iteration, count, and indexed access
public interface IReadOnlyList&lt;out T&gt; : IReadOnlyCollection&lt;T&gt;
{
    T this[int index] { get; }
}

// ICollection&lt;T&gt; — adds mutation (Add, Remove, Clear)
public interface ICollection&lt;T&gt; : IEnumerable&lt;T&gt;
{
    int Count { get; }
    bool IsReadOnly { get; }
    void Add(T item);
    void Clear();
    bool Contains(T item);
    void CopyTo(T[] array, int arrayIndex);
    bool Remove(T item);
}

// IList&lt;T&gt; — adds indexed mutation (Insert, RemoveAt, indexer set)
public interface IList&lt;T&gt; : ICollection&lt;T&gt;
{
    T this[int index] { get; set; }
    int IndexOf(T item);
    void Insert(int index, T item);
    void RemoveAt(int index);
}
</code></pre>
<p>Notice the hierarchy. Each interface adds a narrow slice of capability. A method that only needs to iterate takes <code>IEnumerable&lt;T&gt;</code>. A method that also needs a count takes <code>IReadOnlyCollection&lt;T&gt;</code>. A method that needs indexed access takes <code>IReadOnlyList&lt;T&gt;</code>. And only a method that genuinely needs to mutate the collection takes <code>ICollection&lt;T&gt;</code> or <code>IList&lt;T&gt;</code>. This is ISP in action: each client depends only on the capability it actually uses.</p>
<h3 id="the-iqueryable-hierarchy">The IQueryable hierarchy</h3>
<p>LINQ provides another beautiful example. <code>IQueryable&lt;T&gt;</code> inherits from <code>IEnumerable&lt;T&gt;</code>, <code>IQueryable</code>, and <code>IEnumerable</code>. The capability of iterating over a collection is segregated from the capability of evaluating expression trees against a query provider. Code that only needs to iterate depends on <code>IEnumerable&lt;T&gt;</code>. Code that needs to build and translate expression trees depends on <code>IQueryable&lt;T&gt;</code>. The consuming code declares exactly the level of capability it requires.</p>
<h3 id="stream-and-the-canread-canwrite-pattern">Stream and the CanRead / CanWrite pattern</h3>
<p>The <code>System.IO.Stream</code> class takes a different approach to the same problem. Rather than segregating into multiple interfaces, <code>Stream</code> uses capability flags: <code>CanRead</code>, <code>CanWrite</code>, <code>CanSeek</code>, and <code>CanTimeout</code>. Callers check these flags before invoking read or write operations.</p>
<p>This is a pragmatic compromise. A strict ISP application would split <code>Stream</code> into <code>IReadableStream</code>, <code>IWritableStream</code>, <code>ISeekableStream</code>, and various combinations. The BCL team decided that the combinatorial explosion of interfaces was worse than the capability-flag approach. This is a valid engineering trade-off, and it reminds us that ISP is a principle, not a law. Sometimes the cure is worse than the disease.</p>
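<p>A short illustration of the capability-flag pattern, using <code>MemoryStream</code> (seekable) and <code>DeflateStream</code> (not seekable); the <code>TryGetLength</code> helper is an illustrative name:</p>
```csharp
using System;
using System.IO;
using System.IO.Compression;

// A MemoryStream supports reading, writing, and seeking.
using var memory = new MemoryStream();
Console.WriteLine($"{memory.CanRead} {memory.CanWrite} {memory.CanSeek}"); // True True True

// A DeflateStream opened for compression cannot seek.
using var deflate = new DeflateStream(memory, CompressionMode.Compress);
Console.WriteLine(deflate.CanSeek); // False

// Callers check the flag before using the capability, instead of
// depending on a narrower interface that guarantees it.
static long? TryGetLength(Stream stream) => stream.CanSeek ? stream.Length : null;

Console.WriteLine(TryGetLength(memory) is long len ? $"length {len}" : "length unknown");  // length 0
Console.WriteLine(TryGetLength(deflate) is long l ? $"length {l}" : "length unknown");     // length unknown
```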
<h3 id="the-practical-guideline-for.net-collection-types">The practical guideline for .NET collection types</h3>
<p>A widely-accepted guideline in modern .NET follows directly from ISP:</p>
<p><strong>Accept the most general type you can. Return the most specific type you can.</strong></p>
<p>For method parameters, prefer <code>IEnumerable&lt;T&gt;</code> (the most general). For return types, prefer <code>IReadOnlyList&lt;T&gt;</code> (the most specific read-only indexed collection). This way, callers of your method get the richest possible contract without mutation capability, and your method accepts the widest possible range of inputs.</p>
<pre><code class="language-csharp">// Good: accepts IEnumerable&lt;T&gt;, returns IReadOnlyList&lt;T&gt;
public IReadOnlyList&lt;Customer&gt; FilterActive(IEnumerable&lt;Customer&gt; customers)
{
    return customers.Where(c =&gt; c.IsActive).ToList();
}

// Bad: accepts List&lt;Customer&gt; (too specific), returns IEnumerable&lt;Customer&gt; (too vague)
public IEnumerable&lt;Customer&gt; FilterActive(List&lt;Customer&gt; customers)
{
    return customers.Where(c =&gt; c.IsActive);
}
</code></pre>
<h2 id="part-4-recognizing-fat-interfaces-in-your-own-code">Part 4: Recognizing Fat Interfaces in Your Own Code</h2>
<p>Before you can fix an ISP violation, you need to spot one. Here are the telltale signs, ordered from obvious to subtle.</p>
<h3 id="sign-1-notimplementedexception-or-notsupportedexception">Sign 1: NotImplementedException or NotSupportedException</h3>
<p>This is the most glaring symptom. If a class implements an interface and some methods throw <code>NotImplementedException</code>, one of two things is happening: the implementation is incomplete (a temporary state), or the interface is too broad for this class. If it is the latter, you have an ISP problem on the implementing side and almost certainly an LSP problem on the consuming side.</p>
<pre><code class="language-csharp">// Smells like ISP violation
public class ReadOnlyProductStore : IProductStore
{
    public Product GetById(int id) { /* works fine */ }
    public IReadOnlyList&lt;Product&gt; GetAll() { /* works fine */ }
    public void Add(Product product) =&gt; throw new NotSupportedException();
    public void Update(Product product) =&gt; throw new NotSupportedException();
    public void Delete(int id) =&gt; throw new NotSupportedException();
}
</code></pre>
<p>The <code>ReadOnlyProductStore</code> is telling you that it does not belong behind the <code>IProductStore</code> interface. It needs a read-only interface.</p>
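<p>One way to resolve this, sketched here with hypothetical <code>IProductReader</code> and <code>IProductWriter</code> contracts, is to let the read-only store implement only the members it can honor:</p>
<pre><code class="language-csharp">using System.Collections.Generic;
using System.Linq;

public interface IProductReader
{
    Product? GetById(int id);
    IReadOnlyList&lt;Product&gt; GetAll();
}

public interface IProductWriter
{
    void Add(Product product);
    void Update(Product product);
    void Delete(int id);
}

// Minimal stand-in for the article's Product type.
public record Product(int Id, string Name);

// Implements only IProductReader, so there is no member left over
// to throw NotSupportedException.
public class ReadOnlyProductStore : IProductReader
{
    private readonly IReadOnlyList&lt;Product&gt; _products;

    public ReadOnlyProductStore(IReadOnlyList&lt;Product&gt; products) =&gt; _products = products;

    public Product? GetById(int id) =&gt; _products.FirstOrDefault(p =&gt; p.Id == id);
    public IReadOnlyList&lt;Product&gt; GetAll() =&gt; _products;
}
</code></pre>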
<h3 id="sign-2-clients-that-only-use-a-subset-of-methods">Sign 2: Clients that only use a subset of methods</h3>
<p>Open any class that depends on an interface. Count the methods it actually calls. If it calls three out of twelve, the interface is too fat for this client. This is the canonical ISP violation, and it is far more common than the <code>NotImplementedException</code> variant.</p>
<pre><code class="language-csharp">public class ProductReportGenerator
{
    private readonly IProductRepository _repository;

    public ProductReportGenerator(IProductRepository repository)
    {
        _repository = repository;
    }

    public Report Generate()
    {
        // Only calls GetAll and GetById — never Add, Update, or Delete
        var products = _repository.GetAll();
        // ... build report ...
    }
}
</code></pre>
<p>The <code>ProductReportGenerator</code> depends on <code>IProductRepository</code> but only uses the read methods. It is coupled to the write methods unnecessarily. If someone adds a <code>BulkDelete</code> method to <code>IProductRepository</code>, the <code>ProductReportGenerator</code> is affected by the change even though it never deletes anything.</p>
<h3 id="sign-3-mock-objects-in-tests-that-have-many-setup-calls-for-unused-methods">Sign 3: Mock objects in tests that have many Setup calls for unused methods</h3>
<p>When you write unit tests using a mocking framework, pay attention to how many <code>Setup</code> or <code>Returns</code> calls you need. If you are setting up eight methods on a mock but the code under test only calls two, that is a strong signal that the interface is too fat.</p>
<pre><code class="language-csharp">// If you find yourself writing this:
var mock = new Mock&lt;IDocumentService&gt;();
mock.Setup(x =&gt; x.Upload(It.IsAny&lt;Document&gt;())).Returns(Task.CompletedTask);
mock.Setup(x =&gt; x.Download(It.IsAny&lt;int&gt;())).Returns(Task.FromResult(doc));
mock.Setup(x =&gt; x.Delete(It.IsAny&lt;int&gt;())).Returns(Task.CompletedTask);
mock.Setup(x =&gt; x.ExtractText(It.IsAny&lt;int&gt;())).Returns(Task.FromResult(&quot;&quot;));
mock.Setup(x =&gt; x.GenerateThumbnail(It.IsAny&lt;int&gt;())).Returns(Task.FromResult(thumb));
// ... but the class under test only calls Download()
// ... you have an ISP problem.
</code></pre>
<h3 id="sign-4-frequent-recompilation-of-unrelated-code">Sign 4: Frequent recompilation of unrelated code</h3>
<p>This was the original symptom at Xerox and it remains relevant today, especially in large solutions with many projects. If modifying an interface in one assembly forces recompilation of assemblies that do not use the changed method, you are experiencing the ISP violation's original pain point. In a modern .NET solution, this manifests as unnecessarily long <code>dotnet build</code> times and spurious CI failures in projects that should not be affected by the change.</p>
<h3 id="sign-5-interface-names-that-are-vague-or-overly-general">Sign 5: Interface names that are vague or overly general</h3>
<p>Names like <code>IService</code>, <code>IManager</code>, <code>IHandler</code>, or <code>IRepository</code> (without any qualifier) are often signs that the interface is trying to be everything to everyone. A well-segregated interface has a name that tells you exactly what it does: <code>IProductReader</code>, <code>IOrderWriter</code>, <code>IAuditLogger</code>, <code>IThumbnailGenerator</code>.</p>
<h2 id="part-5-refactoring-fat-interfaces-a-step-by-step-walkthrough">Part 5: Refactoring Fat Interfaces — A Step-by-Step Walkthrough</h2>
<p>Let us take a realistic example and walk through the refactoring from a fat interface to well-segregated ones. We will use a scenario familiar to .NET web developers: a user repository.</p>
<h3 id="the-starting-point-a-fat-iuserrepository">The starting point: a fat IUserRepository</h3>
<pre><code class="language-csharp">public interface IUserRepository
{
    // Read operations
    Task&lt;User?&gt; GetByIdAsync(int id);
    Task&lt;User?&gt; GetByEmailAsync(string email);
    Task&lt;IReadOnlyList&lt;User&gt;&gt; GetAllAsync();
    Task&lt;IReadOnlyList&lt;User&gt;&gt; SearchAsync(string query);

    // Write operations
    Task AddAsync(User user);
    Task UpdateAsync(User user);
    Task DeleteAsync(int id);

    // Bulk operations
    Task BulkImportAsync(IEnumerable&lt;User&gt; users);
    Task BulkDeleteAsync(IEnumerable&lt;int&gt; ids);

    // Reporting
    Task&lt;int&gt; GetTotalCountAsync();
    Task&lt;IReadOnlyList&lt;User&gt;&gt; GetRecentlyActiveAsync(DateTime since);
    Task&lt;Dictionary&lt;string, int&gt;&gt; GetRegistrationsByMonthAsync(int year);
}
</code></pre>
<p>Twelve methods. Not enormous by real-world standards, but let us look at who actually calls what.</p>
<p>The <strong>web API controllers</strong> use <code>GetByIdAsync</code>, <code>GetAllAsync</code>, <code>SearchAsync</code>, <code>AddAsync</code>, <code>UpdateAsync</code>, and <code>DeleteAsync</code>. The <strong>admin bulk import tool</strong> uses <code>BulkImportAsync</code> and <code>BulkDeleteAsync</code>. The <strong>dashboard widget</strong> uses <code>GetTotalCountAsync</code>, <code>GetRecentlyActiveAsync</code>, and <code>GetRegistrationsByMonthAsync</code>. The <strong>authentication middleware</strong> uses only <code>GetByEmailAsync</code>.</p>
<p>Four clients, four different subsets. Every client is coupled to every other client's methods.</p>
<h3 id="step-1-identify-the-client-groups">Step 1: Identify the client groups</h3>
<p>Group the methods by which clients use them:</p>
<ul>
<li><strong>Read (single)</strong>: <code>GetByIdAsync</code>, <code>GetByEmailAsync</code> — used by controllers and auth middleware</li>
<li><strong>Read (collection)</strong>: <code>GetAllAsync</code>, <code>SearchAsync</code> — used by controllers</li>
<li><strong>Write</strong>: <code>AddAsync</code>, <code>UpdateAsync</code>, <code>DeleteAsync</code> — used by controllers</li>
<li><strong>Bulk</strong>: <code>BulkImportAsync</code>, <code>BulkDeleteAsync</code> — used by admin tool</li>
<li><strong>Reporting</strong>: <code>GetTotalCountAsync</code>, <code>GetRecentlyActiveAsync</code>, <code>GetRegistrationsByMonthAsync</code> — used by dashboard</li>
</ul>
<h3 id="step-2-define-focused-interfaces">Step 2: Define focused interfaces</h3>
<pre><code class="language-csharp">public interface IUserReader
{
    Task&lt;User?&gt; GetByIdAsync(int id);
    Task&lt;User?&gt; GetByEmailAsync(string email);
    Task&lt;IReadOnlyList&lt;User&gt;&gt; GetAllAsync();
    Task&lt;IReadOnlyList&lt;User&gt;&gt; SearchAsync(string query);
}

public interface IUserWriter
{
    Task AddAsync(User user);
    Task UpdateAsync(User user);
    Task DeleteAsync(int id);
}

public interface IUserBulkOperations
{
    Task BulkImportAsync(IEnumerable&lt;User&gt; users);
    Task BulkDeleteAsync(IEnumerable&lt;int&gt; ids);
}

public interface IUserReporting
{
    Task&lt;int&gt; GetTotalCountAsync();
    Task&lt;IReadOnlyList&lt;User&gt;&gt; GetRecentlyActiveAsync(DateTime since);
    Task&lt;Dictionary&lt;string, int&gt;&gt; GetRegistrationsByMonthAsync(int year);
}
</code></pre>
<h3 id="step-3-optionally-compose-larger-interfaces">Step 3: Optionally compose larger interfaces</h3>
<p>If some clients genuinely need both reading and writing, you can compose:</p>
<pre><code class="language-csharp">public interface IUserRepository : IUserReader, IUserWriter { }
</code></pre>
<p>This is a common and idiomatic C# pattern. The web API controllers can depend on <code>IUserRepository</code> (which gives them read and write), while the dashboard depends only on <code>IUserReporting</code>, and the auth middleware depends only on <code>IUserReader</code>.</p>
<h3 id="step-4-update-the-implementing-class">Step 4: Update the implementing class</h3>
<p>The implementing class does not change much. It simply declares that it implements all the interfaces:</p>
<pre><code class="language-csharp">public class SqlUserRepository : IUserRepository, IUserBulkOperations, IUserReporting
{
    private readonly AppDbContext _db;

    public SqlUserRepository(AppDbContext db) =&gt; _db = db;

    // IUserReader
    public async Task&lt;User?&gt; GetByIdAsync(int id)
        =&gt; await _db.Users.FindAsync(id);

    public async Task&lt;User?&gt; GetByEmailAsync(string email)
        =&gt; await _db.Users.FirstOrDefaultAsync(u =&gt; u.Email == email);

    public async Task&lt;IReadOnlyList&lt;User&gt;&gt; GetAllAsync()
        =&gt; await _db.Users.OrderBy(u =&gt; u.Name).ToListAsync();

    public async Task&lt;IReadOnlyList&lt;User&gt;&gt; SearchAsync(string query)
        =&gt; await _db.Users.Where(u =&gt; u.Name.Contains(query)).ToListAsync();

    // IUserWriter
    public async Task AddAsync(User user)
    {
        _db.Users.Add(user);
        await _db.SaveChangesAsync();
    }

    public async Task UpdateAsync(User user)
    {
        _db.Users.Update(user);
        await _db.SaveChangesAsync();
    }

    public async Task DeleteAsync(int id)
    {
        var user = await _db.Users.FindAsync(id);
        if (user is not null)
        {
            _db.Users.Remove(user);
            await _db.SaveChangesAsync();
        }
    }

    // IUserBulkOperations
    public async Task BulkImportAsync(IEnumerable&lt;User&gt; users)
    {
        _db.Users.AddRange(users);
        await _db.SaveChangesAsync();
    }

    public async Task BulkDeleteAsync(IEnumerable&lt;int&gt; ids)
    {
        var users = await _db.Users.Where(u =&gt; ids.Contains(u.Id)).ToListAsync();
        _db.Users.RemoveRange(users);
        await _db.SaveChangesAsync();
    }

    // IUserReporting
    public async Task&lt;int&gt; GetTotalCountAsync()
        =&gt; await _db.Users.CountAsync();

    public async Task&lt;IReadOnlyList&lt;User&gt;&gt; GetRecentlyActiveAsync(DateTime since)
        =&gt; await _db.Users.Where(u =&gt; u.LastActiveAt &gt;= since).ToListAsync();

    public async Task&lt;Dictionary&lt;string, int&gt;&gt; GetRegistrationsByMonthAsync(int year)
        =&gt; await _db.Users
            .Where(u =&gt; u.CreatedAt.Year == year)
            .GroupBy(u =&gt; u.CreatedAt.Month)
            // Project to an aggregate first so EF Core can translate the
            // GroupBy to SQL; key formatting then runs client-side.
            .Select(g =&gt; new { Month = g.Key, Count = g.Count() })
            .ToDictionaryAsync(
                x =&gt; x.Month.ToString(&quot;D2&quot;),
                x =&gt; x.Count);
}
</code></pre>
<p>The class is the same size it was before. The difference is in how it is consumed. Each client now depends on exactly the interface it needs.</p>
<h3 id="step-5-register-in-di">Step 5: Register in DI</h3>
<p>In your <code>Program.cs</code> or DI configuration:</p>
<pre><code class="language-csharp">builder.Services.AddScoped&lt;SqlUserRepository&gt;();
builder.Services.AddScoped&lt;IUserReader&gt;(sp =&gt; sp.GetRequiredService&lt;SqlUserRepository&gt;());
builder.Services.AddScoped&lt;IUserWriter&gt;(sp =&gt; sp.GetRequiredService&lt;SqlUserRepository&gt;());
builder.Services.AddScoped&lt;IUserRepository&gt;(sp =&gt; sp.GetRequiredService&lt;SqlUserRepository&gt;());
builder.Services.AddScoped&lt;IUserBulkOperations&gt;(sp =&gt; sp.GetRequiredService&lt;SqlUserRepository&gt;());
builder.Services.AddScoped&lt;IUserReporting&gt;(sp =&gt; sp.GetRequiredService&lt;SqlUserRepository&gt;());
</code></pre>
<p>Now each class can request exactly the interface it needs through constructor injection:</p>
<pre><code class="language-csharp">// The dashboard only sees reporting methods
public class DashboardService
{
    private readonly IUserReporting _reporting;
    public DashboardService(IUserReporting reporting) =&gt; _reporting = reporting;
}

// The auth middleware only sees read methods
public class AuthenticationHandler
{
    private readonly IUserReader _users;
    public AuthenticationHandler(IUserReader users) =&gt; _users = users;
}

// The admin tool only sees bulk operations
public class BulkImportService
{
    private readonly IUserBulkOperations _bulk;
    public BulkImportService(IUserBulkOperations bulk) =&gt; _bulk = bulk;
}
</code></pre>
<h3 id="the-payoff">The payoff</h3>
<p>After this refactoring, consider what happens when the reporting team asks for a new method, <code>GetChurnRateAsync</code>. You add it to <code>IUserReporting</code> and implement it in <code>SqlUserRepository</code>. The auth middleware, the web controllers, and the admin tool are completely unaffected. They do not depend on <code>IUserReporting</code>. Their interfaces have not changed. Their tests do not need to be updated. Their assemblies do not need to be recompiled (in a multi-project solution). This is precisely the decoupling the ISP was designed to achieve.</p>
<h2 id="part-6-isp-and-the-other-solid-principles">Part 6: ISP and the Other SOLID Principles</h2>
<p>The SOLID principles are not isolated rules. They interact with and reinforce each other. Understanding how ISP relates to the other four helps you apply all of them more effectively.</p>
<h3 id="isp-and-single-responsibility-principle-srp">ISP and Single Responsibility Principle (SRP)</h3>
<p>SRP says a class should have one reason to change. ISP says a client should not depend on methods it does not use. In practice, a fat interface often indicates that the implementing class has multiple responsibilities. Splitting the interface along ISP lines frequently reveals SRP violations in the implementation, too. The user repository refactoring above hints at this: the reporting queries are a conceptually different responsibility from the CRUD operations. In a mature system, you might split them into separate classes behind separate interfaces.</p>
<p>But they can diverge. An interface might be fat for ISP purposes while the implementing class is perfectly SRP-compliant. Consider a <code>JsonSerializer</code> interface with methods for serialization and deserialization. Both operations are the same responsibility (JSON conversion), but a client that only serializes does not need the deserialization methods. That is an ISP concern, not an SRP concern.</p>
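<p>The distinction is easy to see in code. Here is a minimal sketch using <code>System.Text.Json</code>, with hypothetical <code>IJsonWriter</code> and <code>IJsonReader</code> contracts: one responsibility, two client-facing interfaces, because some clients only ever serialize.</p>
<pre><code class="language-csharp">using System.Text.Json;

public interface IJsonWriter
{
    string Serialize&lt;T&gt;(T value);
}

public interface IJsonReader
{
    T? Deserialize&lt;T&gt;(string json);
}

// One class, one responsibility (JSON conversion), two narrow contracts.
public class SystemTextJsonConverter : IJsonWriter, IJsonReader
{
    public string Serialize&lt;T&gt;(T value) =&gt; JsonSerializer.Serialize(value);
    public T? Deserialize&lt;T&gt;(string json) =&gt; JsonSerializer.Deserialize&lt;T&gt;(json);
}
</code></pre>
<p>An export job that only writes JSON would inject <code>IJsonWriter</code>; a configuration loader that only reads would inject <code>IJsonReader</code>. The class stays SRP-compliant either way.</p>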
<h3 id="isp-and-openclosed-principle-ocp">ISP and Open/Closed Principle (OCP)</h3>
<p>OCP says software entities should be open for extension but closed for modification. Fat interfaces make OCP harder to follow because adding a method to an interface is a modification that forces changes in every implementation. Well-segregated interfaces are easier to extend: you can add new interfaces for new capabilities without modifying existing ones.</p>
<h3 id="isp-and-liskov-substitution-principle-lsp">ISP and Liskov Substitution Principle (LSP)</h3>
<p>ISP and LSP are two sides of the same coin. ISP prevents clients from depending on methods they do not use (the client perspective). LSP prevents implementations from failing to honor the contract (the implementation perspective). Fat interfaces lead to both problems: the client depends on too much, and the implementation throws <code>NotSupportedException</code> for things it cannot do. Fix the ISP violation, and the LSP violation often disappears automatically. <code>Array</code> implementing <code>IList&lt;T&gt;</code> is the canonical example: the ISP violation (forcing array consumers to see <code>Add</code>) directly causes the LSP violation (<code>Add</code> throwing an exception).</p>
<h3 id="isp-and-dependency-inversion-principle-dip">ISP and Dependency Inversion Principle (DIP)</h3>
<p>DIP says high-level modules should not depend on low-level modules; both should depend on abstractions. ISP refines this: the abstractions themselves should be well-designed. A fat abstraction is not much better than a concrete dependency. DIP tells you to use interfaces. ISP tells you to make those interfaces the right size.</p>
<h2 id="part-7-isp-in-asp.net-core-and-modern.net">Part 7: ISP in ASP.NET Core and Modern .NET</h2>
<p>Modern .NET and ASP.NET Core provide several features and patterns that interact directly with ISP.</p>
<h3 id="dependency-injection-and-interface-per-concern">Dependency injection and interface-per-concern</h3>
<p>ASP.NET Core's built-in DI container makes ISP natural to apply. You register services by interface, and each consumer requests only the interface it needs. The DI container resolves everything at runtime. This is exactly what we showed in the user repository example above.</p>
<p>A particularly powerful pattern is registering a single implementation class behind multiple interfaces:</p>
<pre><code class="language-csharp">// Register the concrete type once
builder.Services.AddScoped&lt;SqlUserRepository&gt;();

// Forward each interface to the same instance
builder.Services.AddScoped&lt;IUserReader&gt;(sp =&gt; sp.GetRequiredService&lt;SqlUserRepository&gt;());
builder.Services.AddScoped&lt;IUserWriter&gt;(sp =&gt; sp.GetRequiredService&lt;SqlUserRepository&gt;());
</code></pre>
<p>This preserves ISP at the consumer level while keeping a single implementation at the runtime level. The consumer sees a narrow interface; the container provides the full implementation.</p>
<h3 id="minimal-apis-and-endpoint-specific-dependencies">Minimal APIs and endpoint-specific dependencies</h3>
<p>ASP.NET Core minimal APIs encourage you to inject dependencies directly into endpoint handlers rather than into controller classes. This makes ISP violations more visible, because each handler declares exactly what it needs:</p>
<pre><code class="language-csharp">app.MapGet(&quot;/users/{id}&quot;, async (int id, IUserReader reader) =&gt;
{
    var user = await reader.GetByIdAsync(id);
    return user is not null ? Results.Ok(user) : Results.NotFound();
});

app.MapPost(&quot;/users&quot;, async (User user, IUserWriter writer) =&gt;
{
    await writer.AddAsync(user);
    return Results.Created($&quot;/users/{user.Id}&quot;, user);
});

app.MapGet(&quot;/dashboard/stats&quot;, async (IUserReporting reporting) =&gt;
{
    var count = await reporting.GetTotalCountAsync();
    return Results.Ok(new { TotalUsers = count });
});
</code></pre>
<p>Each endpoint depends on exactly the interface it needs. There is no controller class pulling in twelve dependencies that different action methods use in different combinations. Minimal APIs make ISP almost effortless.</p>
<h3 id="default-interface-methods-c-8">Default interface methods (C# 8+)</h3>
<p>C# 8 introduced default interface methods (DIMs), which let you add methods to an interface with a default implementation, so existing implementing classes are not forced to change.</p>
<pre><code class="language-csharp">public interface IUserReader
{
    Task&lt;User?&gt; GetByIdAsync(int id);
    Task&lt;User?&gt; GetByEmailAsync(string email);
    Task&lt;IReadOnlyList&lt;User&gt;&gt; GetAllAsync();
    Task&lt;IReadOnlyList&lt;User&gt;&gt; SearchAsync(string query);

    // Default implementation — existing implementers are not forced to provide this
    async Task&lt;bool&gt; ExistsAsync(int id)
        =&gt; await GetByIdAsync(id) is not null;
}
</code></pre>
<p>DIMs can mitigate ISP pressure by allowing you to grow an interface without breaking existing implementations. But they are not a substitute for proper segregation. If different clients need fundamentally different subsets of an interface, no amount of default methods will fix the coupling. DIMs are best used for adding convenience methods that build on existing methods, not for bolting unrelated capabilities onto an interface.</p>
<h3 id="the-ihost-and-ihostbuilder-interfaces">The IHost and IHostBuilder interfaces</h3>
<p>ASP.NET Core's hosting model itself demonstrates ISP. The <code>IHost</code> interface is deliberately narrow: <code>StartAsync</code>, <code>StopAsync</code>, a <code>Services</code> property, and <code>Dispose</code> inherited from <code>IDisposable</code>. The builder (<code>IHostBuilder</code>) is separate. Configuration, logging, and DI are all configured through the builder, not through the host. The running host exposes only what running code needs. This separation allows different consumers (health check probes, graceful shutdown handlers, background services) to depend on the narrow <code>IHost</code> interface without being coupled to the builder's configuration API.</p>
<h2 id="part-8-isp-beyond-oop-microservices-apis-and-event-driven-systems">Part 8: ISP Beyond OOP — Microservices, APIs, and Event-Driven Systems</h2>
<p>The ISP is not limited to C# interfaces in a single codebase. The same principle applies at architectural boundaries.</p>
<h3 id="rest-api-design">REST API design</h3>
<p>A REST API is an interface in the broadest sense. If you expose a single <code>/api/users</code> endpoint that supports GET, POST, PUT, DELETE, PATCH, and a dozen query parameters, every consumer of that API is coupled to the full surface area. A consumer that only reads user data still needs to understand the write endpoints exist (at minimum, to ignore them). If you version the API and change a write endpoint, read-only consumers must still validate that nothing they depend on has changed.</p>
<p>API segregation looks like this: separate read endpoints from write endpoints, or even separate them into distinct services. A read-optimized service with caching sits behind <code>/api/users/query</code>, while a write service with validation and event publishing sits behind <code>/api/users/command</code>. This is the CQRS (Command Query Responsibility Segregation) pattern, and it is ISP applied at the service boundary.</p>
<h3 id="message-contracts-in-event-driven-systems">Message contracts in event-driven systems</h3>
<p>In an event-driven architecture, messages are interfaces. If you define a single <code>UserEvent</code> class with fields for creation, update, deletion, and password reset, every subscriber must deserialize and ignore the fields it does not care about. Worse, if you add a field for a new event type, every subscriber's deserialization might break.</p>
<p>ISP-compliant event design uses separate event types: <code>UserCreatedEvent</code>, <code>UserUpdatedEvent</code>, <code>UserDeletedEvent</code>, <code>UserPasswordResetEvent</code>. Each subscriber handles only the events it cares about. This is exactly the ISP applied to message contracts.</p>
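<p>In C#, records make such focused contracts cheap to declare. The sketch below (type names follow the article's example; the dispatch logic is illustrative, since a real message bus would route by subscription) shows a subscriber that handles only the one event it cares about:</p>
<pre><code class="language-csharp">public abstract record UserEvent(int UserId);
public sealed record UserCreatedEvent(int UserId, string Email) : UserEvent(UserId);
public sealed record UserDeletedEvent(int UserId) : UserEvent(UserId);

// A cleanup subscriber that acts only on deletions. New event types can be
// added to the system without this class ever changing.
public class UserCleanupSubscriber
{
    public bool Handle(UserEvent evt) =&gt; evt switch
    {
        UserDeletedEvent deleted =&gt; PurgeUserData(deleted.UserId),
        _ =&gt; false // not our event; a real bus would never deliver it here
    };

    private bool PurgeUserData(int userId) =&gt; true; // placeholder for real cleanup
}
</code></pre>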
<h3 id="grpc-service-definitions">gRPC service definitions</h3>
<p>gRPC uses Protocol Buffers to define service contracts. A <code>.proto</code> file with 30 RPC methods in a single service definition is a fat interface. Clients generated from this proto file will have stubs for all 30 methods, even if they only call two. The idiomatic gRPC approach is to define multiple, focused service definitions in separate <code>.proto</code> files (or at least separate <code>service</code> blocks within the same file). This keeps the generated client code lean and reduces the coupling between different consumers.</p>
<h2 id="part-9-common-pitfalls-and-how-to-avoid-them">Part 9: Common Pitfalls and How to Avoid Them</h2>
<h3 id="pitfall-1-over-segregation">Pitfall 1: Over-segregation</h3>
<p>The most common mistake when learning ISP is splitting interfaces too aggressively. If you end up with one interface per method, you have not improved anything. You have just traded one problem (fat interfaces) for another (a proliferation of micro-interfaces that are individually meaningless and collectively confusing).</p>
<p>The rule of thumb: split when different clients use different subsets. If every client uses every method, there is nothing to split. If you find yourself creating <code>ICanAdd</code>, <code>ICanDelete</code>, <code>ICanUpdate</code>, and <code>ICanGetById</code> as four separate single-method interfaces, step back and ask whether any client actually uses <code>ICanAdd</code> without also using <code>ICanUpdate</code>. If the answer is no, merge them.</p>
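<p>As a before-and-after sketch (using the hypothetical interface names from the paragraph above):</p>
<pre><code class="language-csharp">using System.Threading.Tasks;

// Minimal stand-in for the article's User type.
public record User(int Id);

// Over-segregated: four micro-interfaces that every write-side client
// ends up requesting together anyway.
public interface ICanAdd { Task AddAsync(User user); }
public interface ICanUpdate { Task UpdateAsync(User user); }
public interface ICanDelete { Task DeleteAsync(int id); }
public interface ICanGetById { Task&lt;User?&gt; GetByIdAsync(int id); }

// Merged: one cohesive write contract, because no client uses
// ICanAdd without also using ICanUpdate and ICanDelete.
public interface IUserWriter
{
    Task AddAsync(User user);
    Task UpdateAsync(User user);
    Task DeleteAsync(int id);
}
</code></pre>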
<h3 id="pitfall-2-splitting-by-implementation-detail-instead-of-client-need">Pitfall 2: Splitting by implementation detail instead of client need</h3>
<p>Interfaces should be designed from the perspective of the client, not the implementation. Do not split an interface because the implementing class has two private fields. Split it because two clients need different subsets of the public contract. The implementation is free to use whatever internal structure it wants.</p>
<p>A bad split:</p>
<pre><code class="language-csharp">// Split based on which database table the methods hit — an implementation detail
public interface IUserTableQueries { /* queries on User table */ }
public interface IAuditLogTableQueries { /* queries on AuditLog table */ }
</code></pre>
<p>A good split:</p>
<pre><code class="language-csharp">// Split based on what consumers need
public interface IUserReader { /* methods for reading user data */ }
public interface IAuditTrail { /* methods for recording and querying audit events */ }
</code></pre>
<h3 id="pitfall-3-breaking-changes-during-refactoring">Pitfall 3: Breaking changes during refactoring</h3>
<p>When you refactor a fat interface into multiple smaller ones, you are making a breaking change. Every consumer of the original interface must be updated to depend on one of the new interfaces. In a small codebase this is trivial. In a large codebase with hundreds of consumers, it can be daunting.</p>
<p>The pragmatic approach: keep the original fat interface as a composition of the new smaller ones, at least temporarily.</p>
<pre><code class="language-csharp">// Old interface — now composed of smaller ones
public interface IUserRepository : IUserReader, IUserWriter, IUserBulkOperations, IUserReporting
{
    // No new members — just aggregates the smaller interfaces
}
</code></pre>
<p>Existing code continues to compile. New code can depend on the smaller interfaces. Over time, you can migrate consumers one by one and eventually deprecate the fat composite interface.</p>
<h3 id="pitfall-4-ignoring-isp-in-test-doubles">Pitfall 4: Ignoring ISP in test doubles</h3>
<p>If your test doubles (mocks, stubs, fakes) implement the full fat interface, you are masking the ISP violation. The tests work, but they quietly accept the coupling. When you move to well-segregated interfaces, your test doubles become simpler and your tests become more focused. A test for the dashboard should only need a mock of <code>IUserReporting</code>, not a mock of the entire repository.</p>
<h3 id="pitfall-5-applying-isp-to-value-objects-and-dtos">Pitfall 5: Applying ISP to value objects and DTOs</h3>
<p>ISP is about behavioral contracts — methods and their dependencies. It does not apply to data transfer objects, records, or value objects in the same way. A <code>UserDto</code> with fifteen properties is not an ISP violation. It is a data container. The ISP applies to the interfaces through which behavior is exposed, not to the shape of data structures. (You might have other concerns about a DTO with fifteen properties — perhaps it is doing too much — but that is SRP, not ISP.)</p>
<h2 id="part-10-isp-in-blazor-webassembly">Part 10: ISP in Blazor WebAssembly</h2>
<p>For those of us building Blazor WebAssembly applications — like this very blog you are reading on Observer Magazine — ISP has practical implications for how we structure our services.</p>
<h3 id="service-interfaces-for-blazor-components">Service interfaces for Blazor components</h3>
<p>In a Blazor WASM app, components inject services to fetch data, manage state, and interact with APIs. A common mistake is to create a single <code>IApiService</code> that every component depends on:</p>
<pre><code class="language-csharp">// Fat interface — every component depends on everything
public interface IApiService
{
    Task&lt;IReadOnlyList&lt;BlogPost&gt;&gt; GetBlogPostsAsync();
    Task&lt;BlogPost?&gt; GetBlogPostAsync(string slug);
    Task&lt;IReadOnlyList&lt;Product&gt;&gt; GetProductsAsync();
    Task&lt;Product?&gt; GetProductAsync(int id);
    Task SaveProductAsync(Product product);
    Task DeleteProductAsync(int id);
    Task&lt;UserProfile&gt; GetCurrentUserAsync();
    Task UpdateUserProfileAsync(UserProfile profile);
    Task&lt;WeatherForecast[]&gt; GetForecastAsync();
}
</code></pre>
<p>The blog components only need blog methods. The product showcase only needs product methods. The user profile page only needs user methods. Every component is coupled to every other component's data-fetching needs.</p>
<p>A well-segregated design:</p>
<pre><code class="language-csharp">public interface IBlogService
{
    Task&lt;IReadOnlyList&lt;BlogPostMetadata&gt;&gt; GetPostsAsync();
    Task&lt;BlogPostMetadata?&gt; GetPostAsync(string slug);
    Task&lt;string&gt; GetPostHtmlAsync(string slug);
}

public interface IProductCatalog
{
    Task&lt;IReadOnlyList&lt;Product&gt;&gt; GetProductsAsync();
    Task&lt;Product?&gt; GetProductAsync(int id);
}

public interface IProductEditor
{
    Task SaveProductAsync(Product product);
    Task DeleteProductAsync(int id);
}

public interface IUserProfileService
{
    Task&lt;UserProfile&gt; GetCurrentUserAsync();
    Task UpdateUserProfileAsync(UserProfile profile);
}
</code></pre>
<p>Each Blazor component injects only the interface it needs. The blog page depends on <code>IBlogService</code>. The product detail page depends on <code>IProductCatalog</code>. The admin editor depends on <code>IProductEditor</code>. When you change the blog data format, the product components are completely unaffected.</p>
<h3 id="testability-benefits-in-blazor">Testability benefits in Blazor</h3>
<p>This segregation pays enormous dividends in bUnit tests. Consider testing a blog post component:</p>
<pre><code class="language-csharp">[Fact]
public void BlogPost_RendersTitle()
{
    // With segregated interfaces, the mock is minimal
    var mockBlog = new Mock&lt;IBlogService&gt;();
    mockBlog.Setup(b =&gt; b.GetPostAsync(&quot;test-slug&quot;))
        .ReturnsAsync(new BlogPostMetadata { Title = &quot;Test Post&quot;, Slug = &quot;test-slug&quot; });
    mockBlog.Setup(b =&gt; b.GetPostHtmlAsync(&quot;test-slug&quot;))
        .ReturnsAsync(&quot;&lt;p&gt;Hello&lt;/p&gt;&quot;);

    using var ctx = new BunitContext();
    ctx.Services.AddSingleton(mockBlog.Object);

    var cut = ctx.Render&lt;BlogPost&gt;(parameters =&gt;
        parameters.Add(p =&gt; p.Slug, &quot;test-slug&quot;));

    cut.Find(&quot;h1&quot;).TextContent.ShouldBe(&quot;Test Post&quot;);
}
</code></pre>
<p>No need to mock product methods, user methods, or weather methods. The test sets up exactly the interface the component uses. This makes tests faster to write, easier to read, and more resistant to changes in unrelated parts of the system.</p>
<h2 id="part-11-practical-heuristics-when-to-split-and-when-to-stop">Part 11: Practical Heuristics — When to Split and When to Stop</h2>
<p>After all this theory and examples, here are concrete heuristics you can apply in your daily work.</p>
<h3 id="split-when">Split when</h3>
<ol>
<li><strong>Two or more clients use different subsets</strong> of the same interface. This is the canonical ISP trigger.</li>
<li><strong>You find yourself writing <code>NotImplementedException</code></strong> in an implementation. The interface is asking for something this class cannot do.</li>
<li><strong>Your mocks are bloated.</strong> If setting up a mock requires configuring methods the test never exercises, the interface is too fat for this consumer.</li>
<li><strong>A change to one method ripples to unrelated consumers.</strong> If adding a reporting method forces you to update an authentication handler, the coupling is wrong.</li>
<li><strong>You are splitting a monolith into microservices.</strong> Each service should expose a focused API, not a mirror of the monolith's fat interface.</li>
</ol>
<h3 id="do-not-split-when">Do not split when</h3>
<ol>
<li><strong>Every client uses every method.</strong> If there is no divergence in how clients consume the interface, splitting adds complexity without benefit.</li>
<li><strong>The interface has fewer than five methods and they are all cohesive.</strong> An <code>ILogger</code> with five log-level methods is fine.</li>
<li><strong>The split would create single-method interfaces that are always used together.</strong> If <code>ICanRead</code> and <code>ICanCount</code> are always injected together, merge them into <code>IReadOnlyCollection</code> (which is exactly what Microsoft did).</li>
<li><strong>You are working on a throwaway prototype.</strong> ISP is an investment in long-term maintainability. If the code will be deleted next sprint, the investment does not pay off.</li>
<li><strong>The interface is a well-known framework type.</strong> Do not wrap <code>ILogger&lt;T&gt;</code> in your own <code>IMyLogger</code> just to remove methods you do not call. The framework type is well-understood, widely documented, and carries minimal ISP risk because its methods are highly cohesive.</li>
</ol>
<h3 id="the-one-more-method-test">The &quot;one more method&quot; test</h3>
<p>When someone asks to add a method to an existing interface, ask yourself: &quot;Will every existing client of this interface benefit from or be unaffected by this addition?&quot; If the answer is yes, add the method. If the answer is &quot;no, this is only for the new admin panel,&quot; create a new interface for the admin panel's needs. This single question, asked consistently, prevents most ISP violations from ever forming.</p>
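<p>As a hedged illustration of that question in code (the blog service and method names here are hypothetical, invented for this sketch): rather than bolting an admin-only method onto the shared contract, give the admin panel its own interface.</p>
<pre><code class="language-csharp">// Tempting but wrong: every consumer of IBlogService would now
// depend on an admin-only capability it never calls.
// public interface IBlogService
// {
//     Task&lt;BlogPostMetadata&gt; GetPostAsync(string slug);
//     Task DeletePostAsync(string slug);   // only the admin panel needs this
// }

// Better: a separate interface for the admin panel's needs.
public interface IBlogAdminService
{
    Task DeletePostAsync(string slug);
    Task PublishPostAsync(string slug);
}
</code></pre>
<p>Existing clients of <code>IBlogService</code> never learn that the admin panel exists.</p>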
<h2 id="part-12-a-real-world-example-from-this-project">Part 12: A Real-World Example from This Project</h2>
<p>Observer Magazine itself — the Blazor WebAssembly application you are reading right now — applies ISP throughout its service layer. Here is a concrete example.</p>
<p>The application has an analytics service for tracking page views and reactions. The original design might have been a single <code>IAnalyticsService</code>:</p>
<pre><code class="language-csharp">public interface IAnalyticsService
{
    Task TrackPageViewAsync(string pageName, string details = &quot;&quot;);
    Task IncrementViewAsync(string slug);
    Task&lt;int?&gt; GetViewCountAsync(string slug);
    Task AddReactionAsync(string slug, string reaction);
    Task&lt;Dictionary&lt;string, int&gt;?&gt; GetReactionsAsync(string slug);
}
</code></pre>
<p>But consider the consumers. The <code>Blog.razor</code> page only calls <code>TrackPageViewAsync</code> to record that someone visited the blog index. The <code>BlogPost.razor</code> page calls <code>IncrementViewAsync</code>, <code>GetViewCountAsync</code>, and <code>GetReactionsAsync</code>. The <code>Reactions.razor</code> component calls <code>AddReactionAsync</code> and <code>GetReactionsAsync</code>.</p>
<p>Different components use different subsets. In a fully ISP-compliant design, these would be separate interfaces. In practice, for a project this size, the trade-off is debatable — the interface is small, the team is small, and the cost of the coupling is low. But if the analytics service grows to include A/B testing, funnel tracking, and conversion metrics, the pressure to split will increase. Knowing where to draw the line is as important as knowing the principle.</p>
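<p>If that pressure does arrive, the split might look like the sketch below. These interface names are hypothetical, not the project's actual types:</p>
<pre><code class="language-csharp">public interface IViewTracker
{
    Task TrackPageViewAsync(string pageName, string details = &quot;&quot;);
    Task IncrementViewAsync(string slug);
    Task&lt;int?&gt; GetViewCountAsync(string slug);
}

public interface IReactionService
{
    Task AddReactionAsync(string slug, string reaction);
    Task&lt;Dictionary&lt;string, int&gt;?&gt; GetReactionsAsync(string slug);
}

// One implementation can still serve both contracts;
// each component depends only on the half it uses.
public interface IAnalyticsService : IViewTracker, IReactionService
{
}
</code></pre>
<p><code>Reactions.razor</code> would then depend only on <code>IReactionService</code>, and a change to view-count tracking could never touch it.</p>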
<h2 id="part-13-isp-in-the-age-of-source-generators-and-aot">Part 13: ISP in the Age of Source Generators and AOT</h2>
<p>Modern .NET 10 introduces patterns that interact with ISP in interesting ways.</p>
<h3 id="source-generators-and-minimal-interfaces">Source generators and minimal interfaces</h3>
<p>Source generators in .NET can produce boilerplate code from interfaces. The <code>System.Text.Json</code> source generator, for example, reads your serialization attributes and generates optimized serializer code at compile time. For this to work well, the interfaces your generators consume should be focused and stable. A fat interface that changes frequently will trigger frequent regeneration and recompilation — echoing the original Xerox build-time problem.</p>
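<p>For the curious, the <code>System.Text.Json</code> generator is driven by a context class that lists its root types explicitly. The sketch below uses an invented <code>WeatherReport</code> type; keeping that type list small and stable is what keeps regeneration cheap:</p>
<pre><code class="language-csharp">using System.Text.Json;
using System.Text.Json.Serialization;

public record WeatherReport(string City, double TempC);

// The generator emits serializer code at compile time for exactly
// the types registered here.
[JsonSerializable(typeof(WeatherReport))]
public partial class AppJsonContext : JsonSerializerContext
{
}

// Usage: no reflection at runtime, AOT-friendly.
// var json = JsonSerializer.Serialize(
//     new WeatherReport(&quot;Oslo&quot;, 3.5),
//     AppJsonContext.Default.WeatherReport);
</code></pre>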
<h3 id="native-aot-and-interface-dispatch">Native AOT and interface dispatch</h3>
<p>Native Ahead-of-Time compilation eliminates the JIT compiler and produces native binaries. One consequence: the AOT compiler must statically analyze all possible interface implementations at compile time. Fat interfaces with many implementations can increase the size of the dispatch tables the compiler generates. Well-segregated interfaces with fewer implementations per interface produce leaner binaries. This is a marginal concern for most applications, but it becomes relevant at the edges — embedded systems, serverless functions with tight cold-start budgets, and mobile applications where binary size matters.</p>
<h3 id="keyed-services-in.net-8">Keyed services in .NET 8+</h3>
<p>.NET 8 introduced keyed services in the DI container, allowing you to register multiple implementations of the same interface distinguished by a key:</p>
<pre><code class="language-csharp">builder.Services.AddKeyedScoped&lt;IUserReader, CachedUserReader&gt;(&quot;cached&quot;);
builder.Services.AddKeyedScoped&lt;IUserReader, SqlUserReader&gt;(&quot;sql&quot;);
</code></pre>
<p>This interacts with ISP by making it easier to have multiple implementations of the same focused interface for different contexts (cached for the web layer, direct SQL for the admin layer). Without segregated interfaces, keyed services become harder to use because the keys would need to distinguish not just the implementation but also the subset of the interface the consumer needs.</p>
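<p>On the consumption side, a minimal sketch (the consuming class is invented; the attribute and extension method are the real .NET 8 APIs):</p>
<pre><code class="language-csharp">// Constructor injection selects an implementation by key.
public class UserDashboard
{
    private readonly IUserReader _users;

    public UserDashboard(
        [FromKeyedServices(&quot;cached&quot;)] IUserReader users)
    {
        _users = users;
    }
}

// Or resolve explicitly from the provider:
// var reader = provider.GetRequiredKeyedService&lt;IUserReader&gt;(&quot;sql&quot;);
</code></pre>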
<h2 id="part-14-summary-and-takeaways">Part 14: Summary and Takeaways</h2>
<p>The Interface Segregation Principle is one of the most practical of the SOLID principles. It directly addresses a problem that every growing codebase eventually faces: interfaces that started simple and grew fat as requirements accumulated. The principle is not about counting methods or enforcing a maximum interface size. It is about ensuring that each consumer of an interface depends only on the capabilities it actually uses.</p>
<p>The key ideas to carry with you:</p>
<p><strong>Design interfaces from the client's perspective.</strong> Ask &quot;what does this consumer need?&quot; not &quot;what can this class do?&quot; The answers to those two questions should produce different interfaces.</p>
<p><strong>The .NET BCL is your teacher.</strong> Study the progression from <code>IEnumerable&lt;T&gt;</code> to <code>IReadOnlyCollection&lt;T&gt;</code> to <code>IReadOnlyList&lt;T&gt;</code> to <code>ICollection&lt;T&gt;</code> to <code>IList&lt;T&gt;</code>. Each step adds a narrow slice of capability. This is ISP done well.</p>
<p><strong>Composition over proliferation.</strong> When you split interfaces, compose them back together for clients that need the full surface area. <code>IUserRepository : IUserReader, IUserWriter</code> is idiomatic C#.</p>
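<p>As a minimal sketch of that idiom (<code>User</code> and the reader/writer names are placeholders):</p>
<pre><code class="language-csharp">public record User(int Id, string Name);

public interface IUserReader
{
    Task&lt;User?&gt; GetByIdAsync(int id);
}

public interface IUserWriter
{
    Task SaveAsync(User user);
}

// Composition: the full surface area, reassembled only for the
// few consumers that genuinely need both halves.
public interface IUserRepository : IUserReader, IUserWriter
{
}
</code></pre>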
<p><strong>The principle is fractal.</strong> ISP applies at the class level (C# interfaces), the service level (REST APIs, gRPC services), the system level (microservice boundaries), and the event level (message contracts). The same question — &quot;is this consumer forced to depend on things it does not use?&quot; — applies everywhere.</p>
<p><strong>Know when to stop.</strong> Not every interface needs splitting. Not every three-method interface hides an ISP violation. Apply the principle when you see the symptoms: bloated mocks, unrelated recompilations, <code>NotImplementedException</code>, and clients that use three out of twelve methods.</p>
<h2 id="resources">Resources</h2>
<p>Here are the key resources for further study:</p>
<ul>
<li>Robert C. Martin, <em>Agile Software Development: Principles, Patterns, and Practices</em> (Prentice Hall, 2002) — the original book-length treatment of all five SOLID principles, including the ISP chapter with the Xerox story and ATM transaction example.</li>
<li>Robert C. Martin, &quot;The Interface Segregation Principle&quot; — the original article available at <a href="https://web.archive.org/web/20150924054349/http://www.objectmentor.com/resources/articles/isp.pdf">https://web.archive.org/web/20150924054349/http://www.objectmentor.com/resources/articles/isp.pdf</a></li>
<li>Microsoft, &quot;Guidelines for Collections&quot; — <a href="https://learn.microsoft.com/en-us/dotnet/standard/design-guidelines/guidelines-for-collections">https://learn.microsoft.com/en-us/dotnet/standard/design-guidelines/guidelines-for-collections</a></li>
<li>NDepend Blog, &quot;SOLID Design in C#: The Interface Segregation Principle (ISP) with Examples&quot; — <a href="https://blog.ndepend.com/solid-design-the-interface-segregation-principle-isp/">https://blog.ndepend.com/solid-design-the-interface-segregation-principle-isp/</a></li>
<li>DevIQ, &quot;Interface Segregation Principle&quot; — <a href="https://deviq.com/principles/interface-segregation/">https://deviq.com/principles/interface-segregation/</a></li>
<li>Scott Hannen, &quot;The Interface Segregation Principle Applied in C#/.NET&quot; — <a href="https://scotthannen.org/blog/2019/01/01/interface-segregation-principle-applied.html">https://scotthannen.org/blog/2019/01/01/interface-segregation-principle-applied.html</a></li>
<li>Vladimir Khorikov (Enterprise Craftsmanship), &quot;IEnumerable vs IReadOnlyList&quot; — <a href="https://enterprisecraftsmanship.com/posts/ienumerable-vs-ireadonlylist/">https://enterprisecraftsmanship.com/posts/ienumerable-vs-ireadonlylist/</a></li>
</ul>
]]></content:encoded>
      <category>solid</category>
      <category>csharp</category>
      <category>design-principles</category>
      <category>dotnet</category>
      <category>architecture</category>
      <category>deep-dive</category>
      <category>best-practices</category>
    </item>
    <item>
      <title>The Liskov Substitution Principle: A Complete Guide for .NET Developers</title>
      <link>https://observermagazine.github.io/blog/liskov-substitution</link>
      <description>A deep dive into the Liskov Substitution Principle — from Barbara Liskov's 1987 keynote to practical C# code, real-world violations, design-by-contract rules, and strategies for writing substitutable types in modern .NET.</description>
      <pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/liskov-substitution</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<p>Picture this: it is a quiet Wednesday afternoon. You are working on a payment processing system. The team lead merged a pull request last week that introduced a new <code>ExpressPayment</code> class inheriting from <code>Payment</code>. Everything compiled. The unit tests passed. The code review looked clean. And now, three days later, production is throwing <code>NotSupportedException</code> in a code path that has worked flawlessly for two years. The new subclass broke a contract that the base class had promised. The caller never expected it. The monitoring dashboard is red. Your on-call phone is buzzing.</p>
<p>You have just been bitten by a violation of the Liskov Substitution Principle.</p>
<p>The Liskov Substitution Principle — the &quot;L&quot; in SOLID — is arguably the most misunderstood and the most consequential of the five principles. It is the principle that separates an inheritance hierarchy that <em>works</em> from one that is a ticking time bomb. It is the principle that explains why a <code>Square</code> is not a <code>Rectangle</code>, why a <code>ReadOnlyCollection</code> should not inherit from <code>List&lt;T&gt;</code>, and why your carefully designed plugin architecture falls apart every time someone writes a new adapter.</p>
<p>This article is going to take you through the entire story — from the academic origins at OOPSLA 1987 to the practical rules you should apply in your C# code today. We will examine real violations, write real fixes, explore the relationship between LSP and Design by Contract, and end with a checklist you can pin to your wall.</p>
<p>Let us begin.</p>
<h2 id="part-1-origins-barbara-liskov-and-the-birth-of-a-principle">Part 1: Origins — Barbara Liskov and the Birth of a Principle</h2>
<p>To understand the Liskov Substitution Principle, you need to understand the person behind it.</p>
<p>Barbara Liskov was born in 1939 in Los Angeles. She earned her bachelor's degree in mathematics from UC Berkeley in 1961, then worked at the Mitre Corporation before returning to academia. In 1968, she became one of the first women in the United States to earn a PhD in computer science, from Stanford, under the supervision of John McCarthy — the father of artificial intelligence. Her thesis was on chess endgame programs, and during that work she developed the killer heuristic, a technique still used in game tree search algorithms.</p>
<p>After Stanford, Liskov joined MIT in 1972, where she led the design and implementation of the CLU programming language. CLU was groundbreaking. It introduced concepts that are foundational to every language you use today: data abstraction, encapsulation, iterators, parametric polymorphism, and exception handling. If you have ever written a <code>foreach</code> loop, you owe a debt to CLU. If you have ever defined an interface, you are working in an intellectual tradition that traces back to Liskov's research group at MIT in the 1970s.</p>
<p>In 1987, Liskov delivered a keynote address at OOPSLA (the Object-Oriented Programming, Systems, Languages, and Applications conference) titled <em>Data Abstraction and Hierarchy</em>. In that talk, she presented an informal rule about when one type can safely stand in for another:</p>
<blockquote>
<p>What is wanted here is something like the following substitution property: If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T.</p>
</blockquote>
<p>This is the original formulation. It is deliberately informal — Liskov herself later called it an &quot;informal rule.&quot; The key insight is deceptively simple: if your code works with a base type, it should continue to work when you hand it a derived type. No surprises. No exceptions. No &quot;well, except when...&quot;</p>
<p>Seven years later, in 1994, Liskov and Jeannette Wing published a rigorous formalization in their paper <em>A Behavioral Notion of Subtyping</em> in ACM Transactions on Programming Languages and Systems. This paper introduced the history constraint (sometimes called the &quot;history rule&quot;), which addresses what happens when a subtype adds new methods that can mutate state in ways the supertype never allowed. This was the key innovation beyond Bertrand Meyer's earlier Design by Contract work.</p>
<p>In 2000, Robert C. Martin published his paper <em>Design Principles and Design Patterns</em>, which collected five object-oriented design principles. Around 2004, Michael Feathers coined the SOLID acronym to make them memorable. The &quot;L&quot; stands for Liskov Substitution.</p>
<p>In 2008, Barbara Liskov received the Turing Award — the highest honor in computer science — for her contributions to programming language and system design, especially related to data abstraction, fault tolerance, and distributed computing.</p>
<h3 id="why-this-history-matters">Why This History Matters</h3>
<p>You might be wondering why we are spending time on history in a programming article. Here is why: the Liskov Substitution Principle is not a style preference. It is not a &quot;clean code&quot; guideline that you can take or leave. It is a mathematically grounded property of type systems. When you violate it, you break the fundamental contract that makes polymorphism work. Understanding that it comes from the same intellectual tradition as data abstraction, formal verification, and type theory helps you take it seriously — and helps you understand <em>why</em> certain designs fail.</p>
<h2 id="part-2-the-principle-in-plain-language">Part 2: The Principle in Plain Language</h2>
<p>Let us strip away the formal notation and state the principle as simply as possible.</p>
<p><strong>If you have code that works correctly with a base type, it must also work correctly with any subtype of that base type, without the calling code needing to know or care which subtype it received.</strong></p>
<p>That is the entire principle. Everything else — preconditions, postconditions, invariants, the history rule — is a consequence of this one requirement.</p>
<p>Think of it like a vending machine. The machine's contract says: &quot;Insert a coin, press a button, receive a drink.&quot; If you insert a US quarter, it works. If you insert a Canadian quarter (same size, same shape), it should also work — because the machine's contract is defined in terms of &quot;a coin of this size and weight,&quot; not &quot;a US quarter specifically.&quot; But if you insert a wooden token that is the same size but does not conduct electricity for the coin sensor, the machine jams. The wooden token <em>looks</em> like a valid substitution from the outside, but it violates the behavioral contract.</p>
<p>LSP is about behavioral compatibility, not just structural compatibility. A type can implement all the same methods, have all the same properties, and still violate LSP if its <em>behavior</em> breaks the expectations of code written against the base type.</p>
<h3 id="the-three-levels-of-substitutability">The Three Levels of Substitutability</h3>
<p>It helps to think about substitutability at three increasingly strict levels:</p>
<p><strong>Level 1: Syntactic substitutability.</strong> The subtype compiles wherever the base type is expected. In C#, this is enforced by the compiler. If <code>Dog</code> inherits from <code>Animal</code>, you can pass a <code>Dog</code> to any method that accepts an <code>Animal</code>. This is necessary but not sufficient for LSP.</p>
<p><strong>Level 2: Semantic substitutability.</strong> The subtype behaves correctly wherever the base type is expected. Methods return meaningful results, state transitions are valid, and no unexpected exceptions are thrown. This is what LSP demands.</p>
<p><strong>Level 3: Behavioral equivalence.</strong> The subtype behaves <em>identically</em> to the base type. This is actually too strong — LSP does not require identical behavior. A <code>SortedList&lt;T&gt;</code> does not behave identically to <code>List&lt;T&gt;</code> (it maintains sorted order), but it can still be a valid behavioral subtype if the base type's contract does not specify insertion order.</p>
<p>The sweet spot — and the requirement of LSP — is Level 2. Subtypes must honor the contracts of their base types while being free to extend them in compatible ways.</p>
<h2 id="part-3-the-formal-rules-contracts-preconditions-and-the-history-constraint">Part 3: The Formal Rules — Contracts, Preconditions, and the History Constraint</h2>
<p>The Liskov Substitution Principle can be decomposed into a set of concrete rules. These rules are drawn from Liskov and Wing's 1994 paper and from Bertrand Meyer's Design by Contract methodology. Understanding each one will let you mechanically check whether a given inheritance relationship is valid.</p>
<h3 id="rule-1-contravariance-of-preconditions">Rule 1: Contravariance of Preconditions</h3>
<p><strong>A subtype must not strengthen preconditions.</strong></p>
<p>A precondition is a condition that must be true before a method can be called. If the base class method accepts any positive integer, the subtype method must also accept any positive integer. It may accept <em>more</em> (like zero or negative integers), but it must not accept <em>less</em>.</p>
<p>Here is a violation in C#:</p>
<pre><code class="language-csharp">public class BaseProcessor
{
    public virtual void Process(int value)
    {
        // Accepts any integer
        Console.WriteLine($&quot;Processing {value}&quot;);
    }
}

public class StrictProcessor : BaseProcessor
{
    public override void Process(int value)
    {
        // VIOLATION: Strengthened precondition
        if (value &lt; 0)
            throw new ArgumentOutOfRangeException(
                nameof(value), &quot;Value must be non-negative&quot;);

        Console.WriteLine($&quot;Strictly processing {value}&quot;);
    }
}
</code></pre>
<p>Code written against <code>BaseProcessor</code> legitimately passes <code>-5</code> and expects it to work. <code>StrictProcessor</code> blows up. That is an LSP violation.</p>
<p>The fix is to either relax the precondition or restructure the hierarchy so that <code>StrictProcessor</code> does not inherit from <code>BaseProcessor</code>:</p>
<pre><code class="language-csharp">public interface IProcessor
{
    void Process(int value);
}

public class GeneralProcessor : IProcessor
{
    public void Process(int value)
    {
        Console.WriteLine($&quot;Processing {value}&quot;);
    }
}

public class NonNegativeProcessor : IProcessor
{
    // The interface contract now explicitly documents
    // what each implementation accepts
    public void Process(int value)
    {
        if (value &lt; 0)
            throw new ArgumentOutOfRangeException(
                nameof(value), &quot;Value must be non-negative&quot;);

        Console.WriteLine($&quot;Strictly processing {value}&quot;);
    }
}
</code></pre>
<p>Now neither class claims to substitute for the other. They both implement a shared interface, and the caller chooses based on their needs.</p>
<h3 id="rule-2-covariance-of-postconditions">Rule 2: Covariance of Postconditions</h3>
<p><strong>A subtype must not weaken postconditions.</strong></p>
<p>A postcondition is a guarantee about what is true after a method returns. If the base class method guarantees that the return value is non-null, the subtype must also return non-null. The subtype may strengthen the postcondition (e.g., guarantee the return value is also non-empty), but it must not weaken it.</p>
<pre><code class="language-csharp">public class DataFetcher
{
    public virtual IReadOnlyList&lt;string&gt; FetchRecords()
    {
        // Postcondition: always returns a non-null list
        return new List&lt;string&gt; { &quot;default&quot; };
    }
}

public class LazyDataFetcher : DataFetcher
{
    public override IReadOnlyList&lt;string&gt;? FetchRecords()
    {
        // VIOLATION: Can return null, weakening the postcondition
        // (In practice, C# nullable reference types would catch this,
        // but the principle applies regardless of language features)
        return null;
    }
}
</code></pre>
<p>Any caller that trusts the base class contract and writes <code>var count = fetcher.FetchRecords().Count;</code> will get a <code>NullReferenceException</code>. The postcondition was weakened.</p>
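<p>A compliant subtype moves in the opposite direction: it may <em>strengthen</em> the postcondition. This sketch (class name invented, extending the <code>DataFetcher</code> above) keeps the non-null promise and adds a non-empty guarantee on top:</p>
<pre><code class="language-csharp">public class EagerDataFetcher : DataFetcher
{
    public override IReadOnlyList&lt;string&gt; FetchRecords()
    {
        // Stronger postcondition: non-null AND non-empty.
        // Every caller written against DataFetcher still works.
        return new List&lt;string&gt; { &quot;default&quot;, &quot;cached&quot; };
    }
}
</code></pre>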
<h3 id="rule-3-invariant-preservation">Rule 3: Invariant Preservation</h3>
<p><strong>A subtype must preserve all invariants of the base type.</strong></p>
<p>An invariant is a condition that is always true for an object throughout its lifetime. If the base class guarantees that <code>Balance &gt;= 0</code> at all times, every subtype must also maintain <code>Balance &gt;= 0</code> at all times.</p>
<pre><code class="language-csharp">public class BankAccount
{
    public decimal Balance { get; protected set; }

    public BankAccount(decimal initialBalance)
    {
        if (initialBalance &lt; 0)
            throw new ArgumentException(&quot;Initial balance must be non-negative&quot;);
        Balance = initialBalance;
    }

    // Invariant: Balance &gt;= 0
    public virtual void Withdraw(decimal amount)
    {
        if (amount &gt; Balance)
            throw new InvalidOperationException(&quot;Insufficient funds&quot;);
        Balance -= amount;
    }
}

public class OverdraftAccount : BankAccount
{
    public decimal OverdraftLimit { get; }

    public OverdraftAccount(decimal initialBalance, decimal overdraftLimit)
        : base(initialBalance)
    {
        OverdraftLimit = overdraftLimit;
    }

    public override void Withdraw(decimal amount)
    {
        // VIOLATION: Allows Balance to go negative,
        // breaking the base class invariant
        if (amount &gt; Balance + OverdraftLimit)
            throw new InvalidOperationException(&quot;Exceeds overdraft limit&quot;);
        Balance -= amount;
    }
}
</code></pre>
<p>Code that depends on the <code>BankAccount</code> invariant (<code>Balance &gt;= 0</code>) will produce incorrect results when handed an <code>OverdraftAccount</code>. For example, a report that calculates &quot;accounts with zero balance&quot; by checking <code>account.Balance == 0</code> will miss overdrafted accounts entirely.</p>
<p>The fix depends on your domain. One approach: do not make <code>OverdraftAccount</code> inherit from <code>BankAccount</code>. Instead, define a more general <code>IAccount</code> interface whose contract does not promise non-negative balances, and let each implementation document its own invariants.</p>
<pre><code class="language-csharp">public interface IAccount
{
    decimal Balance { get; }
    void Withdraw(decimal amount);
    // Contract: Withdraw throws if amount exceeds
    // the account's available funds (definition varies by type)
}

public class StandardAccount : IAccount
{
    public decimal Balance { get; private set; }

    public StandardAccount(decimal initialBalance)
    {
        if (initialBalance &lt; 0)
            throw new ArgumentException(&quot;Must be non-negative&quot;);
        Balance = initialBalance;
    }

    public void Withdraw(decimal amount)
    {
        if (amount &gt; Balance)
            throw new InvalidOperationException(&quot;Insufficient funds&quot;);
        Balance -= amount;
    }
}

public class OverdraftAccount : IAccount
{
    public decimal Balance { get; private set; }
    public decimal OverdraftLimit { get; }

    public OverdraftAccount(decimal initialBalance, decimal overdraftLimit)
    {
        Balance = initialBalance;
        OverdraftLimit = overdraftLimit;
    }

    public void Withdraw(decimal amount)
    {
        if (amount &gt; Balance + OverdraftLimit)
            throw new InvalidOperationException(&quot;Exceeds overdraft limit&quot;);
        Balance -= amount;
    }
}
</code></pre>
<h3 id="rule-4-the-history-constraint">Rule 4: The History Constraint</h3>
<p><strong>A subtype must not allow state changes that the base type's contract forbids.</strong></p>
<p>This is the rule that Liskov and Wing added in their 1994 paper, and it is the one most developers have never heard of. It says: if the base type is immutable, a subtype must also be immutable (at least from the perspective of the base type's interface). If the base type's specification says a property can only increase, the subtype must not allow it to decrease.</p>
<p>The classic example: an immutable point and a mutable point.</p>
<pre><code class="language-csharp">public class ImmutablePoint
{
    public int X { get; }
    public int Y { get; }

    public ImmutablePoint(int x, int y)
    {
        X = x;
        Y = y;
    }
}

public class MutablePoint : ImmutablePoint
{
    // VIOLATION: Adds mutation capability that contradicts
    // the base class's immutability contract
    public new int X { get; set; }
    public new int Y { get; set; }

    public MutablePoint(int x, int y) : base(x, y)
    {
        X = x;
        Y = y;
    }

    public void MoveTo(int newX, int newY)
    {
        X = newX;
        Y = newY;
    }
}
</code></pre>
<p>Code that stores an <code>ImmutablePoint</code> in a dictionary as a key (relying on the fact that <code>X</code> and <code>Y</code> will never change, and therefore the hash code is stable) will corrupt the dictionary if a <code>MutablePoint</code> sneaks in and then gets mutated. The history constraint says this inheritance relationship is invalid because the subtype introduces state transitions that the base type's history forbids.</p>
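<p>The corruption is easiest to see with a point type whose equality and hash code derive from its coordinates, as value-like types usually do. A self-contained sketch (the hypothetical <code>HashedPoint</code>, not the classes above):</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;

public class HashedPoint
{
    public int X { get; set; }
    public int Y { get; set; }

    public override int GetHashCode() =&gt; HashCode.Combine(X, Y);
    public override bool Equals(object? obj) =&gt;
        obj is HashedPoint p &amp;&amp; p.X == X &amp;&amp; p.Y == Y;
}

public static class HistoryDemo
{
    public static void Main()
    {
        var key = new HashedPoint { X = 1, Y = 2 };
        var labels = new Dictionary&lt;HashedPoint, string&gt; { [key] = &quot;start&quot; };

        // Mutating the key changes its hash code; the dictionary
        // stored the old one, so the entry is effectively lost.
        key.X = 99;
        Console.WriteLine(labels.ContainsKey(key)); // False
    }
}
</code></pre>
<p>A type whose contract promises immutability never exposes this trap, which is exactly what the history constraint protects.</p>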
<h3 id="rule-5-exception-compatibility">Rule 5: Exception Compatibility</h3>
<p><strong>A subtype must not throw new exceptions that the base type's contract does not permit.</strong></p>
<p>If the base class method is documented to throw <code>ArgumentException</code> on invalid input and <code>IOException</code> on I/O failure, a subtype should not introduce <code>SecurityException</code> or <code>NotImplementedException</code>. The calling code is prepared to handle certain exceptions; introducing new ones breaks the contract.</p>
<pre><code class="language-csharp">public abstract class FileStore
{
    /// &lt;summary&gt;
    /// Saves data to the store.
    /// Throws IOException if the write fails.
    /// Throws ArgumentNullException if data is null.
    /// &lt;/summary&gt;
    public abstract void Save(byte[] data);
}

public class EncryptedFileStore : FileStore
{
    public override void Save(byte[] data)
    {
        ArgumentNullException.ThrowIfNull(data);

        // VIOLATION: Throws an exception type the base
        // class contract never mentioned
        throw new CryptographicException(
            &quot;Encryption key not configured&quot;);
    }
}
</code></pre>
<p>The fix: either catch the cryptographic failure and rethrow it wrapped in an <code>IOException</code> (preserving the documented contract), or broaden the base class contract to permit more general exceptions, or validate the encryption setup in the constructor so <code>Save</code> never encounters this state.</p>
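<p>A sketch combining the constructor and wrapping options (assuming the key is available at composition time): a successfully constructed store can then always honor the documented <code>Save</code> contract, translating any residual cryptographic failure into the promised <code>IOException</code>.</p>
<pre><code class="language-csharp">public class SafeEncryptedFileStore : FileStore
{
    private readonly byte[] _key;

    public SafeEncryptedFileStore(byte[] encryptionKey)
    {
        // Fail fast at composition time, never inside Save.
        ArgumentNullException.ThrowIfNull(encryptionKey);
        _key = encryptionKey;
    }

    public override void Save(byte[] data)
    {
        ArgumentNullException.ThrowIfNull(data);
        try
        {
            // ... encrypt with _key and write to disk ...
        }
        catch (CryptographicException ex)
        {
            // Translate into the exception type the contract documents.
            throw new IOException(&quot;Encrypted write failed.&quot;, ex);
        }
    }
}
</code></pre>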
<h3 id="signature-rules">Signature Rules</h3>
<p>In addition to the behavioral rules above, LSP also implies structural rules at the type level. C# enforces most of these automatically:</p>
<p><strong>Contravariance of method parameter types in the subtype.</strong> If the base method accepts <code>Animal</code>, the override should accept <code>Animal</code> or a more general type. C# method overriding requires exact parameter type matches, so this is enforced by the compiler.</p>
<p><strong>Covariance of method return types in the subtype.</strong> If the base method returns <code>Animal</code>, the override may return <code>Dog</code> (a more specific type). C# supports covariant return types starting with C# 9 and .NET 5.</p>
<pre><code class="language-csharp">public class Animal { }
public class Dog : Animal { }

public class AnimalShelter
{
    public virtual Animal GetAnimal() =&gt; new Animal();
}

public class DogShelter : AnimalShelter
{
    // Covariant return type — valid in C# 9+
    public override Dog GetAnimal() =&gt; new Dog();
}
</code></pre>
<h2 id="part-4-the-classic-violations-and-why-they-are-wrong">Part 4: The Classic Violations — And Why They Are Wrong</h2>
<p>Every article about LSP mentions the rectangle-square problem. We will cover it here because it is genuinely instructive, but we will also go beyond it into violations you are more likely to encounter in production .NET code.</p>
<h3 id="violation-1-the-rectangle-and-the-square">Violation 1: The Rectangle and the Square</h3>
<p>This is the textbook example, and it illustrates the principle perfectly.</p>
<p>In geometry, a square is a rectangle. Every square has four right angles and four sides, and opposite sides are equal. So it seems natural to model this with inheritance:</p>
<pre><code class="language-csharp">public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }

    public int Area =&gt; Width * Height;
}

public class Square : Rectangle
{
    private int _side;

    public override int Width
    {
        get =&gt; _side;
        set
        {
            _side = value;
            // Must keep Width == Height for a square
        }
    }

    public override int Height
    {
        get =&gt; _side;
        set
        {
            _side = value;
        }
    }
}
</code></pre>
<p>Now consider this code, written against <code>Rectangle</code>:</p>
<pre><code class="language-csharp">public void ResizeAndCheck(Rectangle rect)
{
    rect.Width = 5;
    rect.Height = 10;

    // For a rectangle, Area should be 50
    Debug.Assert(rect.Area == 50);
}
</code></pre>
<p>Pass in a <code>Rectangle</code> — the assertion passes. Pass in a <code>Square</code> — the assertion fails, because setting <code>Height = 10</code> also set <code>Width = 10</code>, so the area is 100.</p>
<p>The problem is not with geometry. The problem is that the <code>Rectangle</code> class has an implicit contract: setting <code>Width</code> does not change <code>Height</code>, and vice versa. The <code>Square</code> subclass violates this postcondition.</p>
<p>The fix: do not make <code>Square</code> inherit from <code>Rectangle</code>. Instead, model them as siblings under a common <code>IShape</code> interface:</p>
<pre><code class="language-csharp">public interface IShape
{
    int Area { get; }
}

public class Rectangle : IShape
{
    public int Width { get; set; }
    public int Height { get; set; }
    public int Area =&gt; Width * Height;
}

public class Square : IShape
{
    public int Side { get; set; }
    public int Area =&gt; Side * Side;
}
</code></pre>
<p>Or, if immutability is acceptable, use immutable value types where the issue disappears entirely:</p>
<pre><code class="language-csharp">public readonly record struct Rectangle(int Width, int Height)
{
    public int Area =&gt; Width * Height;
}

public readonly record struct Square(int Side)
{
    public int Area =&gt; Side * Side;
}
</code></pre>
<h3 id="violation-2-the-read-only-collection-that-is-not">Violation 2: The Read-Only Collection That Is Not</h3>
<p>This one shows up constantly in .NET code:</p>
<pre><code class="language-csharp">public class ReadOnlyRepository&lt;T&gt; : List&lt;T&gt;
{
    public ReadOnlyRepository(IEnumerable&lt;T&gt; items) : base(items) { }

    // &quot;Disable&quot; mutation by throwing
    public new void Add(T item) =&gt;
        throw new NotSupportedException(&quot;Collection is read-only&quot;);

    public new void Remove(T item) =&gt;
        throw new NotSupportedException(&quot;Collection is read-only&quot;);

    public new void Clear() =&gt;
        throw new NotSupportedException(&quot;Collection is read-only&quot;);
}
</code></pre>
<p>This class inherits from <code>List&lt;T&gt;</code>, which has a contract that says &quot;you can add, remove, and clear items.&quot; The <code>new</code> keyword hides the base methods but does not override them. If you cast to <code>List&lt;T&gt;</code> or <code>IList&lt;T&gt;</code>, the original <code>Add</code>, <code>Remove</code>, and <code>Clear</code> methods are still callable. Even if you used <code>override</code> (which you cannot, since <code>List&lt;T&gt;</code> methods are not virtual), throwing <code>NotSupportedException</code> weakens the postcondition — callers of <code>List&lt;T&gt;.Add</code> expect the item to be added, not an exception.</p>
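<p>Two lines are enough to demonstrate the hole. Assuming the <code>ReadOnlyRepository&lt;T&gt;</code> above, an upcast to the base type bypasses the hidden methods entirely:</p>
<pre><code class="language-csharp">var repo = new ReadOnlyRepository&lt;int&gt;(new[] { 1, 2, 3 });

// Through the derived type, the hidden method runs:
// repo.Add(4); // would throw NotSupportedException

// Through the base type, List&lt;T&gt;.Add runs instead:
List&lt;int&gt; asList = repo;  // perfectly legal upcast
asList.Add(4);            // no exception: the &quot;read-only&quot; collection mutates

Console.WriteLine(repo.Count); // 4
</code></pre>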
<p>The fix: do not inherit from <code>List&lt;T&gt;</code>. Instead, expose <code>IReadOnlyList&lt;T&gt;</code> or <code>IReadOnlyCollection&lt;T&gt;</code>:</p>
<pre><code class="language-csharp">public class ReadOnlyRepository&lt;T&gt;
{
    private readonly List&lt;T&gt; _items;

    public ReadOnlyRepository(IEnumerable&lt;T&gt; items)
    {
        _items = new List&lt;T&gt;(items);
    }

    public IReadOnlyList&lt;T&gt; Items =&gt; _items.AsReadOnly();
}
</code></pre>
<p>Or simply use the built-in <code>ReadOnlyCollection&lt;T&gt;</code>, which wraps a list and throws <code>NotSupportedException</code> from its <code>IList&lt;T&gt;</code> implementation. Wait — does that violate LSP? Yes, technically it does. This is why <code>IReadOnlyList&lt;T&gt;</code> was introduced in .NET 4.5 — to provide a <em>separate</em> interface hierarchy that does not promise mutability. The lesson: prefer <code>IReadOnlyList&lt;T&gt;</code> over <code>IList&lt;T&gt;</code> when your type does not support mutation.</p>
<h3 id="violation-3-the-notimplementedexception-anti-pattern">Violation 3: The NotImplementedException Anti-Pattern</h3>
<p>This is perhaps the single most common LSP violation in real codebases:</p>
<pre><code class="language-csharp">public interface IPaymentGateway
{
    void Charge(decimal amount);
    void Refund(decimal amount);
    PaymentStatus CheckStatus(string transactionId);
}

public class BasicPaymentGateway : IPaymentGateway
{
    public void Charge(decimal amount)
    {
        // Implementation...
    }

    public void Refund(decimal amount)
    {
        // This gateway does not support refunds
        throw new NotImplementedException(
            &quot;Refunds are not supported by this gateway&quot;);
    }

    public PaymentStatus CheckStatus(string transactionId)
    {
        // Implementation...
    }
}
</code></pre>
<p>Any code that processes refunds through <code>IPaymentGateway</code> will explode when it encounters <code>BasicPaymentGateway</code>. The interface says &quot;I can refund.&quot; The implementation says &quot;actually, I can't.&quot;</p>
<p>The fix is interface segregation (the &quot;I&quot; in SOLID works hand-in-hand with the &quot;L&quot;):</p>
<pre><code class="language-csharp">public interface IPaymentGateway
{
    void Charge(decimal amount);
    PaymentStatus CheckStatus(string transactionId);
}

public interface IRefundableGateway : IPaymentGateway
{
    void Refund(decimal amount);
}

public class BasicPaymentGateway : IPaymentGateway
{
    public void Charge(decimal amount) { /* ... */ }
    public PaymentStatus CheckStatus(string transactionId) { /* ... */ }
    // No Refund method — no lie
}

public class FullPaymentGateway : IRefundableGateway
{
    public void Charge(decimal amount) { /* ... */ }
    public void Refund(decimal amount) { /* ... */ }
    public PaymentStatus CheckStatus(string transactionId) { /* ... */ }
}
</code></pre>
<p>Now the type system tells the truth. If you need refund capability, accept <code>IRefundableGateway</code>. If you only need charging, accept <code>IPaymentGateway</code>. No runtime surprises.</p>
<h3 id="violation-4-the-derived-class-that-ignores-parameters">Violation 4: The Derived Class That Ignores Parameters</h3>
<pre><code class="language-csharp">public abstract class Logger
{
    public abstract void Log(string message, LogLevel level);
}

public class ConsoleLogger : Logger
{
    public override void Log(string message, LogLevel level)
    {
        // VIOLATION: Ignores log level entirely,
        // always writes to console
        Console.WriteLine(message);
    }
}
</code></pre>
<p>If the base class contract says &quot;messages at <code>LogLevel.None</code> are suppressed,&quot; and <code>ConsoleLogger</code> writes everything regardless, it violates the postcondition. Callers who set <code>LogLevel.None</code> expecting silence will be surprised.</p>
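<p>A compliant sketch (assuming the contract defines <code>LogLevel.None</code> as &quot;suppress&quot; and permits filtering below a configured minimum; the enum's value ordering is an assumption here):</p>
<pre><code class="language-csharp">public class FilteringConsoleLogger : Logger
{
    private readonly LogLevel _minimumLevel;

    public FilteringConsoleLogger(LogLevel minimumLevel)
        =&gt; _minimumLevel = minimumLevel;

    public override void Log(string message, LogLevel level)
    {
        // Honor the documented contract: None is always suppressed,
        // and messages below the configured minimum are filtered
        if (level == LogLevel.None || level &lt; _minimumLevel)
            return;

        Console.WriteLine($&quot;[{level}] {message}&quot;);
    }
}
</code></pre>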
<h3 id="violation-5-temporal-coupling-in-derived-classes">Violation 5: Temporal Coupling in Derived Classes</h3>
<pre><code class="language-csharp">public abstract class DataPipeline
{
    public abstract void Configure(PipelineOptions options);
    public abstract void Execute();
}

public class BatchPipeline : DataPipeline
{
    private PipelineOptions? _options;

    public override void Configure(PipelineOptions options)
    {
        _options = options;
    }

    public override void Execute()
    {
        // VIOLATION: Throws if Configure was not called first,
        // introducing a precondition the base class didn't require
        if (_options is null)
            throw new InvalidOperationException(
                &quot;Must call Configure before Execute&quot;);

        // Process...
    }
}
</code></pre>
<p>If the base class contract does not require calling <code>Configure</code> before <code>Execute</code>, then <code>BatchPipeline</code> has strengthened the precondition. The fix: either document the requirement on the base class (making it a universal precondition) or eliminate the temporal coupling by requiring configuration in the constructor.</p>
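<p>A sketch of the constructor-based redesign, in which configuration becomes a construction requirement and the two-step protocol disappears (assuming callers can supply options up front):</p>
<pre><code class="language-csharp">public abstract class DataPipeline
{
    public abstract void Execute();
}

public class BatchPipeline : DataPipeline
{
    private readonly PipelineOptions _options;

    // Configuration is a construction requirement, so Execute
    // can never run in an unconfigured state
    public BatchPipeline(PipelineOptions options)
    {
        ArgumentNullException.ThrowIfNull(options);
        _options = options;
    }

    public override void Execute()
    {
        // Process using _options; no temporal coupling,
        // no precondition beyond what the base class states
    }
}
</code></pre>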
<h2 id="part-5-lsp-in-the.net-framework-and-runtime">Part 5: LSP in the .NET Framework and Runtime</h2>
<p>The .NET ecosystem itself contains both good examples of LSP adherence and some well-known violations. Understanding where the framework gets it right — and where it does not — will sharpen your instincts.</p>
<h3 id="stream-a-mostly-good-hierarchy">Stream: A Mostly-Good Hierarchy</h3>
<p><code>System.IO.Stream</code> is one of the most widely used abstract classes in .NET. Its subclasses include <code>FileStream</code>, <code>MemoryStream</code>, <code>NetworkStream</code>, <code>GZipStream</code>, <code>CryptoStream</code>, <code>SslStream</code>, and many more. The design handles LSP through capability queries:</p>
<pre><code class="language-csharp">public abstract class Stream
{
    public abstract bool CanRead { get; }
    public abstract bool CanWrite { get; }
    public abstract bool CanSeek { get; }

    public abstract int Read(byte[] buffer, int offset, int count);
    public abstract void Write(byte[] buffer, int offset, int count);
    public abstract long Seek(long offset, SeekOrigin origin);
    // ...
}
</code></pre>
<p>A <code>NetworkStream</code> sets <code>CanSeek</code> to <code>false</code> and throws <code>NotSupportedException</code> from <code>Seek</code>. Is that an LSP violation? It depends on how you define the contract. If the contract of <code>Stream.Seek</code> is &quot;seeks to a position in the stream,&quot; then yes, <code>NetworkStream</code> violates it. But the <em>actual</em> contract, as documented, is &quot;seeks to a position in the stream if <code>CanSeek</code> is <code>true</code>; otherwise throws <code>NotSupportedException</code>.&quot; The capability flags are part of the contract.</p>
<p>This is a pragmatic compromise. Ideally, you would have separate <code>IReadableStream</code>, <code>IWritableStream</code>, and <code>ISeekableStream</code> interfaces (and indeed, newer designs sometimes take this approach). But <code>Stream</code> was designed in .NET 1.0 and must maintain backward compatibility. The capability-flag pattern is the next best thing.</p>
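<p>Consumers of <code>Stream</code> should therefore treat the capability flags as preconditions of their own. A small sketch:</p>
<pre><code class="language-csharp">public static class StreamExtensions
{
    public static long GetRemainingBytes(this Stream stream)
    {
        // Length and Position are only part of the contract
        // when CanSeek is true, so check the capability first
        if (!stream.CanSeek)
            throw new ArgumentException(
                &quot;A seekable stream is required&quot;, nameof(stream));

        return stream.Length - stream.Position;
    }
}
</code></pre>
<p>Called on a <code>MemoryStream</code> this returns a count; called on a <code>NetworkStream</code> it fails loudly at the boundary instead of deep inside <code>Seek</code>.</p>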
<h3 id="icollection-and-ireadonlycollection-a-course-correction">ICollection<T> and IReadOnlyCollection<T>: A Course Correction</h3>
<p>The original <code>ICollection&lt;T&gt;</code> interface (introduced in .NET 2.0) includes <code>Add</code>, <code>Remove</code>, and <code>Clear</code> methods. <code>ReadOnlyCollection&lt;T&gt;</code> implements <code>ICollection&lt;T&gt;</code> and throws <code>NotSupportedException</code> from the mutation methods. This is a well-known LSP weakness in the framework.</p>
<p>.NET 4.5 introduced <code>IReadOnlyCollection&lt;T&gt;</code> and <code>IReadOnlyList&lt;T&gt;</code> as separate interface hierarchies that do not promise mutation. This was an explicit recognition that the original design forced types into LSP violations. Today, the recommendation is:</p>
<ul>
<li>Accept <code>IReadOnlyList&lt;T&gt;</code> or <code>IReadOnlyCollection&lt;T&gt;</code> when you only need to read.</li>
<li>Accept <code>IList&lt;T&gt;</code> or <code>ICollection&lt;T&gt;</code> when you need to mutate.</li>
<li>Return <code>IReadOnlyList&lt;T&gt;</code> from methods that return collections you do not want callers to modify.</li>
</ul>
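<p>Applied to a typical service surface (with a hypothetical <code>Order</code> record), the guidance looks like this:</p>
<pre><code class="language-csharp">public record Order(int Id, decimal Total); // hypothetical entity

public class OrderHistory
{
    private readonly List&lt;Order&gt; _orders = new();

    // Mutation goes through an explicit, intention-revealing method...
    public void Record(Order order) =&gt; _orders.Add(order);

    // ...while readers get an interface that never promised mutation
    public IReadOnlyList&lt;Order&gt; Orders =&gt; _orders.AsReadOnly();
}
</code></pre>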
<h3 id="array-covariance-a-famous-type-hole">Array Covariance: A Famous Type Hole</h3>
<p>C# arrays are covariant, which means you can assign a <code>string[]</code> to an <code>object[]</code> variable:</p>
<pre><code class="language-csharp">object[] objects = new string[3];
objects[0] = &quot;hello&quot;;    // Fine
objects[1] = 42;         // Compiles! But throws ArrayTypeMismatchException at runtime
</code></pre>
<p>This is a genuine LSP violation baked into the language for backward compatibility (inherited from Java's design). An <code>object[]</code> promises &quot;you can put any object in here.&quot; A <code>string[]</code> does not honor that promise. The type system says it is valid; the runtime says otherwise.</p>
<p>This is why generic collections (<code>List&lt;T&gt;</code>) are preferred over arrays for APIs. Generic variance in C# is safe: <code>IEnumerable&lt;out T&gt;</code> is covariant, <code>IComparer&lt;in T&gt;</code> is contravariant, and these are enforced at compile time.</p>
<h2 id="part-6-design-patterns-that-promote-and-violate-lsp">Part 6: Design Patterns That Promote (and Violate) LSP</h2>
<h3 id="patterns-that-help">Patterns That Help</h3>
<p><strong>Strategy Pattern.</strong> The Strategy pattern is a natural fit for LSP. You define an interface, create multiple implementations, and swap them at runtime. As long as each implementation honors the interface contract, LSP is satisfied.</p>
<pre><code class="language-csharp">public interface ISortingStrategy&lt;T&gt;
{
    void Sort(List&lt;T&gt; items, IComparer&lt;T&gt; comparer);
}

public class QuickSortStrategy&lt;T&gt; : ISortingStrategy&lt;T&gt;
{
    public void Sort(List&lt;T&gt; items, IComparer&lt;T&gt; comparer)
    {
        // For brevity this delegates to List&lt;T&gt;.Sort (an introspective
        // sort); a hand-rolled quick sort would honor the same contract
        items.Sort(comparer);
    }
}

public class BubbleSortStrategy&lt;T&gt; : ISortingStrategy&lt;T&gt;
{
    public void Sort(List&lt;T&gt; items, IComparer&lt;T&gt; comparer)
    {
        // Bubble sort implementation
        for (int i = 0; i &lt; items.Count - 1; i++)
        {
            for (int j = 0; j &lt; items.Count - 1 - i; j++)
            {
                if (comparer.Compare(items[j], items[j + 1]) &gt; 0)
                {
                    (items[j], items[j + 1]) = (items[j + 1], items[j]);
                }
            }
        }
    }
}
</code></pre>
<p>Both strategies sort the list. The result is the same (a sorted list). The performance differs, but the postcondition is identical. LSP is preserved.</p>
<p><strong>Template Method Pattern.</strong> When you define an algorithm's skeleton in a base class and let subclasses override specific steps, LSP is maintained as long as the overridden steps honor their contracts. The base class controls the overall flow; subclasses customize the details.</p>
<pre><code class="language-csharp">public abstract class ReportGenerator
{
    // Template method — not virtual
    public string Generate(ReportData data)
    {
        var header = BuildHeader(data);
        var body = BuildBody(data);
        var footer = BuildFooter(data);
        return $&quot;{header}\n{body}\n{footer}&quot;;
    }

    protected abstract string BuildHeader(ReportData data);
    protected abstract string BuildBody(ReportData data);
    protected virtual string BuildFooter(ReportData data)
        =&gt; $&quot;Generated at {DateTime.UtcNow:u}&quot;;
}
</code></pre>
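<p>A concrete subclass fills in the steps but cannot reorder the algorithm, which is what keeps the base contract intact. The sketch below assumes <code>ReportData</code> exposes a <code>Title</code> and a collection of <code>Lines</code>; adjust to the real shape of the type:</p>
<pre><code class="language-csharp">public class PlainTextReportGenerator : ReportGenerator
{
    protected override string BuildHeader(ReportData data)
        =&gt; $&quot;=== {data.Title} ===&quot;;

    protected override string BuildBody(ReportData data)
        =&gt; string.Join(&quot;\n&quot;, data.Lines);

    // BuildFooter is left at the base class default, a valid
    // choice because the virtual step ships usable behavior
}
</code></pre>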
<p><strong>Decorator Pattern.</strong> Decorators wrap an existing object to add behavior. Because the decorator implements the same interface and delegates to the wrapped object, LSP is naturally preserved:</p>
<pre><code class="language-csharp">public interface IMessageSender
{
    Task SendAsync(string recipient, string body);
}

public class EmailSender : IMessageSender
{
    public async Task SendAsync(string recipient, string body)
    {
        // Send email...
        await Task.CompletedTask;
    }
}

public class LoggingMessageSender : IMessageSender
{
    private readonly IMessageSender _inner;
    private readonly ILogger _logger;

    public LoggingMessageSender(IMessageSender inner, ILogger logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public async Task SendAsync(string recipient, string body)
    {
        _logger.LogInformation(&quot;Sending message to {Recipient}&quot;, recipient);
        await _inner.SendAsync(recipient, body);
        _logger.LogInformation(&quot;Message sent to {Recipient}&quot;, recipient);
    }
}
</code></pre>
<h3 id="patterns-that-risk-violations">Patterns That Risk Violations</h3>
<p><strong>Adapter Pattern (when misused).</strong> Adapters translate one interface to another. If the adapted interface does not fully support the target interface's contract, the adapter will violate LSP. For example, adapting a key-value store (which supports only <code>Get</code> and <code>Put</code>) to a full <code>IDatabase</code> interface (which includes <code>Transaction</code>, <code>Rollback</code>, and <code>Query</code>) will likely produce <code>NotImplementedException</code> stubs.</p>
<p><strong>Null Object Pattern (when lazy).</strong> The Null Object pattern provides a do-nothing implementation to avoid null checks. This is fine when the contract permits no-ops (e.g., a <code>NullLogger</code> that silently discards messages). It is an LSP violation when the contract requires meaningful action (e.g., a <code>NullRepository</code> that claims to save data but does not).</p>
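<p>The distinction is easiest to see in code. Here is a legitimate null object, sketched against a minimal hypothetical logging interface whose contract explicitly permits discarding messages:</p>
<pre><code class="language-csharp">public interface ILog
{
    // Contract: delivery is best-effort; messages MAY be discarded
    void Write(string message);
}

public sealed class NullLog : ILog
{
    public static readonly NullLog Instance = new();

    // A no-op is a legal implementation precisely because the
    // contract allows discarding; a NullRepository whose contract
    // promised persistence would not get the same pass
    public void Write(string message) { }
}
</code></pre>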
<h2 id="part-7-lsp-and-dependency-injection-in-asp.net-core">Part 7: LSP and Dependency Injection in ASP.NET Core</h2>
<p>Dependency injection (DI) is the standard approach in modern ASP.NET Core applications, and LSP is the principle that makes DI work safely. When you register a service in the DI container:</p>
<pre><code class="language-csharp">builder.Services.AddScoped&lt;IOrderService, OrderService&gt;();
</code></pre>
<p>You are telling the framework: &quot;Wherever someone asks for <code>IOrderService</code>, give them an <code>OrderService</code>.&quot; This is only safe if <code>OrderService</code> is a valid behavioral subtype of <code>IOrderService</code> — i.e., it honors every contract the interface promises.</p>
<h3 id="a-real-world-di-scenario">A Real-World DI Scenario</h3>
<p>Imagine a notification service with multiple implementations:</p>
<pre><code class="language-csharp">public interface INotificationService
{
    /// &lt;summary&gt;
    /// Sends a notification to the specified user.
    /// Returns true if the notification was delivered, false otherwise.
    /// Never throws on delivery failure — returns false instead.
    /// &lt;/summary&gt;
    Task&lt;bool&gt; NotifyAsync(string userId, string message);
}

public class EmailNotificationService : INotificationService
{
    private readonly IEmailClient _emailClient;

    public EmailNotificationService(IEmailClient emailClient)
    {
        _emailClient = emailClient;
    }

    public async Task&lt;bool&gt; NotifyAsync(string userId, string message)
    {
        try
        {
            await _emailClient.SendAsync(userId, &quot;Notification&quot;, message);
            return true;
        }
        catch (Exception)
        {
            return false; // Honors the &quot;never throws&quot; contract
        }
    }
}

public class SmsNotificationService : INotificationService
{
    private readonly ISmsGateway _gateway;

    public SmsNotificationService(ISmsGateway gateway)
    {
        _gateway = gateway;
    }

    public async Task&lt;bool&gt; NotifyAsync(string userId, string message)
    {
        try
        {
            var phone = await LookupPhoneNumber(userId);
            await _gateway.SendSmsAsync(phone, message);
            return true;
        }
        catch (Exception)
        {
            return false; // Honors the &quot;never throws&quot; contract
        }
    }

    private Task&lt;string&gt; LookupPhoneNumber(string userId)
    {
        // Lookup implementation...
        return Task.FromResult(&quot;+1234567890&quot;);
    }
}
</code></pre>
<p>Both implementations honor the contract: they return a <code>bool</code> and never throw on delivery failure. You can swap between them in <code>Program.cs</code> and the rest of the application works unchanged. That is LSP in action.</p>
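<p>The swap itself is a one-line change at the composition root. A minimal <code>Program.cs</code> sketch:</p>
<pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);

// Swap the implementation here; no consumer changes are needed,
// because both types are behavioral subtypes of the interface
builder.Services.AddScoped&lt;INotificationService, EmailNotificationService&gt;();
// builder.Services.AddScoped&lt;INotificationService, SmsNotificationService&gt;();

var app = builder.Build();
app.Run();
</code></pre>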
<p>Now consider a broken implementation:</p>
<pre><code class="language-csharp">public class PushNotificationService : INotificationService
{
    public async Task&lt;bool&gt; NotifyAsync(string userId, string message)
    {
        // VIOLATION: Throws instead of returning false
        var token = await GetPushToken(userId)
            ?? throw new InvalidOperationException(
                $&quot;No push token for user {userId}&quot;);

        await SendPush(token, message);
        return true;
    }

    // ...
}
</code></pre>
<p>This violates the &quot;never throws on delivery failure&quot; postcondition. Any calling code that does not expect an exception from <code>NotifyAsync</code> will fail. Register this in DI, and you have a production bug waiting to happen.</p>
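<p>A compliant rewrite treats a missing token as what it is under this contract, a delivery failure reported through the return value (the private helpers below are placeholder stubs):</p>
<pre><code class="language-csharp">public class PushNotificationService : INotificationService
{
    public async Task&lt;bool&gt; NotifyAsync(string userId, string message)
    {
        try
        {
            var token = await GetPushToken(userId);
            if (token is null)
                return false; // missing token is a delivery failure, not an exception

            await SendPush(token, message);
            return true;
        }
        catch (Exception)
        {
            return false; // honors the &quot;never throws&quot; contract
        }
    }

    private Task&lt;string?&gt; GetPushToken(string userId)
        =&gt; Task.FromResult&lt;string?&gt;(null); // placeholder stub

    private Task SendPush(string token, string message)
        =&gt; Task.CompletedTask; // placeholder stub
}
</code></pre>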
<h3 id="testing-for-lsp-in-di-scenarios">Testing for LSP in DI Scenarios</h3>
<p>A useful testing pattern: write contract tests against the interface and run them for every registered implementation.</p>
<pre><code class="language-csharp">public abstract class NotificationServiceContractTests
{
    protected abstract INotificationService CreateService();

    [Fact]
    public async Task NotifyAsync_WithValidInput_ReturnsBoolean()
    {
        var service = CreateService();
        var result = await service.NotifyAsync(&quot;user-1&quot;, &quot;Hello&quot;);
        Assert.IsType&lt;bool&gt;(result);
    }

    [Fact]
    public async Task NotifyAsync_NeverThrowsOnDeliveryFailure()
    {
        var service = CreateService();

        // This should not throw, even if delivery fails
        var exception = await Record.ExceptionAsync(
            () =&gt; service.NotifyAsync(&quot;nonexistent-user&quot;, &quot;Hello&quot;));

        Assert.Null(exception);
    }

    [Fact]
    public async Task NotifyAsync_WithNullUserId_ThrowsArgumentNullException()
    {
        var service = CreateService();

        await Assert.ThrowsAsync&lt;ArgumentNullException&gt;(
            () =&gt; service.NotifyAsync(null!, &quot;Hello&quot;));
    }
}

public class EmailNotificationServiceTests : NotificationServiceContractTests
{
    protected override INotificationService CreateService()
    {
        var mockClient = new MockEmailClient();
        return new EmailNotificationService(mockClient);
    }
}

public class SmsNotificationServiceTests : NotificationServiceContractTests
{
    protected override INotificationService CreateService()
    {
        var mockGateway = new MockSmsGateway();
        return new SmsNotificationService(mockGateway);
    }
}
</code></pre>
<p>If <code>PushNotificationService</code> fails <code>NotifyAsync_NeverThrowsOnDeliveryFailure</code>, you have caught the LSP violation before it reaches production.</p>
<h2 id="part-8-lsp-and-generics-in-c">Part 8: LSP and Generics in C#</h2>
<p>C# generics interact with LSP in subtle ways, especially around variance.</p>
<h3 id="covariance-out">Covariance (out)</h3>
<p><code>IEnumerable&lt;out T&gt;</code> is covariant. This means <code>IEnumerable&lt;Dog&gt;</code> is substitutable for <code>IEnumerable&lt;Animal&gt;</code> — which is safe because <code>IEnumerable&lt;T&gt;</code> only <em>produces</em> values of type <code>T</code>, it never <em>consumes</em> them. The consumer receives objects that are at least as specific as <code>Animal</code>, so all <code>Animal</code> operations work.</p>
<pre><code class="language-csharp">IEnumerable&lt;Dog&gt; dogs = new List&lt;Dog&gt; { new Dog(&quot;Rex&quot;), new Dog(&quot;Buddy&quot;) };
IEnumerable&lt;Animal&gt; animals = dogs; // Safe — covariance

foreach (Animal animal in animals)
{
    Console.WriteLine(animal.Name); // Works — Dog IS-A Animal
}
</code></pre>
<h3 id="contravariance-in">Contravariance (in)</h3>
<p><code>IComparer&lt;in T&gt;</code> is contravariant. This means <code>IComparer&lt;Animal&gt;</code> is substitutable for <code>IComparer&lt;Dog&gt;</code> — which is safe because a comparer that can compare any two animals can certainly compare two dogs.</p>
<pre><code class="language-csharp">IComparer&lt;Animal&gt; animalComparer = new AnimalByNameComparer();
IComparer&lt;Dog&gt; dogComparer = animalComparer; // Safe — contravariance

var dogs = new List&lt;Dog&gt; { new Dog(&quot;Rex&quot;), new Dog(&quot;Buddy&quot;) };
dogs.Sort(dogComparer); // Works — the comparer can handle Dogs
</code></pre>
<h3 id="invariance-and-the-trouble-with-mutable-collections">Invariance and the Trouble with Mutable Collections</h3>
<p><code>IList&lt;T&gt;</code> is invariant — <code>IList&lt;Dog&gt;</code> is not assignable to <code>IList&lt;Animal&gt;</code>. This is correct! If it were covariant:</p>
<pre><code class="language-csharp">// Hypothetical (does not compile, and for good reason):
IList&lt;Animal&gt; animals = new List&lt;Dog&gt;();
animals.Add(new Cat()); // A Cat in a List&lt;Dog&gt; — disaster!
</code></pre>
<p>Invariance protects LSP. The type system prevents you from creating a situation where a collection promises to accept any <code>Animal</code> but can actually only hold <code>Dog</code> instances.</p>
<h3 id="generic-constraints-and-lsp">Generic Constraints and LSP</h3>
<p>When you write generic constraints, you are defining contracts:</p>
<pre><code class="language-csharp">public class Repository&lt;T&gt; where T : IEntity, new()
{
    public T Create()
    {
        var entity = new T();
        entity.Id = Guid.NewGuid();
        return entity;
    }
}
</code></pre>
<p>The constraint <code>where T : IEntity, new()</code> ensures that any type used with <code>Repository&lt;T&gt;</code> satisfies LSP relative to <code>IEntity</code>: it has an <code>Id</code> property and a parameterless constructor. The generic constraint is a compile-time LSP check.</p>
<h2 id="part-9-lsp-beyond-inheritance-interfaces-records-and-composition">Part 9: LSP Beyond Inheritance — Interfaces, Records, and Composition</h2>
<p>A common misconception: LSP only applies to class inheritance. In fact, LSP applies to any subtyping relationship, including interface implementation, and even to any situation where one component can be substituted for another.</p>
<h3 id="interfaces-and-lsp">Interfaces and LSP</h3>
<p>When a class implements an interface, it enters into an LSP contract. Every implementation of <code>IDisposable.Dispose()</code> must be safe to call multiple times (the documented contract). Every implementation of <code>IEquatable&lt;T&gt;.Equals</code> must be reflexive, symmetric, and transitive. These are behavioral contracts, and violating them is an LSP violation.</p>
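<p>The <code>Dispose</code> contract, for instance, is honored with a simple idempotence guard:</p>
<pre><code class="language-csharp">public sealed class TempResource : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        // Contract: Dispose must be safe to call multiple times
        if (_disposed)
            return;

        _disposed = true;
        // release the underlying resource exactly once here
    }
}
</code></pre>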
<h3 id="records-and-lsp">Records and LSP</h3>
<p>C# records support inheritance:</p>
<pre><code class="language-csharp">public abstract record Shape(string Color);
public record Circle(string Color, double Radius) : Shape(Color);
public record Rectangle(string Color, double Width, double Height) : Shape(Color);
</code></pre>
<p>Records automatically generate <code>Equals</code>, <code>GetHashCode</code>, <code>ToString</code>, and copy constructors. The generated <code>Equals</code> considers all properties, including those introduced in derived records. This is generally LSP-safe because the generated behavior is consistent with the declared properties.</p>
<p>However, be careful with <code>with</code> expressions and polymorphism:</p>
<pre><code class="language-csharp">Shape shape = new Circle(&quot;Red&quot;, 5.0);
Shape modified = shape with { Color = &quot;Blue&quot; };
// modified is a Circle with Color=&quot;Blue&quot; and Radius=5.0
// The runtime type is preserved — LSP is maintained
</code></pre>
<h3 id="composition-over-inheritance-the-lsp-escape-hatch">Composition Over Inheritance: The LSP Escape Hatch</h3>
<p>When you find yourself struggling to make an inheritance hierarchy LSP-compliant, it is often a sign that inheritance is the wrong tool. Composition — building complex objects by combining simpler ones — avoids the fragile-base-class problem: there is no inherited implementation to contradict, and the only contract left to honor is the interface, which a delegating wrapper satisfies almost by construction.</p>
<pre><code class="language-csharp">// Instead of:
public class LoggedRepository : Repository  // Fragile, LSP-risky
{
    // Override every method to add logging...
}

// Prefer:
public class LoggedRepository : IRepository  // No base class to contradict; only the interface contract to honor
{
    private readonly IRepository _inner;
    private readonly ILogger _logger;

    public LoggedRepository(IRepository inner, ILogger logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public async Task&lt;Entity&gt; GetByIdAsync(Guid id)
    {
        _logger.LogInformation(&quot;Fetching entity {Id}&quot;, id);
        return await _inner.GetByIdAsync(id);
    }

    // Delegate all methods to _inner, adding logging as needed
}
</code></pre>
<p>This is not an argument against inheritance — it is an argument for being deliberate about when to use it. Use inheritance when the &quot;is-a&quot; relationship is genuine and the base class contract is stable. Use composition when you want to add behavior without taking on the obligations of a subtyping contract.</p>
<h2 id="part-10-detecting-lsp-violations">Part 10: Detecting LSP Violations</h2>
<p>How do you find LSP violations in an existing codebase? Here are concrete techniques.</p>
<h3 id="technique-1-search-for-notimplementedexception-and-notsupportedexception">Technique 1: Search for NotImplementedException and NotSupportedException</h3>
<p>Run this in your project:</p>
<pre><code class="language-bash">grep -rn &quot;NotImplementedException\|NotSupportedException&quot; --include=&quot;*.cs&quot; .
</code></pre>
<p>Every hit is a potential LSP violation. Not every one will be — <code>Stream</code> subclasses that throw from <code>Seek</code> when <code>CanSeek</code> is <code>false</code> are contractually valid — but each one deserves scrutiny.</p>
<h3 id="technique-2-search-for-type-checks-in-consumer-code">Technique 2: Search for Type Checks in Consumer Code</h3>
<pre><code class="language-bash">grep -rn &quot;is \|as \|GetType()\|typeof(&quot; --include=&quot;*.cs&quot; .
</code></pre>
<p>The search is noisy (<code>is</code> and <code>as</code> appear in ordinary prose and pattern matching too), but code that checks the runtime type of an object before deciding what to do is often working around an LSP violation:</p>
<pre><code class="language-csharp">// This is a code smell — the caller should not need to know the subtype
public decimal CalculateFee(IAccount account)
{
    if (account is PremiumAccount)
        return 0m;
    if (account is OverdraftAccount overdraft)
        return overdraft.OverdraftFee;
    return 5.00m;
}
</code></pre>
<p>The fix: push the fee calculation into the type hierarchy:</p>
<pre><code class="language-csharp">public interface IAccount
{
    decimal CalculateFee();
}

public class StandardAccount : IAccount
{
    public decimal CalculateFee() =&gt; 5.00m;
}

public class PremiumAccount : IAccount
{
    public decimal CalculateFee() =&gt; 0m;
}

public class OverdraftAccount : IAccount
{
    public decimal OverdraftFee { get; init; }
    public decimal CalculateFee() =&gt; OverdraftFee;
}
</code></pre>
<h3 id="technique-3-contract-tests">Technique 3: Contract Tests</h3>
<p>As shown in Part 7, write abstract test classes that define the expected behavior of an interface, then inherit from them for each implementation. If a new implementation fails a contract test, you have found an LSP violation before it ships.</p>
<h3 id="technique-4-code-analysis-and-roslyn-analyzers">Technique 4: Code Analysis and Roslyn Analyzers</h3>
<p>While there is no built-in Roslyn analyzer specifically for LSP, you can write custom analyzers that flag common patterns:</p>
<ul>
<li>Methods that throw <code>NotImplementedException</code></li>
<li>Override methods that throw exceptions the base class does not declare</li>
<li>Override methods with <code>if (someCondition) throw</code> at the top (strengthened preconditions)</li>
<li>Classes that implement an interface but <code>new</code>-hide methods instead of implementing them</li>
</ul>
<h3 id="technique-5-review-virtual-method-overrides">Technique 5: Review Virtual Method Overrides</h3>
<p>During code review, pay special attention to every <code>override</code> keyword. Ask:</p>
<ol>
<li>Does this override accept all inputs the base method accepts?</li>
<li>Does this override produce all outputs the base method promises?</li>
<li>Does this override maintain all invariants the base class establishes?</li>
<li>Does this override throw only exceptions the base class allows?</li>
</ol>
<p>If the answer to any question is &quot;no,&quot; you have found a violation.</p>
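<p>For example, question 1 catches strengthened preconditions like this one (the document-store types are hypothetical):</p>
<pre><code class="language-csharp">using System;

public class DocumentStore
{
    // Contract: any non-null name is accepted.
    public virtual void Save(string name, byte[] content)
    {
        ArgumentNullException.ThrowIfNull(name);
        // ...persist...
    }
}

public class LegacyDocumentStore : DocumentStore
{
    public override void Save(string name, byte[] content)
    {
        // Violates question 1: this override rejects inputs the base accepts.
        if (name.Length &gt; 8)
            throw new ArgumentException(&quot;Legacy store requires names of 8 characters or fewer.&quot;);
        base.Save(name, content);
    }
}
</code></pre>
<p>Any caller that was happily saving <code>&quot;quarterly-report&quot;</code> through the base type now blows up when the legacy implementation arrives at runtime.</p>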
<h2 id="part-11-lsp-and-the-other-solid-principles">Part 11: LSP and the Other SOLID Principles</h2>
<p>LSP does not exist in isolation. It interacts with every other SOLID principle.</p>
<h3 id="single-responsibility-principle-srp-and-lsp">Single Responsibility Principle (SRP) and LSP</h3>
<p>A class with too many responsibilities is harder to subtype correctly, because the subclass must honor contracts across all those responsibilities. Keeping classes focused (SRP) makes LSP compliance easier.</p>
<h3 id="openclosed-principle-ocp-and-lsp">Open/Closed Principle (OCP) and LSP</h3>
<p>OCP says: &quot;open for extension, closed for modification.&quot; LSP says: &quot;extensions must honor the base contract.&quot; Together they mean: you can add new behavior through subtyping, but only if the new type is a valid substitute for the base type. OCP tells you <em>to</em> extend; LSP tells you <em>how</em> to extend safely.</p>
<h3 id="interface-segregation-principle-isp-and-lsp">Interface Segregation Principle (ISP) and LSP</h3>
<p>ISP says: &quot;don't force implementations to depend on methods they don't use.&quot; When interfaces are bloated, implementors are tempted to throw <code>NotImplementedException</code> from methods they cannot meaningfully implement — which violates LSP. Segregating interfaces into smaller, focused ones makes it possible for every implementor to honor the full contract.</p>
<p>As we saw with the payment gateway example: splitting <code>IPaymentGateway</code> into <code>IPaymentGateway</code> and <code>IRefundableGateway</code> simultaneously satisfies ISP and LSP.</p>
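<p>In condensed form, the split looks something like this (member shapes are illustrative, not the earlier example's exact definitions):</p>
<pre><code class="language-csharp">using System;
using System.Threading.Tasks;

public interface IPaymentGateway
{
    Task&lt;string&gt; ChargeAsync(decimal amount);  // returns a transaction id
}

// Refund support is a separate, opt-in capability:
public interface IRefundableGateway : IPaymentGateway
{
    Task RefundAsync(string transactionId, decimal amount);
}

public sealed class CardGateway : IRefundableGateway
{
    public Task&lt;string&gt; ChargeAsync(decimal amount) =&gt;
        Task.FromResult(Guid.NewGuid().ToString(&quot;N&quot;));

    public Task RefundAsync(string transactionId, decimal amount) =&gt;
        Task.CompletedTask;
}

public sealed class GiftCardGateway : IPaymentGateway
{
    // No RefundAsync stub throwing NotImplementedException — the refund
    // capability is simply absent from this type's contract.
    public Task&lt;string&gt; ChargeAsync(decimal amount) =&gt;
        Task.FromResult(Guid.NewGuid().ToString(&quot;N&quot;));
}
</code></pre>
<p>Every implementor of every interface can now honor the full contract it claims, which is ISP and LSP satisfied at once.</p>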
<h3 id="dependency-inversion-principle-dip-and-lsp">Dependency Inversion Principle (DIP) and LSP</h3>
<p>DIP says: &quot;depend on abstractions, not concretions.&quot; LSP says: &quot;those abstractions are only useful if all implementations honor their contracts.&quot; DIP without LSP is just indirection for indirection's sake — you depend on an interface, but the implementations behind it behave unpredictably. LSP makes DIP trustworthy.</p>
<h2 id="part-12-lsp-in-functional-and-hybrid-styles">Part 12: LSP in Functional and Hybrid Styles</h2>
<p>Modern C# is increasingly functional, with pattern matching, records, expression-bodied members, and LINQ everywhere. Does LSP still matter when you are writing functional-style code?</p>
<p>Yes, but the vocabulary changes.</p>
<p>In functional programming, the equivalent of LSP is that functions with the same type signature should be interchangeable if they are used in the same context. A <code>Func&lt;int, int&gt;</code> that represents &quot;double the input&quot; and a <code>Func&lt;int, int&gt;</code> that represents &quot;square the input&quot; are both valid substitutions in any context that accepts <code>Func&lt;int, int&gt;</code> — as long as the calling code does not depend on specific behavior beyond &quot;takes an int, returns an int.&quot;</p>
<p>Higher-order functions rely on LSP implicitly:</p>
<pre><code class="language-csharp">public IEnumerable&lt;T&gt; Filter&lt;T&gt;(
    IEnumerable&lt;T&gt; source,
    Func&lt;T, bool&gt; predicate)
{
    foreach (var item in source)
    {
        if (predicate(item))
            yield return item;
    }
}
</code></pre>
<p>This works with <em>any</em> predicate because the contract of <code>Func&lt;T, bool&gt;</code> is simply &quot;takes a <code>T</code>, returns a <code>bool</code>.&quot; A predicate that throws half the time, or that has side effects like deleting files, technically satisfies the type signature but violates the implicit behavioral contract of &quot;a pure test function.&quot;</p>
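<p>To make that concrete, here is the same <code>Filter</code> shape fed a pure predicate and an impure one. The types line up in both cases; only the second breaks the implicit contract, and laziness makes the breakage unpredictable:</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;
using System.Linq;

// A pure predicate honors the implicit contract: same input, same answer.
Func&lt;int, bool&gt; isEven = n =&gt; n % 2 == 0;
Console.WriteLine(string.Join(&quot;,&quot;, Filter(new[] { 1, 2, 3, 4 }, isEven))); // 2,4

// Type-checks identically, but smuggles in a side effect:
var touched = new List&lt;int&gt;();
Func&lt;int, bool&gt; impure = n =&gt; { touched.Add(n); return n % 2 == 0; };

// Because Filter is lazy, the side effect fires zero, one, or many times,
// depending on how the caller enumerates — behavior the signature never hints at.
var result = Filter(new[] { 1, 2, 3, 4 }, impure);
Console.WriteLine(touched.Count); // 0 — nothing has run yet
result.ToList();
result.ToList();
Console.WriteLine(touched.Count); // 8 — the predicate ran twice per item

static IEnumerable&lt;T&gt; Filter&lt;T&gt;(IEnumerable&lt;T&gt; source, Func&lt;T, bool&gt; predicate)
{
    foreach (var item in source)
        if (predicate(item))
            yield return item;
}
</code></pre>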
<h3 id="discriminated-unions-and-exhaustive-matching">Discriminated Unions and Exhaustive Matching</h3>
<p>When you model variants with a closed hierarchy and pattern matching, LSP is satisfied by construction — every variant is known and every case is handled:</p>
<pre><code class="language-csharp">public abstract record PaymentResult;
public record PaymentSucceeded(string TransactionId) : PaymentResult;
public record PaymentFailed(string Reason) : PaymentResult;
public record PaymentPending(string CheckUrl) : PaymentResult;

public string Describe(PaymentResult result) =&gt; result switch
{
    PaymentSucceeded s =&gt; $&quot;Paid! Transaction: {s.TransactionId}&quot;,
    PaymentFailed f =&gt; $&quot;Failed: {f.Reason}&quot;,
    PaymentPending p =&gt; $&quot;Pending. Check at: {p.CheckUrl}&quot;,
    _ =&gt; throw new UnreachableException()
};
</code></pre>
<p>Each variant is a valid substitution for <code>PaymentResult</code>. The exhaustive <code>switch</code> ensures every variant is handled. This is LSP-by-design.</p>
<h2 id="part-13-common-pitfalls-and-how-to-avoid-them">Part 13: Common Pitfalls and How to Avoid Them</h2>
<h3 id="pitfall-1-confusing-is-a-in-the-real-world-with-is-a-in-code">Pitfall 1: Confusing &quot;Is-A&quot; in the Real World with &quot;Is-A&quot; in Code</h3>
<p>A square <em>is</em> a rectangle in geometry. An ostrich <em>is</em> a bird in biology. But that does not mean <code>Square</code> should inherit from <code>Rectangle</code>, or <code>Ostrich</code> should inherit from <code>Bird</code> if <code>Bird</code> has a <code>Fly()</code> method.</p>
<p>The &quot;is-a&quot; relationship in code means &quot;can be substituted for.&quot; Ask the substitution question, not the taxonomy question: &quot;Can I use a <code>Square</code> everywhere I use a <code>Rectangle</code> without changing behavior?&quot; If the answer is no, do not use inheritance.</p>
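<p>Sketched in code, the substitution question fails like this:</p>
<pre><code class="language-csharp">using System;

public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
}

public class Square : Rectangle
{
    // Preserving &quot;all sides equal&quot; forces each setter to change the other,
    // silently breaking Rectangle's implicit &quot;setters are independent&quot; contract.
    public override int Width
    {
        get =&gt; base.Width;
        set { base.Width = value; base.Height = value; }
    }

    public override int Height
    {
        get =&gt; base.Height;
        set { base.Width = value; base.Height = value; }
    }
}

public static class AreaDemo
{
    // Any caller written against Rectangle's contract misbehaves with a Square:
    public static int Area(Rectangle r)
    {
        r.Width = 4;
        r.Height = 5;
        return r.Width * r.Height; // a Rectangle promises 20; a Square returns 25
    }
}
</code></pre>
<p>The taxonomy says &quot;is-a&quot;; the behavior says &quot;is not substitutable&quot; — and the behavior is what the compiler, the tests, and your callers actually depend on.</p>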
<h3 id="pitfall-2-inheriting-for-code-reuse-not-substitutability">Pitfall 2: Inheriting for Code Reuse, Not Substitutability</h3>
<p>Inheritance is often used as a code reuse mechanism: &quot;I need these five methods from <code>BaseService</code>, so I will inherit from it.&quot; But inheritance creates a subtyping relationship, and now your class must honor the entire contract of <code>BaseService</code>. If you only want code reuse, use composition:</p>
<pre><code class="language-csharp">// Don't do this:
public class SpecialOrderService : OrderService { }

// Do this instead:
public class SpecialOrderService
{
    private readonly OrderService _orderService;

    public SpecialOrderService(OrderService orderService)
    {
        _orderService = orderService;
    }

    // Delegate only the members you actually need (PlaceOrder is illustrative):
    public void PlaceOrder(Order order) =&gt; _orderService.PlaceOrder(order);
}
</code></pre>
<h3 id="pitfall-3-sealing-too-late">Pitfall 3: Sealing Too Late</h3>
<p>If a class is not designed for inheritance, seal it. C# classes are unsealed by default, which invites subtyping. If your class has implicit contracts that are not documented (like &quot;setting <code>Width</code> does not change <code>Height</code>&quot;), a subclass will eventually violate them.</p>
<pre><code class="language-csharp">public sealed class Configuration
{
    public string ConnectionString { get; init; } = &quot;&quot;;
    public int MaxRetries { get; init; } = 3;
}
</code></pre>
<p>Modern .NET runtimes also optimize sealed classes more aggressively (calls through them can be devirtualized), so sealing is a performance win as well as a design safeguard.</p>
<h3 id="pitfall-4-not-documenting-contracts">Pitfall 4: Not Documenting Contracts</h3>
<p>LSP violations often stem from undocumented contracts. If the only way to know that <code>Dispose()</code> must be idempotent is to read the implementation, some future implementor will get it wrong.</p>
<p>Use XML documentation comments to document preconditions, postconditions, and invariants:</p>
<pre><code class="language-csharp">public interface ICache&lt;TKey, TValue&gt; where TKey : notnull
{
    /// &lt;summary&gt;
    /// Retrieves a value from the cache.
    /// &lt;/summary&gt;
    /// &lt;param name=&quot;key&quot;&gt;The cache key. Must not be null.&lt;/param&gt;
    /// &lt;returns&gt;
    /// The cached value, or default(TValue) if the key is not found.
    /// Never throws on a missing key.
    /// &lt;/returns&gt;
    TValue? Get(TKey key);

    /// &lt;summary&gt;
    /// Adds or updates a value in the cache.
    /// &lt;/summary&gt;
    /// &lt;param name=&quot;key&quot;&gt;The cache key. Must not be null.&lt;/param&gt;
    /// &lt;param name=&quot;value&quot;&gt;The value to cache. May be null.&lt;/param&gt;
    /// &lt;remarks&gt;
    /// Postcondition: After Set returns, Get(key) returns value
    /// (or an equivalent, if the cache performs serialization).
    /// &lt;/remarks&gt;
    void Set(TKey key, TValue value);
}
</code></pre>
<h3 id="pitfall-5-ignoring-lsp-in-test-doubles">Pitfall 5: Ignoring LSP in Test Doubles</h3>
<p>Mocks and stubs are subtype implementations used in tests. If your mock violates the contract of the interface it implements, your tests may pass even when the production implementation has bugs — or your tests may fail for reasons unrelated to the code under test.</p>
<pre><code class="language-csharp">// BAD mock: violates the contract that Get never throws on missing key
public class BadMockCache : ICache&lt;string, string&gt;
{
    public string? Get(string key) =&gt;
        throw new KeyNotFoundException(); // Contract says: return default, don't throw

    public void Set(string key, string value) { }
}

// GOOD mock: honors the contract
public class GoodMockCache : ICache&lt;string, string&gt;
{
    private readonly Dictionary&lt;string, string&gt; _store = new();

    public string? Get(string key) =&gt;
        _store.TryGetValue(key, out var value) ? value : default;

    public void Set(string key, string value) =&gt;
        _store[key] = value;
}
</code></pre>
<h2 id="part-14-a-practical-checklist">Part 14: A Practical Checklist</h2>
<p>When designing a new class hierarchy or implementing an interface, run through this checklist:</p>
<p><strong>Before writing the subtype:</strong></p>
<ol>
<li>Have I documented the preconditions, postconditions, and invariants of the base type or interface?</li>
<li>Is the &quot;is-a&quot; relationship genuine in the behavioral sense, not just the taxonomic sense?</li>
<li>Could I achieve my goal with composition instead of inheritance?</li>
<li>If I am inheriting from a concrete class, is it designed for inheritance (not sealed, virtual methods documented)?</li>
</ol>
<p><strong>While writing the subtype:</strong></p>
<ol start="5">
<li>Do all overridden methods accept <em>at least</em> the same range of inputs as the base?</li>
<li>Do all overridden methods produce <em>at least</em> the same guarantees on output as the base?</li>
<li>Do I maintain all invariants from the base class?</li>
<li>Do I throw only exception types that the base class contract allows?</li>
<li>Am I introducing any new state that contradicts the base class's immutability or state-transition rules?</li>
</ol>
<p><strong>After writing the subtype:</strong></p>
<ol start="10">
<li>Can I pass my subtype to every method that accepts the base type and have all existing tests pass?</li>
<li>Have I written contract tests that verify my implementation against the interface's behavioral contract?</li>
<li>Have I tested with <code>null</code> inputs, empty collections, boundary values, and failure scenarios?</li>
</ol>
<h2 id="part-15-lsp-in-the-age-of-source-generators-interceptors-and-ai">Part 15: LSP in the Age of Source Generators, Interceptors, and AI</h2>
<p>Modern .NET development is evolving rapidly. Source generators can create implementations of interfaces at compile time. Interceptors can replace method implementations transparently. AI coding assistants generate implementations from interface definitions. In each case, LSP remains the quality gate.</p>
<p>A source-generated implementation of <code>IRepository&lt;T&gt;</code> must honor the same contracts as a hand-written one. An interceptor that replaces a caching layer must maintain the same preconditions and postconditions. An AI-generated implementation of <code>INotificationService</code> must satisfy the same contract tests.</p>
<p>The tooling changes. The principle does not.</p>
<p>If anything, LSP becomes <em>more</em> important as code generation increases. When humans write every line, they bring context and judgment. When code is generated — whether by a T4 template, a Roslyn source generator, or an LLM — the behavioral contract is the only thing ensuring correctness. Write clear contracts. Write contract tests. Let the principle do its work.</p>
<h2 id="part-16-resources-and-further-reading">Part 16: Resources and Further Reading</h2>
<p>Here are authoritative references for deeper study:</p>
<ul>
<li><strong>Barbara Liskov and Jeannette Wing, &quot;A Behavioral Notion of Subtyping&quot; (1994)</strong> — The foundational paper. Published in ACM Transactions on Programming Languages and Systems, Vol. 16, No. 6.</li>
<li><strong>Robert C. Martin, &quot;Design Principles and Design Patterns&quot; (2000)</strong> — The paper that collected the five principles that became SOLID.</li>
<li><strong>Robert C. Martin, <em>Agile Software Development: Principles, Patterns, and Practices</em> (2002)</strong> — Chapter 10 covers LSP with detailed C++ and Java examples.</li>
<li><strong>Barbara Liskov, &quot;Data Abstraction and Hierarchy&quot; (1987)</strong> — The original OOPSLA keynote, published in SIGPLAN Notices.</li>
<li><strong>Bertrand Meyer, <em>Object-Oriented Software Construction</em> (1988, 2nd ed. 1997)</strong> — Introduces Design by Contract, which provides the vocabulary (preconditions, postconditions, invariants) used to formalize LSP.</li>
<li><strong>Microsoft C# Documentation — Covariance and Contravariance in Generics</strong>: <a href="https://learn.microsoft.com/en-us/dotnet/standard/generics/covariance-and-contravariance">https://learn.microsoft.com/en-us/dotnet/standard/generics/covariance-and-contravariance</a></li>
<li><strong>Microsoft .NET Design Guidelines — Choosing Between Class and Struct</strong>: <a href="https://learn.microsoft.com/en-us/dotnet/standard/design-guidelines/">https://learn.microsoft.com/en-us/dotnet/standard/design-guidelines/</a></li>
<li><strong>Barbara Liskov — ACM Turing Award Laureate Profile</strong>: <a href="https://amturing.acm.org/award_winners/liskov_1108679.cfm">https://amturing.acm.org/award_winners/liskov_1108679.cfm</a></li>
<li><strong>SOLID Principles — Wikipedia</strong>: <a href="https://en.wikipedia.org/wiki/SOLID">https://en.wikipedia.org/wiki/SOLID</a></li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>The Liskov Substitution Principle is not about rectangles and squares. It is not an academic curiosity. It is the invisible contract that makes polymorphism — the most powerful feature of object-oriented programming — actually work.</p>
<p>Every time you write <code>ILogger logger</code> in a method signature, you are trusting that whatever implementation arrives at runtime will behave like a logger. Every time you register a service in the DI container, you are trusting that the concrete type honors the interface's contract. Every time you swap an adapter, a strategy, or a decorator, you are trusting that the new component is a valid substitute for the old one.</p>
<p>When that trust is justified — when every subtype honors every contract — your system is modular, testable, and resilient to change. When it is not — when subtypes throw unexpected exceptions, ignore parameters, break invariants, or strengthen preconditions — you get the kind of bugs that are hardest to diagnose: the ones that only appear when a specific subtype is used in a specific context that nobody anticipated.</p>
<p>Barbara Liskov's insight, first articulated at a conference in 1987, formalized in 1994, and adopted as a pillar of software design by 2000, remains as relevant today as it was then. The languages have changed. The frameworks have changed. The deployment targets have changed. But the need for behavioral substitutability — for types that keep their promises — has not changed, and never will.</p>
<p>Write clear contracts. Honor them in every implementation. Test them with contract tests. Seal what is not designed for extension. Prefer composition when inheritance does not fit. And the next time you see a <code>NotImplementedException</code>, treat it as a design smell, not a TODO — because somewhere downstream, someone is trusting your type to do what it says.</p>
<p>That trust is the Liskov Substitution Principle. Do not break it.</p>
]]></content:encoded>
      <category>solid</category>
      <category>design-principles</category>
      <category>csharp</category>
      <category>dotnet</category>
      <category>object-oriented-programming</category>
      <category>deep-dive</category>
    </item>
    <item>
      <title>The Open/Closed Principle: A Comprehensive Guide for .NET Developers</title>
      <link>https://observermagazine.github.io/blog/open-closed-principle</link>
      <description>A deep dive into the Open/Closed Principle — its origins with Bertrand Meyer in 1988, Robert C. Martin's reformulation in 1996, how to apply it in modern C# and ASP.NET Core with real code examples, which design patterns embody it, when to ignore it, and how it shapes testable, maintainable software architecture.</description>
      <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/open-closed-principle</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<p>It is a Thursday afternoon. You are three weeks into a feature that calculates shipping costs for an e-commerce application. The original developer wrote a tidy class called <code>ShippingCalculator</code> with a <code>switch</code> statement that handles three carriers: UPS, FedEx, and USPS. The class works. It has been in production for two years. It has unit tests. Everyone is happy.</p>
<p>Then your product owner walks over and says, &quot;We're adding DHL. And Amazon Logistics. And a regional carrier called OnTrac. Oh, and we need to support freight shipping for palletized orders. Can you have that done by next sprint?&quot;</p>
<p>You open <code>ShippingCalculator.cs</code>. It is 400 lines long. The <code>switch</code> has grown tentacles. Every carrier's logic references shared local variables. The unit tests are brittle — each one constructs a fake order and asserts against a hardcoded dollar amount that was correct in 2024. You add the DHL case. A FedEx test breaks. You fix the FedEx test. The USPS case now returns the wrong surcharge. You spend the rest of the afternoon playing whack-a-mole with regressions.</p>
<p>This is the problem that the Open/Closed Principle exists to prevent.</p>
<h2 id="part-1-what-is-the-openclosed-principle">Part 1: What Is the Open/Closed Principle?</h2>
<p>The Open/Closed Principle (OCP) is one of the five SOLID principles of object-oriented design. Its canonical formulation is deceptively simple:</p>
<blockquote>
<p>Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.</p>
</blockquote>
<p>That single sentence has generated more conference talks, blog posts, and heated Slack arguments than perhaps any other principle in software engineering. Let us break it down.</p>
<p><strong>Open for extension</strong> means you can add new behavior to the entity. You can teach it new tricks. You can make it handle cases it did not handle before.</p>
<p><strong>Closed for modification</strong> means you should not have to crack open the existing source code and change it to add that new behavior. The existing code — the code that is tested, deployed, and working in production — stays untouched.</p>
<p>The word &quot;should&quot; is doing heavy lifting here. The OCP is a principle, not a law. It describes an ideal to design toward, not an absolute rule that can never be broken. But when you manage to achieve it, the results are remarkable: new features arrive by writing new code, not by rewriting old code. Regressions drop. Deployments get smaller. Code reviews get easier. Your Thursday afternoons get less stressful.</p>
<h3 id="the-o-in-solid">The &quot;O&quot; in SOLID</h3>
<p>The SOLID acronym represents five principles that Robert C. Martin (widely known as Uncle Bob) consolidated in his 2000 paper <em>Design Principles and Design Patterns</em>. The acronym itself was coined around 2004 by Michael Feathers, who rearranged the initials into a memorable word:</p>
<ul>
<li><strong>S</strong> — Single Responsibility Principle (SRP)</li>
<li><strong>O</strong> — Open/Closed Principle (OCP)</li>
<li><strong>L</strong> — Liskov Substitution Principle (LSP)</li>
<li><strong>I</strong> — Interface Segregation Principle (ISP)</li>
<li><strong>D</strong> — Dependency Inversion Principle (DIP)</li>
</ul>
<p>The five principles are deeply interrelated. The OCP tells you what your goal is: build software that can be extended without modification. The Dependency Inversion Principle tells you how to get there: depend on abstractions, not concretions. The Liskov Substitution Principle tells you the rules your abstractions must follow. The Interface Segregation Principle tells you how to keep those abstractions lean. And the Single Responsibility Principle tells you how to scope each module so that extension points align with likely axes of change.</p>
<p>Think of SOLID as a constellation, not a checklist. The principles reinforce each other, and understanding the OCP in isolation is like understanding one star without seeing the pattern it belongs to.</p>
<h2 id="part-2-a-brief-history-from-meyer-to-martin">Part 2: A Brief History — From Meyer to Martin</h2>
<h3 id="bertrand-meyer-and-the-original-formulation-1988">Bertrand Meyer and the Original Formulation (1988)</h3>
<p>The Open/Closed Principle was first articulated by Bertrand Meyer in his 1988 book <em>Object-Oriented Software Construction</em>. Meyer was writing at a time when the software industry was grappling with a fundamental problem: libraries were hard to evolve. If you shipped a compiled library and a client depended on it, adding a field to a data structure or a method to a class could break every program that used that library. Recompilation cascades were real and expensive.</p>
<p>Meyer proposed a solution rooted in inheritance. His formulation went something like this: a class is <em>closed</em> because it can be compiled, stored in a library, baselined, and used by other classes without fear of change. But it is also <em>open</em> because any new class can inherit from it and add new fields, new methods, and new behavior — without modifying the original class or disturbing its existing clients.</p>
<p>In Meyer's world, the mechanism for achieving OCP was <em>implementation inheritance</em>. You extend behavior by subclassing. The parent class stays frozen. The child class adds what is new.</p>
<p>This was a reasonable idea in 1988. The dominant paradigm was procedural programming. Object-oriented languages like Eiffel (which Meyer himself created) and early C++ were still proving their worth. Inheritance was the exciting new tool, and Meyer wielded it well.</p>
<h3 id="robert-c.martin-and-the-polymorphic-reformulation-1996">Robert C. Martin and the Polymorphic Reformulation (1996)</h3>
<p>By the mid-1990s, the software industry had learned some hard lessons about implementation inheritance. Deep inheritance hierarchies created tight coupling. The &quot;fragile base class problem&quot; — where changes to a parent class broke child classes in unexpected ways — became a recognized anti-pattern. Developers began to favor composition over inheritance, and interfaces over concrete base classes.</p>
<p>In 1996, Robert C. Martin published an article titled &quot;The Open-Closed Principle&quot; that reframed Meyer's idea for this new reality. Martin kept the core insight — software should be extensible without modification — but changed the mechanism. Instead of relying on implementation inheritance, Martin advocated for <em>abstracted interfaces</em>. You define a contract (an interface or an abstract base class), and then you create multiple implementations that can be polymorphically substituted for each other. The interface is closed to modification. New implementations are open for extension.</p>
<p>This is the version of the OCP that most developers know today. When someone says &quot;follow the Open/Closed Principle,&quot; they almost always mean Martin's polymorphic formulation, not Meyer's inheritance-based one.</p>
<h3 id="why-the-distinction-matters">Why the Distinction Matters</h3>
<p>The difference between Meyer's OCP and Martin's OCP is not merely academic. It changes how you write code.</p>
<p>Meyer's approach says: &quot;Here is a concrete class. Subclass it to add behavior.&quot; This leads to class hierarchies. It works well when the base class is genuinely designed for inheritance (think <code>Stream</code> in .NET, or <code>HttpMessageHandler</code>), but it falls apart when developers start subclassing everything in sight and end up with six levels of inheritance just to add a logging statement.</p>
<p>Martin's approach says: &quot;Here is an interface. Implement it to add behavior.&quot; This leads to flat, composable architectures. It works well with dependency injection containers, plugin systems, and microservice boundaries. It is the approach that modern C# and ASP.NET Core are designed around.</p>
<p>Both formulations are valid. Both have their place. But for the rest of this article, when we say &quot;OCP,&quot; we mean Martin's polymorphic formulation unless otherwise noted — because that is what you will use every day as a .NET developer.</p>
<h2 id="part-3-the-problem-code-that-violates-the-ocp">Part 3: The Problem — Code That Violates the OCP</h2>
<p>Before we talk about how to follow the OCP, let us spend some time understanding what happens when you do not. Violations of the OCP are everywhere, and they tend to follow a few recognizable patterns.</p>
<h3 id="pattern-1-the-giant-switch-statement">Pattern 1: The Giant Switch Statement</h3>
<p>This is the most common violation. You have a method that does different things based on a type discriminator, and every time a new type appears, you add another case.</p>
<pre><code class="language-csharp">public class InvoicePrinter
{
    public string Print(Invoice invoice)
    {
        switch (invoice.Type)
        {
            case InvoiceType.Standard:
                return FormatStandardInvoice(invoice);
            case InvoiceType.Recurring:
                return FormatRecurringInvoice(invoice);
            case InvoiceType.ProForma:
                return FormatProFormaInvoice(invoice);
            // When the business adds &quot;Credit Note&quot; next quarter,
            // you will be right back in this file adding another case.
            default:
                throw new ArgumentOutOfRangeException(
                    nameof(invoice.Type),
                    $&quot;Unknown invoice type: {invoice.Type}&quot;);
        }
    }

    private string FormatStandardInvoice(Invoice invoice) { /* ... */ }
    private string FormatRecurringInvoice(Invoice invoice) { /* ... */ }
    private string FormatProFormaInvoice(Invoice invoice) { /* ... */ }
}
</code></pre>
<p>Every time a new invoice type is introduced, this class must be modified. That means recompiling, retesting, and redeploying the module that contains it — even though the existing invoice types have not changed at all.</p>
<h3 id="pattern-2-the-if-else-chain">Pattern 2: The If-Else Chain</h3>
<p>A close cousin of the switch statement. Instead of switching on an enum, you check conditions or types directly.</p>
<pre><code class="language-csharp">public decimal CalculateDiscount(Customer customer, decimal orderTotal)
{
    if (customer.Tier == &quot;Gold&quot;)
    {
        return orderTotal * 0.15m;
    }
    else if (customer.Tier == &quot;Silver&quot;)
    {
        return orderTotal * 0.10m;
    }
    else if (customer.Tier == &quot;Bronze&quot;)
    {
        return orderTotal * 0.05m;
    }
    else if (customer.Tier == &quot;Employee&quot;)
    {
        return orderTotal * 0.25m;
    }
    else
    {
        return 0m;
    }
}
</code></pre>
<p>This code works perfectly — until the business invents a &quot;Platinum&quot; tier, or a &quot;Loyalty Program&quot; tier, or a &quot;Black Friday Override&quot; tier. Each addition requires modifying this method.</p>
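<p>For a preview of the cure, here is one OCP-friendly shape for the discount logic — a strategy per tier, resolved by name. This is a sketch: the type names are illustrative, and in ASP.NET Core the policies would typically be registered with the DI container.</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;
using System.Linq;

public interface IDiscountPolicy
{
    string Tier { get; }
    decimal Apply(decimal orderTotal);
}

public sealed class GoldDiscount : IDiscountPolicy
{
    public string Tier =&gt; &quot;Gold&quot;;
    public decimal Apply(decimal orderTotal) =&gt; orderTotal * 0.15m;
}

public sealed class SilverDiscount : IDiscountPolicy
{
    public string Tier =&gt; &quot;Silver&quot;;
    public decimal Apply(decimal orderTotal) =&gt; orderTotal * 0.10m;
}

// Adding &quot;Platinum&quot; later means adding a class, not editing this one:
public sealed class DiscountCalculator
{
    private readonly Dictionary&lt;string, IDiscountPolicy&gt; _policies;

    public DiscountCalculator(IEnumerable&lt;IDiscountPolicy&gt; policies) =&gt;
        _policies = policies.ToDictionary(p =&gt; p.Tier);

    public decimal CalculateDiscount(string tier, decimal orderTotal) =&gt;
        _policies.TryGetValue(tier, out var policy)
            ? policy.Apply(orderTotal)
            : 0m;  // unknown tiers get no discount, matching the original else-branch
}
</code></pre>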
<h3 id="pattern-3-the-type-checking-method">Pattern 3: The Type-Checking Method</h3>
<p>This one is especially insidious because it often hides behind the <code>is</code> keyword in C#.</p>
<pre><code class="language-csharp">public void ProcessPayment(IPayment payment)
{
    if (payment is CreditCardPayment cc)
    {
        ChargeCreditCard(cc.CardNumber, cc.Amount);
    }
    else if (payment is BankTransferPayment bt)
    {
        InitiateBankTransfer(bt.Iban, bt.Amount);
    }
    else if (payment is CryptoPayment crypto)
    {
        SendCrypto(crypto.WalletAddress, crypto.Amount);
    }
    else
    {
        throw new NotSupportedException(
            $&quot;Payment type {payment.GetType().Name} is not supported.&quot;);
    }
}
</code></pre>
<p>You have an interface (<code>IPayment</code>), which looks like you are following the OCP. But then you immediately undermine it by checking the concrete type and branching. The interface is just window dressing. This method still needs to be modified every time a new payment type is added.</p>
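<p>One way to restore the OCP here — sketched with illustrative member shapes, including a hypothetical <code>Process</code> method that returns a confirmation message — is to let each payment type carry its own processing behavior, so the consumer never inspects concrete types:</p>
<pre><code class="language-csharp">using System;

public interface IPayment
{
    decimal Amount { get; }
    string Process();  // returns a confirmation message (illustrative)
}

public sealed class CreditCardPayment : IPayment
{
    public decimal Amount { get; init; }
    public string CardNumber { get; init; } = &quot;&quot;;
    public string Process() =&gt; $&quot;Charged {Amount} to card ending {CardNumber[^4..]}&quot;;
}

public sealed class BankTransferPayment : IPayment
{
    public decimal Amount { get; init; }
    public string Iban { get; init; } = &quot;&quot;;
    public string Process() =&gt; $&quot;Transferred {Amount} to {Iban}&quot;;
}

// Closed for modification: a new payment method is a new class, not a new branch.
public static class PaymentProcessor
{
    public static string ProcessPayment(IPayment payment) =&gt; payment.Process();
}
</code></pre>
<p>In larger systems the processing logic often lives in a handler per payment type resolved from the DI container, rather than on the domain object itself — but the principle is the same: the dispatch point stops growing branches.</p>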
<h3 id="why-do-these-violations-happen">Why Do These Violations Happen?</h3>
<p>They happen because they are the <em>easiest</em> thing to write in the moment. When you have one or two cases, a <code>switch</code> or <code>if-else</code> is perfectly readable. It is only when the third, fourth, and tenth cases arrive that the pain becomes acute. The OCP is fundamentally about anticipating change — not in a crystal-ball way, but in a &quot;what kind of change is likely in this domain?&quot; way.</p>
<p>The shipping calculator will probably need new carriers. The invoice printer will probably need new invoice types. The payment processor will probably need new payment methods. If you can see the axis of change, you can design for it.</p>
<h2 id="part-4-applying-the-ocp-in-c-the-basics">Part 4: Applying the OCP in C# — The Basics</h2>
<p>Let us fix the violations from Part 3. The core technique is always the same: extract the varying behavior behind an abstraction, and let new behavior arrive as new implementations of that abstraction.</p>
<h3 id="step-1-define-an-abstraction">Step 1: Define an Abstraction</h3>
<p>Start by identifying the behavior that changes. In the invoice printer example, the thing that changes is how each invoice type is formatted. So we define an interface for that behavior:</p>
<pre><code class="language-csharp">public interface IInvoiceFormatter
{
    InvoiceType SupportedType { get; }
    string Format(Invoice invoice);
}
</code></pre>
<h3 id="step-2-implement-the-abstraction-for-each-case">Step 2: Implement the Abstraction for Each Case</h3>
<p>Each existing case in the switch statement becomes its own class:</p>
<pre><code class="language-csharp">public class StandardInvoiceFormatter : IInvoiceFormatter
{
    public InvoiceType SupportedType =&gt; InvoiceType.Standard;

    public string Format(Invoice invoice)
    {
        // All the logic that was in FormatStandardInvoice()
        var sb = new StringBuilder();
        sb.AppendLine($&quot;INVOICE #{invoice.Number}&quot;);
        sb.AppendLine($&quot;Date: {invoice.Date:yyyy-MM-dd}&quot;);
        sb.AppendLine($&quot;Customer: {invoice.CustomerName}&quot;);
        sb.AppendLine();
        foreach (var line in invoice.Lines)
        {
            sb.AppendLine($&quot;  {line.Description,-40} {line.Amount,12:C}&quot;);
        }
        sb.AppendLine(new string('-', 54));
        sb.AppendLine($&quot;  {&quot;Total&quot;,-40} {invoice.Total,12:C}&quot;);
        return sb.ToString();
    }
}

public class RecurringInvoiceFormatter : IInvoiceFormatter
{
    public InvoiceType SupportedType =&gt; InvoiceType.Recurring;

    public string Format(Invoice invoice)
    {
        var sb = new StringBuilder();
        sb.AppendLine($&quot;RECURRING INVOICE #{invoice.Number}&quot;);
        sb.AppendLine($&quot;Billing Period: {invoice.PeriodStart:MMM yyyy} - {invoice.PeriodEnd:MMM yyyy}&quot;);
        sb.AppendLine($&quot;Next Charge: {invoice.NextChargeDate:yyyy-MM-dd}&quot;);
        sb.AppendLine($&quot;Customer: {invoice.CustomerName}&quot;);
        sb.AppendLine();
        foreach (var line in invoice.Lines)
        {
            sb.AppendLine($&quot;  {line.Description,-40} {line.Amount,12:C}&quot;);
        }
        sb.AppendLine(new string('-', 54));
        sb.AppendLine($&quot;  {&quot;Monthly Total&quot;,-40} {invoice.Total,12:C}&quot;);
        return sb.ToString();
    }
}

public class ProFormaInvoiceFormatter : IInvoiceFormatter
{
    public InvoiceType SupportedType =&gt; InvoiceType.ProForma;

    public string Format(Invoice invoice)
    {
        var sb = new StringBuilder();
        sb.AppendLine(&quot;*** PRO FORMA — NOT A TAX INVOICE ***&quot;);
        sb.AppendLine($&quot;Estimate #{invoice.Number}&quot;);
        sb.AppendLine($&quot;Valid Until: {invoice.ExpiryDate:yyyy-MM-dd}&quot;);
        sb.AppendLine($&quot;Prepared For: {invoice.CustomerName}&quot;);
        sb.AppendLine();
        foreach (var line in invoice.Lines)
        {
            sb.AppendLine($&quot;  {line.Description,-40} {line.Amount,12:C}&quot;);
        }
        sb.AppendLine(new string('-', 54));
        sb.AppendLine($&quot;  {&quot;Estimated Total&quot;,-40} {invoice.Total,12:C}&quot;);
        return sb.ToString();
    }
}
</code></pre>
<h3 id="step-3-compose-via-the-abstraction">Step 3: Compose via the Abstraction</h3>
<p>Now the <code>InvoicePrinter</code> depends only on the interface, not on any specific formatter:</p>
<pre><code class="language-csharp">public class InvoicePrinter
{
    private readonly IReadOnlyDictionary&lt;InvoiceType, IInvoiceFormatter&gt; _formatters;

    public InvoicePrinter(IEnumerable&lt;IInvoiceFormatter&gt; formatters)
    {
        _formatters = formatters.ToDictionary(f =&gt; f.SupportedType);
    }

    public string Print(Invoice invoice)
    {
        if (!_formatters.TryGetValue(invoice.Type, out var formatter))
        {
            throw new NotSupportedException(
                $&quot;No formatter registered for invoice type '{invoice.Type}'.&quot;);
        }

        return formatter.Format(invoice);
    }
}
</code></pre>
<p>This class is now <strong>closed for modification</strong>. You will never need to change it again (unless the fundamental concept of &quot;invoice printing&quot; itself changes, which is a different kind of change — more on that later). And it is <strong>open for extension</strong>: when the business adds &quot;Credit Note&quot; as a new invoice type, you write a single new class:</p>
<pre><code class="language-csharp">public class CreditNoteFormatter : IInvoiceFormatter
{
    public InvoiceType SupportedType =&gt; InvoiceType.CreditNote;

    public string Format(Invoice invoice)
    {
        var sb = new StringBuilder();
        sb.AppendLine(&quot;*** CREDIT NOTE ***&quot;);
        sb.AppendLine($&quot;Credit Note #{invoice.Number}&quot;);
        sb.AppendLine($&quot;Original Invoice: #{invoice.OriginalInvoiceNumber}&quot;);
        sb.AppendLine($&quot;Customer: {invoice.CustomerName}&quot;);
        sb.AppendLine();
        foreach (var line in invoice.Lines)
        {
            sb.AppendLine($&quot;  {line.Description,-40} {line.Amount,12:C}&quot;);
        }
        sb.AppendLine(new string('-', 54));
        sb.AppendLine($&quot;  {&quot;Credit Total&quot;,-40} {invoice.Total,12:C}&quot;);
        return sb.ToString();
    }
}
</code></pre>
<p>Register it in your dependency injection container, and you are done. The <code>InvoicePrinter</code> never knew it existed, never needed to be recompiled, and never needed to be retested. The only new code is the <code>CreditNoteFormatter</code> itself and its own unit tests.</p>
<h3 id="step-4-wire-it-up-in-di">Step 4: Wire It Up in DI</h3>
<p>In ASP.NET Core (or any application using <code>Microsoft.Extensions.DependencyInjection</code>), registration looks like this:</p>
<pre><code class="language-csharp">builder.Services.AddSingleton&lt;IInvoiceFormatter, StandardInvoiceFormatter&gt;();
builder.Services.AddSingleton&lt;IInvoiceFormatter, RecurringInvoiceFormatter&gt;();
builder.Services.AddSingleton&lt;IInvoiceFormatter, ProFormaInvoiceFormatter&gt;();
builder.Services.AddSingleton&lt;IInvoiceFormatter, CreditNoteFormatter&gt;();

builder.Services.AddSingleton&lt;InvoicePrinter&gt;();
</code></pre>
<p>When the DI container resolves <code>InvoicePrinter</code>, it will inject an <code>IEnumerable&lt;IInvoiceFormatter&gt;</code> containing all registered formatters. The printer builds its dictionary and is ready to go.</p>
<p>This is the textbook OCP refactoring. It works for the discount calculator (extract an <code>IDiscountStrategy</code> interface), for the payment processor (let each <code>IPayment</code> implementation carry its own <code>Process()</code> method), and for the shipping calculator that started this article (extract an <code>IShippingRateProvider</code> interface with one implementation per carrier).</p>
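<p>To make the discount-calculator variant concrete, here is a minimal sketch of that refactoring. The type names (<code>IDiscountStrategy</code>, <code>LoyaltyDiscount</code>, the <code>Customer</code> record) are illustrative, not taken from the earlier example:</p>
<pre><code class="language-csharp">public record Customer(string Name, int YearsActive);

public interface IDiscountStrategy
{
    bool AppliesTo(Customer customer);
    decimal Apply(decimal subtotal);
}

public class LoyaltyDiscount : IDiscountStrategy
{
    public bool AppliesTo(Customer customer) =&gt; customer.YearsActive &gt;= 5;
    public decimal Apply(decimal subtotal) =&gt; subtotal * 0.90m; // 10% off
}

public class DiscountCalculator
{
    private readonly IEnumerable&lt;IDiscountStrategy&gt; _strategies;

    public DiscountCalculator(IEnumerable&lt;IDiscountStrategy&gt; strategies)
        =&gt; _strategies = strategies;

    public decimal CalculateTotal(Customer customer, decimal subtotal)
    {
        // First matching strategy wins; no match means no discount.
        var strategy = _strategies.FirstOrDefault(s =&gt; s.AppliesTo(customer));
        return strategy?.Apply(subtotal) ?? subtotal;
    }
}
</code></pre>
<p>A new promotion becomes a new <code>IDiscountStrategy</code> implementation plus a DI registration — <code>DiscountCalculator</code> stays untouched.</p>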
<h2 id="part-5-design-patterns-that-embody-the-ocp">Part 5: Design Patterns That Embody the OCP</h2>
<p>The OCP is not just a principle — it is the conceptual foundation beneath many of the classic design patterns from the Gang of Four book and beyond. If you have ever used one of these patterns, you were following the OCP, even if you did not call it by name.</p>
<h3 id="strategy-pattern">Strategy Pattern</h3>
<p>The Strategy pattern is the most direct expression of the OCP. You define a family of algorithms (strategies), encapsulate each one behind a common interface, and make them interchangeable. The context class (the one that uses the strategy) never changes when a new strategy is added.</p>
<p>We already saw this with the invoice formatter example. Here is another example — a file compression service:</p>
<pre><code class="language-csharp">public interface ICompressionStrategy
{
    string FileExtension { get; }
    byte[] Compress(byte[] data);
    byte[] Decompress(byte[] data);
}

public class GzipCompression : ICompressionStrategy
{
    public string FileExtension =&gt; &quot;.gz&quot;;

    public byte[] Compress(byte[] data)
    {
        using var output = new MemoryStream();
        using (var gzip = new GZipStream(output, CompressionLevel.Optimal))
        {
            gzip.Write(data, 0, data.Length);
        }
        return output.ToArray();
    }

    public byte[] Decompress(byte[] data)
    {
        using var input = new MemoryStream(data);
        using var gzip = new GZipStream(input, CompressionMode.Decompress);
        using var output = new MemoryStream();
        gzip.CopyTo(output);
        return output.ToArray();
    }
}

public class BrotliCompression : ICompressionStrategy
{
    public string FileExtension =&gt; &quot;.br&quot;;

    public byte[] Compress(byte[] data)
    {
        using var output = new MemoryStream();
        using (var brotli = new BrotliStream(output, CompressionLevel.Optimal))
        {
            brotli.Write(data, 0, data.Length);
        }
        return output.ToArray();
    }

    public byte[] Decompress(byte[] data)
    {
        using var input = new MemoryStream(data);
        using var brotli = new BrotliStream(input, CompressionMode.Decompress);
        using var output = new MemoryStream();
        brotli.CopyTo(output);
        return output.ToArray();
    }
}
</code></pre>
<p>Adding Zstandard compression next year? Write a <code>ZstdCompression</code> class. Nothing else changes.</p>
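<p>For completeness, a minimal consumer of the strategy might look like this (the <code>FileArchiver</code> name is illustrative — any class that takes an <code>ICompressionStrategy</code> plays the same role):</p>
<pre><code class="language-csharp">public class FileArchiver
{
    private readonly ICompressionStrategy _compression;

    public FileArchiver(ICompressionStrategy compression) =&gt; _compression = compression;

    // Names the output file and compresses the payload without
    // knowing which algorithm is in use.
    public (string FileName, byte[] Payload) Archive(string baseName, byte[] data)
        =&gt; (baseName + _compression.FileExtension, _compression.Compress(data));
}
</code></pre>
<p>Swapping Gzip for Brotli (or the hypothetical Zstandard strategy) is purely a composition decision; <code>FileArchiver</code> never changes.</p>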
<h3 id="decorator-pattern">Decorator Pattern</h3>
<p>The Decorator pattern lets you wrap an existing object with additional behavior, without modifying the original. Each decorator implements the same interface as the object it wraps, so decorators are invisible to the consumer.</p>
<pre><code class="language-csharp">public interface IOrderRepository
{
    Task&lt;Order?&gt; GetByIdAsync(int id);
    Task SaveAsync(Order order);
}

// The base implementation — talks to the database
public class SqlOrderRepository : IOrderRepository
{
    private readonly DbContext _db;

    public SqlOrderRepository(DbContext db) =&gt; _db = db;

    public async Task&lt;Order?&gt; GetByIdAsync(int id)
        =&gt; await _db.Set&lt;Order&gt;().FindAsync(id);

    public async Task SaveAsync(Order order)
    {
        _db.Set&lt;Order&gt;().Update(order);
        await _db.SaveChangesAsync();
    }
}

// A decorator that adds caching — does not modify SqlOrderRepository
public class CachedOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _inner;
    private readonly IMemoryCache _cache;
    private readonly ILogger&lt;CachedOrderRepository&gt; _logger;

    public CachedOrderRepository(
        IOrderRepository inner,
        IMemoryCache cache,
        ILogger&lt;CachedOrderRepository&gt; logger)
    {
        _inner = inner;
        _cache = cache;
        _logger = logger;
    }

    public async Task&lt;Order?&gt; GetByIdAsync(int id)
    {
        var cacheKey = $&quot;order:{id}&quot;;
        if (_cache.TryGetValue(cacheKey, out Order? cached))
        {
            _logger.LogDebug(&quot;Cache hit for order {OrderId}&quot;, id);
            return cached;
        }

        var order = await _inner.GetByIdAsync(id);
        if (order is not null)
        {
            _cache.Set(cacheKey, order, TimeSpan.FromMinutes(5));
        }

        return order;
    }

    public async Task SaveAsync(Order order)
    {
        await _inner.SaveAsync(order);
        _cache.Remove($&quot;order:{order.Id}&quot;);
    }
}

// A decorator that adds audit logging — does not modify either of the above
public class AuditedOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _inner;
    private readonly IAuditLog _auditLog;

    public AuditedOrderRepository(IOrderRepository inner, IAuditLog auditLog)
    {
        _inner = inner;
        _auditLog = auditLog;
    }

    public Task&lt;Order?&gt; GetByIdAsync(int id) =&gt; _inner.GetByIdAsync(id);

    public async Task SaveAsync(Order order)
    {
        await _inner.SaveAsync(order);
        await _auditLog.RecordAsync(&quot;Order&quot;, order.Id, &quot;Saved&quot;);
    }
}
</code></pre>
<p>You can stack decorators: <code>AuditedOrderRepository</code> wrapping <code>CachedOrderRepository</code> wrapping <code>SqlOrderRepository</code>. Each layer adds behavior without modifying the layers beneath it. The <code>SqlOrderRepository</code> class does not know it is being cached or audited.</p>
<p>In ASP.NET Core DI, you can wire this up using the <code>Scrutor</code> library or manually:</p>
<pre><code class="language-csharp">builder.Services.AddScoped&lt;SqlOrderRepository&gt;();
builder.Services.AddScoped&lt;IOrderRepository&gt;(sp =&gt;
{
    var sql = sp.GetRequiredService&lt;SqlOrderRepository&gt;();
    var cache = sp.GetRequiredService&lt;IMemoryCache&gt;();
    var cacheLogger = sp.GetRequiredService&lt;ILogger&lt;CachedOrderRepository&gt;&gt;();
    var cached = new CachedOrderRepository(sql, cache, cacheLogger);
    var auditLog = sp.GetRequiredService&lt;IAuditLog&gt;();
    return new AuditedOrderRepository(cached, auditLog);
});
</code></pre>
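<p>With the Scrutor package installed, the same chain can be expressed declaratively. Each <code>Decorate</code> call wraps whatever is currently registered for the interface, resolving the decorator's remaining constructor arguments from the container:</p>
<pre><code class="language-csharp">builder.Services.AddScoped&lt;IOrderRepository, SqlOrderRepository&gt;();

// Applied in order: caching wraps SQL, auditing wraps caching.
builder.Services.Decorate&lt;IOrderRepository, CachedOrderRepository&gt;();
builder.Services.Decorate&lt;IOrderRepository, AuditedOrderRepository&gt;();
</code></pre>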
<h3 id="template-method-pattern">Template Method Pattern</h3>
<p>The Template Method pattern defines the skeleton of an algorithm in a base class and lets subclasses override specific steps. This is one of the few places where Meyer's original inheritance-based OCP still shines.</p>
<pre><code class="language-csharp">public abstract class ReportGenerator
{
    // The template method — defines the algorithm's structure
    public string Generate(ReportData data)
    {
        var sb = new StringBuilder();
        sb.AppendLine(CreateHeader(data));
        sb.AppendLine(CreateBody(data));
        sb.AppendLine(CreateFooter(data));
        return sb.ToString();
    }

    protected abstract string CreateHeader(ReportData data);
    protected abstract string CreateBody(ReportData data);

    // A default implementation that subclasses can override if needed
    protected virtual string CreateFooter(ReportData data)
        =&gt; $&quot;Generated on {DateTime.UtcNow:yyyy-MM-dd HH:mm} UTC&quot;;
}

public class HtmlReportGenerator : ReportGenerator
{
    protected override string CreateHeader(ReportData data)
        =&gt; $&quot;&lt;html&gt;&lt;head&gt;&lt;title&gt;{data.Title}&lt;/title&gt;&lt;/head&gt;&lt;body&gt;&lt;h1&gt;{data.Title}&lt;/h1&gt;&quot;;

    protected override string CreateBody(ReportData data)
    {
        var sb = new StringBuilder(&quot;&lt;table&gt;&quot;);
        foreach (var row in data.Rows)
        {
            sb.Append(&quot;&lt;tr&gt;&quot;);
            foreach (var cell in row)
            {
                sb.Append($&quot;&lt;td&gt;{cell}&lt;/td&gt;&quot;);
            }
            sb.Append(&quot;&lt;/tr&gt;&quot;);
        }
        sb.Append(&quot;&lt;/table&gt;&quot;);
        return sb.ToString();
    }

    protected override string CreateFooter(ReportData data)
        =&gt; $&quot;&lt;footer&gt;Generated on {DateTime.UtcNow:yyyy-MM-dd HH:mm} UTC&lt;/footer&gt;&lt;/body&gt;&lt;/html&gt;&quot;;
}

public class CsvReportGenerator : ReportGenerator
{
    protected override string CreateHeader(ReportData data)
        =&gt; string.Join(&quot;,&quot;, data.ColumnNames);

    protected override string CreateBody(ReportData data)
    {
        var sb = new StringBuilder();
        foreach (var row in data.Rows)
        {
            sb.AppendLine(string.Join(&quot;,&quot;, row.Select(EscapeCsv)));
        }
        return sb.ToString();
    }

    private static string EscapeCsv(string value)
        =&gt; value.Contains(',') || value.Contains('&quot;')
            ? $&quot;\&quot;{value.Replace(&quot;\&quot;&quot;, &quot;\&quot;\&quot;&quot;)}\&quot;&quot;
            : value;
}
</code></pre>
<p>The <code>Generate()</code> method in <code>ReportGenerator</code> is closed for modification. The individual steps (<code>CreateHeader</code>, <code>CreateBody</code>, <code>CreateFooter</code>) are open for extension via subclassing.</p>
<h3 id="factory-method-pattern">Factory Method Pattern</h3>
<p>The Factory Method pattern delegates object creation to subclasses or to factory methods, so you can introduce new product types without modifying the code that consumes them.</p>
<pre><code class="language-csharp">public interface INotification
{
    Task SendAsync(string recipient, string message);
}

public class EmailNotification : INotification
{
    private readonly IEmailClient _emailClient;

    public EmailNotification(IEmailClient emailClient) =&gt; _emailClient = emailClient;

    public async Task SendAsync(string recipient, string message)
        =&gt; await _emailClient.SendAsync(recipient, &quot;Notification&quot;, message);
}

public class SmsNotification : INotification
{
    private readonly ISmsGateway _gateway;

    public SmsNotification(ISmsGateway gateway) =&gt; _gateway = gateway;

    public async Task SendAsync(string recipient, string message)
        =&gt; await _gateway.SendTextAsync(recipient, message);
}

public class PushNotification : INotification
{
    private readonly IPushService _pushService;

    public PushNotification(IPushService pushService) =&gt; _pushService = pushService;

    public async Task SendAsync(string recipient, string message)
        =&gt; await _pushService.PushAsync(recipient, message);
}
</code></pre>
<p>When the business says &quot;we need Slack notifications too,&quot; you write a <code>SlackNotification</code> class, register it, and nothing else needs to change.</p>
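<p>If callers need to pick a channel at runtime, a small factory completes the pattern. Since .NET 8 you can lean on keyed services for this — the <code>NotificationChannel</code> enum and <code>NotificationFactory</code> names below are illustrative:</p>
<pre><code class="language-csharp">public enum NotificationChannel { Email, Sms, Push }

// Registration in Program.cs
builder.Services.AddKeyedScoped&lt;INotification, EmailNotification&gt;(NotificationChannel.Email);
builder.Services.AddKeyedScoped&lt;INotification, SmsNotification&gt;(NotificationChannel.Sms);
builder.Services.AddKeyedScoped&lt;INotification, PushNotification&gt;(NotificationChannel.Push);

// The factory — consumers never name a concrete notification type
public class NotificationFactory(IServiceProvider provider)
{
    public INotification Create(NotificationChannel channel)
        =&gt; provider.GetRequiredKeyedService&lt;INotification&gt;(channel);
}
</code></pre>
<p>The Slack channel then costs a new enum member, a new class, and one registration line; the factory body itself is closed for modification.</p>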
<h3 id="observer-pattern-events-and-delegates">Observer Pattern (Events and Delegates)</h3>
<p>C# has first-class support for the Observer pattern through events and delegates. This is OCP in action: the publisher defines an event, and any number of subscribers can attach to it without the publisher knowing or caring.</p>
<pre><code class="language-csharp">public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository) =&gt; _repository = repository;

    // The event — an extension point
    public event Func&lt;Order, Task&gt;? OrderPlaced;

    public async Task PlaceOrderAsync(Order order)
    {
        // Core business logic
        order.Status = OrderStatus.Placed;
        order.PlacedAt = DateTime.UtcNow;
        await _repository.SaveAsync(order);

        // Notify all subscribers — OrderService does not know who they are
        if (OrderPlaced is not null)
        {
            foreach (var handler in OrderPlaced.GetInvocationList().Cast&lt;Func&lt;Order, Task&gt;&gt;())
            {
                await handler(order);
            }
        }
    }
}
</code></pre>
<p>Subscribers attach from outside:</p>
<pre><code class="language-csharp">orderService.OrderPlaced += async order =&gt;
    await emailService.SendOrderConfirmationAsync(order);

orderService.OrderPlaced += async order =&gt;
    await inventoryService.ReserveStockAsync(order);

orderService.OrderPlaced += async order =&gt;
    await analyticsService.TrackOrderAsync(order);
</code></pre>
<p>Adding a new side effect to order placement does not require modifying <code>OrderService</code>. That is the OCP.</p>
<h2 id="part-6-the-ocp-in-asp.net-core">Part 6: The OCP in ASP.NET Core</h2>
<p>ASP.NET Core is one of the best examples of OCP-friendly architecture in the .NET ecosystem. Several of its core abstractions are explicitly designed so you can extend behavior without modifying framework code.</p>
<h3 id="the-middleware-pipeline">The Middleware Pipeline</h3>
<p>The ASP.NET Core request pipeline is a chain of middleware components. Each middleware processes the request, optionally calls the next middleware in the chain, and then processes the response on the way back out. The pipeline itself is closed for modification — the <code>WebApplication</code> class does not need to change when you add a new middleware. But it is open for extension — you can insert new middleware at any point in the chain.</p>
<pre><code class="language-csharp">var app = builder.Build();

// Each of these extends the pipeline without modifying any existing middleware
app.UseExceptionHandler(&quot;/Error&quot;);
app.UseHsts();
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();

// Your custom middleware — open for extension
app.UseMiddleware&lt;RequestTimingMiddleware&gt;();
app.UseMiddleware&lt;TenantResolutionMiddleware&gt;();

app.MapControllers();
app.Run();
</code></pre>
<p>Writing a custom middleware is adding new behavior without modifying any existing code:</p>
<pre><code class="language-csharp">public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger&lt;RequestTimingMiddleware&gt; _logger;

    public RequestTimingMiddleware(RequestDelegate next, ILogger&lt;RequestTimingMiddleware&gt; logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var stopwatch = Stopwatch.StartNew();

        await _next(context);

        stopwatch.Stop();
        _logger.LogInformation(
            &quot;Request {Method} {Path} completed in {ElapsedMs}ms with status {StatusCode}&quot;,
            context.Request.Method,
            context.Request.Path,
            stopwatch.ElapsedMilliseconds,
            context.Response.StatusCode);
    }
}
</code></pre>
<h3 id="dependency-injection-and-service-registration">Dependency Injection and Service Registration</h3>
<p>The DI container in ASP.NET Core is itself an OCP-friendly system. You register services against interfaces, and consumers depend on those interfaces. When you need to swap an implementation — say, replacing an in-memory cache with Redis — you change the registration, not the consumer.</p>
<pre><code class="language-csharp">// Development: use in-memory
if (builder.Environment.IsDevelopment())
{
    builder.Services.AddSingleton&lt;ICacheService, InMemoryCacheService&gt;();
}
else
{
    // Production: use Redis — no consumer code changes
    builder.Services.AddSingleton&lt;ICacheService, RedisCacheService&gt;();
}
</code></pre>
<h3 id="configuration-and-options-pattern">Configuration and Options Pattern</h3>
<p>The Options pattern (<code>IOptions&lt;T&gt;</code>, <code>IOptionsSnapshot&lt;T&gt;</code>, <code>IOptionsMonitor&lt;T&gt;</code>) lets you extend application behavior through configuration without modifying code. Feature flags are a natural expression of the OCP:</p>
<pre><code class="language-csharp">public class FeatureFlags
{
    public bool EnableNewCheckoutFlow { get; set; }
    public bool EnableRecommendationEngine { get; set; }
    public bool EnableBetaDashboard { get; set; }
}

// In Program.cs
builder.Services.Configure&lt;FeatureFlags&gt;(
    builder.Configuration.GetSection(&quot;Features&quot;));

// In a controller or service
public class CheckoutController : ControllerBase
{
    private readonly IOptionsSnapshot&lt;FeatureFlags&gt; _features;

    public CheckoutController(IOptionsSnapshot&lt;FeatureFlags&gt; features)
        =&gt; _features = features;

    [HttpPost]
    public async Task&lt;IActionResult&gt; Checkout(CheckoutRequest request)
    {
        if (_features.Value.EnableNewCheckoutFlow)
        {
            return await NewCheckoutFlowAsync(request);
        }

        return await LegacyCheckoutFlowAsync(request);
    }
}
</code></pre>
<p>The <code>if</code> statement here might look like an OCP violation, but it is not — this is <em>feature toggling</em>, a controlled, temporary branching mechanism. The key distinction is that the toggle will be removed once the new flow is validated and the old flow is deleted. It is not a permanent, ever-growing branch point like the switch statements in Part 3.</p>
<h3 id="minimal-apis-and-endpoint-filters">Minimal APIs and Endpoint Filters</h3>
<p>Minimal APIs in ASP.NET Core support endpoint filters, which are another expression of the OCP. You can attach cross-cutting behavior to endpoints without modifying the endpoint handler itself:</p>
<pre><code class="language-csharp">app.MapPost(&quot;/api/orders&quot;, async (CreateOrderRequest request, IOrderService service) =&gt;
{
    var order = await service.CreateAsync(request);
    return Results.Created($&quot;/api/orders/{order.Id}&quot;, order);
})
.AddEndpointFilter&lt;ValidationFilter&lt;CreateOrderRequest&gt;&gt;()
.AddEndpointFilter&lt;AuditLogFilter&gt;()
.RequireAuthorization(&quot;OrderCreator&quot;);
</code></pre>
<p>Each filter extends the endpoint's behavior. The handler itself does not know about validation, audit logging, or authorization. Those concerns are composed from outside.</p>
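<p>The <code>ValidationFilter</code> above might be sketched like this — assuming, purely for illustration, that request types expose their own validation through a hypothetical <code>IValidatable</code> interface:</p>
<pre><code class="language-csharp">public interface IValidatable
{
    IReadOnlyList&lt;string&gt; Validate();
}

public class ValidationFilter&lt;T&gt; : IEndpointFilter where T : IValidatable
{
    public async ValueTask&lt;object?&gt; InvokeAsync(
        EndpointFilterInvocationContext context,
        EndpointFilterDelegate next)
    {
        var request = context.Arguments.OfType&lt;T&gt;().FirstOrDefault();
        if (request is null)
        {
            return Results.BadRequest(&quot;Request body is missing.&quot;);
        }

        var errors = request.Validate();
        if (errors.Count &gt; 0)
        {
            return Results.ValidationProblem(
                new Dictionary&lt;string, string[]&gt; { [&quot;request&quot;] = errors.ToArray() });
        }

        return await next(context);
    }
}
</code></pre>
<p>The filter short-circuits invalid requests before the handler runs; the handler itself stays oblivious.</p>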
<h2 id="part-7-the-ocp-with-modern-c-features">Part 7: The OCP with Modern C# Features</h2>
<p>C# has evolved significantly since the OCP was first formulated. Several modern language features make it easier to follow the principle — and a few can tempt you into violating it.</p>
<h3 id="generics">Generics</h3>
<p>Generics are a powerful tool for building OCP-compliant abstractions. A generic interface or class can work with types that do not exist yet when the generic is written.</p>
<pre><code class="language-csharp">public interface IRepository&lt;T&gt; where T : class, IEntity
{
    Task&lt;T?&gt; GetByIdAsync(int id);
    Task&lt;IReadOnlyList&lt;T&gt;&gt; GetAllAsync();
    Task AddAsync(T entity);
    Task UpdateAsync(T entity);
    Task DeleteAsync(int id);
}

public class EfRepository&lt;T&gt; : IRepository&lt;T&gt; where T : class, IEntity
{
    private readonly AppDbContext _context;

    public EfRepository(AppDbContext context) =&gt; _context = context;

    public async Task&lt;T?&gt; GetByIdAsync(int id)
        =&gt; await _context.Set&lt;T&gt;().FindAsync(id);

    public async Task&lt;IReadOnlyList&lt;T&gt;&gt; GetAllAsync()
        =&gt; await _context.Set&lt;T&gt;().ToListAsync();

    public async Task AddAsync(T entity)
    {
        await _context.Set&lt;T&gt;().AddAsync(entity);
        await _context.SaveChangesAsync();
    }

    public async Task UpdateAsync(T entity)
    {
        _context.Set&lt;T&gt;().Update(entity);
        await _context.SaveChangesAsync();
    }

    public async Task DeleteAsync(int id)
    {
        var entity = await GetByIdAsync(id);
        if (entity is not null)
        {
            _context.Set&lt;T&gt;().Remove(entity);
            await _context.SaveChangesAsync();
        }
    }
}
</code></pre>
<p>When you add a new entity type (<code>Invoice</code>, <code>Customer</code>, <code>Product</code>), you do not modify <code>EfRepository&lt;T&gt;</code>. You just use it with the new type. That is OCP through generics.</p>
<h3 id="delegates-and-funcaction">Delegates and Func/Action</h3>
<p>You do not always need a full interface to achieve OCP. Sometimes a delegate is enough. Delegates are the smallest possible abstraction — a single method signature.</p>
<pre><code class="language-csharp">public class RetryHandler
{
    public async Task&lt;T&gt; ExecuteWithRetryAsync&lt;T&gt;(
        Func&lt;Task&lt;T&gt;&gt; operation,
        int maxRetries = 3,
        TimeSpan? delay = null)
    {
        var retryDelay = delay ?? TimeSpan.FromSeconds(1);

        // Attempts 1 .. maxRetries-1 swallow the failure and back off;
        // the final attempt below is allowed to throw.
        for (int attempt = 1; attempt &lt; maxRetries; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception)
            {
                await Task.Delay(retryDelay * attempt);
            }
        }

        return await operation(); // Final attempt — let it throw
    }
}
</code></pre>
<p>This class can retry <em>any</em> async operation without knowing what that operation does. It is closed for modification. You extend it by passing in different <code>Func&lt;Task&lt;T&gt;&gt;</code> delegates — which is open for extension.</p>
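<p>Usage is a one-liner per call site. The HTTP call and URL here are illustrative — any <code>Func&lt;Task&lt;T&gt;&gt;</code> fits:</p>
<pre><code class="language-csharp">var retry = new RetryHandler();

var payload = await retry.ExecuteWithRetryAsync(
    () =&gt; httpClient.GetStringAsync(&quot;https://api.example.com/orders&quot;),
    maxRetries: 5,
    delay: TimeSpan.FromMilliseconds(500));
</code></pre>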
<h3 id="extension-methods">Extension Methods</h3>
<p>Extension methods let you add behavior to existing types without modifying them. This is literally the OCP at the language level.</p>
<pre><code class="language-csharp">public static class StringExtensions
{
    public static string Truncate(this string value, int maxLength)
    {
        if (string.IsNullOrEmpty(value)) return value;
        return value.Length &lt;= maxLength
            ? value
            : value[..maxLength] + &quot;…&quot;;
    }

    public static string ToSlug(this string value)
    {
        var slug = value.ToLowerInvariant();
        slug = Regex.Replace(slug, @&quot;[^a-z0-9\s-]&quot;, &quot;&quot;);
        slug = Regex.Replace(slug, @&quot;\s+&quot;, &quot;-&quot;);
        slug = Regex.Replace(slug, @&quot;-+&quot;, &quot;-&quot;);
        return slug.Trim('-');
    }
}
</code></pre>
<p>The <code>string</code> class is closed for modification (you cannot change it — it is in the BCL). But it is open for extension via extension methods.</p>
<h3 id="a-word-of-caution-pattern-matching-and-switch-expressions">A Word of Caution: Pattern Matching and Switch Expressions</h3>
<p>C# has made pattern matching and switch expressions beautifully concise. This can actually make OCP violations <em>more</em> attractive, because they look so clean:</p>
<pre><code class="language-csharp">public decimal CalculateTax(Address address) =&gt; address.State switch
{
    &quot;CA&quot; =&gt; address.SubTotal * 0.0725m,
    &quot;TX&quot; =&gt; address.SubTotal * 0.0625m,
    &quot;NY&quot; =&gt; address.SubTotal * 0.08m,
    &quot;OR&quot; =&gt; 0m, // No sales tax
    _ =&gt; address.SubTotal * 0.05m
};
</code></pre>
<p>This is elegant, readable, and a clear OCP violation. Every time a state's tax rate changes or a new state is added, you modify this method. Whether that matters depends on context. If tax rates change frequently and the calculation is complex (considering county taxes, exemptions, thresholds), you should extract a strategy. If the rates are stable and the calculation is trivial, the switch expression might be perfectly fine. The OCP is a guide, not a religion.</p>
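<p>If the calculation does grow complex enough to justify it, the extraction follows the same shape as Part 4. A sketch, with illustrative type names:</p>
<pre><code class="language-csharp">public interface IStateTaxPolicy
{
    string State { get; }
    decimal Calculate(decimal subTotal);
}

public class CaliforniaTaxPolicy : IStateTaxPolicy
{
    public string State =&gt; &quot;CA&quot;;
    public decimal Calculate(decimal subTotal) =&gt; subTotal * 0.0725m;
}

public class OregonTaxPolicy : IStateTaxPolicy
{
    public string State =&gt; &quot;OR&quot;;
    public decimal Calculate(decimal subTotal) =&gt; 0m; // No sales tax
}

public class TaxCalculator
{
    private readonly IReadOnlyDictionary&lt;string, IStateTaxPolicy&gt; _policies;
    private readonly decimal _defaultRate;

    public TaxCalculator(IEnumerable&lt;IStateTaxPolicy&gt; policies, decimal defaultRate = 0.05m)
    {
        _policies = policies.ToDictionary(p =&gt; p.State);
        _defaultRate = defaultRate;
    }

    public decimal CalculateTax(string state, decimal subTotal)
        =&gt; _policies.TryGetValue(state, out var policy)
            ? policy.Calculate(subTotal)
            : subTotal * _defaultRate;
}
</code></pre>
<p>A rate change now touches one small class, and a new state is a new class plus a registration — <code>TaxCalculator</code> never changes.</p>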
<h2 id="part-8-the-ocp-and-testability">Part 8: The OCP and Testability</h2>
<p>One of the most practical benefits of following the OCP is that it makes your code dramatically easier to test. When behavior is hidden behind abstractions, you can substitute test doubles (mocks, stubs, fakes) without any ceremony.</p>
<h3 id="testing-ocp-compliant-code">Testing OCP-Compliant Code</h3>
<p>Consider the <code>InvoicePrinter</code> from Part 4. Testing it is trivial because it depends on <code>IInvoiceFormatter</code>, not on concrete implementations:</p>
<pre><code class="language-csharp">public class InvoicePrinterTests
{
    [Fact]
    public void Print_UsesCorrectFormatterForInvoiceType()
    {
        // Arrange
        var invoice = new Invoice
        {
            Type = InvoiceType.Standard,
            Number = &quot;INV-001&quot;,
            CustomerName = &quot;Acme Corp&quot;,
            Lines = [new InvoiceLine(&quot;Widget&quot;, 99.99m)],
            Total = 99.99m
        };

        var mockFormatter = new TestInvoiceFormatter(
            InvoiceType.Standard,
            &quot;FORMATTED OUTPUT&quot;);

        var printer = new InvoicePrinter([mockFormatter]);

        // Act
        var result = printer.Print(invoice);

        // Assert
        Assert.Equal(&quot;FORMATTED OUTPUT&quot;, result);
    }

    [Fact]
    public void Print_ThrowsForUnregisteredInvoiceType()
    {
        // Arrange
        var invoice = new Invoice { Type = InvoiceType.CreditNote };
        var printer = new InvoicePrinter([]); // No formatters registered

        // Act &amp; Assert
        Assert.Throws&lt;NotSupportedException&gt;(() =&gt; printer.Print(invoice));
    }

    private class TestInvoiceFormatter : IInvoiceFormatter
    {
        private readonly string _output;
        public InvoiceType SupportedType { get; }

        public TestInvoiceFormatter(InvoiceType type, string output)
        {
            SupportedType = type;
            _output = output;
        }

        public string Format(Invoice invoice) =&gt; _output;
    }
}
</code></pre>
<p>Notice how the test does not need to know anything about how standard invoices are actually formatted. It tests the <em>printer's</em> behavior (routing to the correct formatter) in isolation. The formatter's behavior is tested separately, in <code>StandardInvoiceFormatterTests</code>.</p>
<h3 id="testing-without-ocp">Testing Without OCP</h3>
<p>Compare this to testing the original switch-based <code>InvoicePrinter</code>. You would need to construct a real invoice, call <code>Print()</code>, and assert against the actual formatted output. If the formatting logic changes, the test breaks. If you want to test the routing logic separately from the formatting logic, you cannot — they are entangled in the same method.</p>
<h3 id="the-ocp-makes-mocking-unnecessary-sometimes">The OCP Makes Mocking Unnecessary (Sometimes)</h3>
<p>When your abstractions are simple enough, you do not even need a mocking framework. The <code>TestInvoiceFormatter</code> above is a hand-written fake — it took about a dozen lines of code. This is often clearer than using Moq or NSubstitute, because the fake's behavior is explicit and visible in the test.</p>
<p>For more complex interactions, mocking frameworks still have their place. But the OCP ensures that the seams where you inject mocks are well-defined and stable.</p>
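<p>When a test needs to verify interactions as well as return values, a hand-written fake can still be made configurable without reaching for a framework. The sketch below is illustrative only; the <code>RecordingFormatter</code> name and its members are invented for this example and are not part of the codebase shown earlier:</p>
<pre><code class="language-csharp">// A reusable fake: behavior is injected as a delegate, and every call
// is recorded so tests can assert on interactions.
public class RecordingFormatter : IInvoiceFormatter
{
    private readonly Func&lt;Invoice, string&gt; _format;

    public RecordingFormatter(
        InvoiceType supportedType,
        Func&lt;Invoice, string&gt;? format = null)
    {
        SupportedType = supportedType;
        _format = format ?? (_ =&gt; &quot;FAKE OUTPUT&quot;);
    }

    public InvoiceType SupportedType { get; }

    // Every invoice passed to Format, in call order.
    public List&lt;Invoice&gt; Received { get; } = [];

    public string Format(Invoice invoice)
    {
        Received.Add(invoice);
        return _format(invoice);
    }
}
</code></pre>
<p>A test can then assert both the output and that the printer called the formatter exactly once (<code>Assert.Single(fake.Received)</code>), which covers many of the cases where a <code>Verify</code> call on a mock would otherwise be used.</p>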
<h2 id="part-9-when-not-to-follow-the-ocp">Part 9: When NOT to Follow the OCP</h2>
<p>The OCP is a tool, not a commandment. There are legitimate situations where following it would make your code worse, not better.</p>
<h3 id="when-the-axis-of-change-is-unknown">When the Axis of Change Is Unknown</h3>
<p>The OCP requires you to predict <em>where</em> change will happen so you can place an abstraction there. If you guess wrong, you end up with an abstraction that no one ever extends, and a codebase full of interfaces with exactly one implementation. This is sometimes called &quot;speculative generality&quot; — one of Martin Fowler's code smells.</p>
<p>Do not pre-abstract everything on the off chance it might change someday. Instead, follow the &quot;Rule of Three&quot;: the first time you encounter a new variation, handle it inline. The second time, note the pattern. The third time, refactor to an abstraction. By the third occurrence, you have enough data to know what the actual axis of change is.</p>
<h3 id="when-the-cost-of-abstraction-exceeds-the-cost-of-modification">When the Cost of Abstraction Exceeds the Cost of Modification</h3>
<p>Every abstraction has a cost. It adds a file, an interface, a registration, and a level of indirection that the next developer must understand. If your switch statement has three cases and has not changed in two years, the OCP refactoring is not &quot;better&quot; — it is just more code.</p>
<p>Ask yourself: &quot;What is the cost of modifying this code when the next case arrives?&quot; If the answer is &quot;five minutes and a recompile,&quot; the switch statement is fine. If the answer is &quot;two hours of careful surgery in a 400-line method with 15 tests to update,&quot; it is time to refactor.</p>
<h3 id="when-you-are-doing-a-planned-refactoring">When You Are Doing a Planned Refactoring</h3>
<p>Following the OCP slavishly can prevent healthy refactoring. If you discover that your abstraction was wrong — that the interface is too broad, or the responsibilities are divided along the wrong axis — you need to modify the existing code. That is not a violation of the OCP. That is software development.</p>
<p>The OCP guides the <em>steady-state</em> evolution of a system: how you add new features to a stable codebase. It does not mean &quot;never change existing code ever again.&quot; Refactoring, fixing bugs, updating dependencies, and redesigning modules are all legitimate reasons to modify existing code.</p>
<h3 id="when-performance-matters">When Performance Matters</h3>
<p>Virtual dispatch (calling a method through an interface) has a small cost compared to a direct call. In most applications, this cost is negligible. But in hot paths — tight loops processing millions of items, real-time game physics, high-frequency trading — the overhead of abstraction can matter. In these cases, a well-optimized switch statement or even a lookup table might be the right choice.</p>
<p>Modern .NET has narrowed this gap considerably. The JIT compiler can devirtualize calls in many cases, and the performance difference between a virtual call and a direct call is often just a few nanoseconds. But if you are in a domain where nanoseconds matter, measure before abstracting.</p>
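<p>To make that concrete, here is a sketch of the hot-path alternative (the <code>Shape</code> domain is invented for this example). A switch expression over an enum compiles to direct branches or a jump table with no indirect calls, so the JIT can inline the arms:</p>
<pre><code class="language-csharp">public enum Shape { Circle, Square, Triangle }

public static class FastArea
{
    // Direct dispatch: no interface, no virtual call. In a tight loop
    // over millions of shapes, this avoids per-item indirection entirely.
    public static double Of(Shape shape, double size) =&gt; shape switch
    {
        Shape.Circle   =&gt; Math.PI * size * size,           // size = radius
        Shape.Square   =&gt; size * size,                     // size = side length
        Shape.Triangle =&gt; size * size * Math.Sqrt(3) / 4,  // equilateral side
        _ =&gt; throw new ArgumentOutOfRangeException(nameof(shape))
    };
}
</code></pre>
<p>The trade-off is exactly the one this article is about: adding a new shape means editing <code>FastArea.Of</code>. In a measured hot path, that can be the right price to pay.</p>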
<h3 id="the-pragmatic-middle-ground">The Pragmatic Middle Ground</h3>
<p>The best developers do not follow the OCP blindly, and they do not ignore it either. They develop an intuition for when an abstraction will pay for itself and when it will not. That intuition comes from experience — from seeing which switch statements grew out of control and which ones stayed stable for years.</p>
<p>A useful mental model: think of the OCP as <em>insurance</em>. You pay a small upfront cost (the abstraction) to protect against a future cost (modifying existing code). Like real insurance, it is not worth paying for unlikely risks. But for likely risks — a payment processor that will definitely need new payment methods, a notification system that will definitely need new channels — the premium is well worth it.</p>
<h2 id="part-10-common-criticisms-and-misconceptions">Part 10: Common Criticisms and Misconceptions</h2>
<p>The OCP has its share of critics, and some of their points are valid. Let us address the most common ones.</p>
<h3 id="you-cannot-predict-the-future">&quot;You Cannot Predict the Future&quot;</h3>
<p>This is the strongest criticism. The OCP asks you to design extension points, but you can only place them where you think change will happen. If you are wrong, the extension points are useless, and the change you did not anticipate requires modifying the code anyway.</p>
<p>The counterargument is that you do not need to predict the future perfectly. You just need to observe the past. If your payment processor has had three new payment methods added in the last year, it is a safe bet that a fourth is coming. If your report generator has had exactly one format for five years, it probably does not need an abstraction.</p>
<h3 id="it-leads-to-too-many-classes">&quot;It Leads to Too Many Classes&quot;</h3>
<p>A strict application of the OCP can produce a proliferation of small classes: one interface, one implementation per case, one factory, one registration. For a system with twenty payment methods, that is at least twenty-two types (the interface, the twenty implementations, and the service that uses them) instead of one class with a twenty-case switch.</p>
<p>This is a real trade-off. More classes means more files to navigate, more registrations to maintain, and more cognitive load for developers new to the codebase. The mitigation is to use consistent naming conventions (so the classes are predictable) and to keep each class small and focused (so they are easy to understand in isolation).</p>
<h3 id="interfaces-with-one-implementation-are-a-waste">&quot;Interfaces With One Implementation Are a Waste&quot;</h3>
<p>If you have <code>IShippingCalculator</code> and <code>ShippingCalculator</code>, and no other implementations exist or are planned, the interface is just ceremony. Some developers (and some style guides) argue that you should not introduce an interface until you need a second implementation.</p>
<p>This is a reasonable position. The counterarguments are: (1) the interface makes the class testable via mocking, even if there is only one production implementation, and (2) the interface documents the contract, making it explicit what the class promises to do. Whether those benefits justify the extra file is a judgment call.</p>
<h3 id="martins-ocp-is-not-meyers-ocp">&quot;Martin's OCP Is Not Meyer's OCP&quot;</h3>
<p>This is historically accurate. Robert C. Martin's reformulation of the OCP using interfaces and polymorphism is substantially different from Bertrand Meyer's original formulation using implementation inheritance. Some purists argue that Martin co-opted the term and changed its meaning.</p>
<p>This is an interesting debate for historians of software engineering, but it is not very useful for working developers. Both formulations share the same core insight: systems are more maintainable when new behavior can be added without modifying existing code. The mechanism differs, but the goal is identical.</p>
<h2 id="part-11-real-world-ocp-a-complete-example">Part 11: Real-World OCP — A Complete Example</h2>
<p>Let us build a complete, realistic example that ties together everything we have discussed. Imagine you are building a document export service for a SaaS application. Users can export their data in various formats, and you expect the list of formats to grow over time.</p>
<h3 id="the-domain">The Domain</h3>
<pre><code class="language-csharp">public record ExportRequest(
    string UserId,
    string DocumentId,
    string Format,
    ExportOptions Options);

public record ExportOptions(
    bool IncludeMetadata = true,
    bool IncludeComments = false,
    string? WatermarkText = null);

public record ExportResult(
    string FileName,
    string ContentType,
    byte[] Content);
</code></pre>
<h3 id="the-abstraction">The Abstraction</h3>
<pre><code class="language-csharp">public interface IDocumentExporter
{
    /// &lt;summary&gt;
    /// The format identifier this exporter handles (e.g., &quot;pdf&quot;, &quot;docx&quot;, &quot;csv&quot;).
    /// &lt;/summary&gt;
    string Format { get; }

    /// &lt;summary&gt;
    /// Exports a document in this exporter's format.
    /// &lt;/summary&gt;
    Task&lt;ExportResult&gt; ExportAsync(Document document, ExportOptions options);
}
</code></pre>
<h3 id="the-implementations">The Implementations</h3>
<pre><code class="language-csharp">public class PdfExporter : IDocumentExporter
{
    private readonly ILogger&lt;PdfExporter&gt; _logger;

    public PdfExporter(ILogger&lt;PdfExporter&gt; logger) =&gt; _logger = logger;

    public string Format =&gt; &quot;pdf&quot;;

    public async Task&lt;ExportResult&gt; ExportAsync(Document document, ExportOptions options)
    {
        _logger.LogInformation(&quot;Exporting document {DocumentId} as PDF&quot;, document.Id);

        // In a real app, you would use a library like QuestPDF or iText
        var pdfBytes = await GeneratePdfAsync(document, options);

        return new ExportResult(
            FileName: $&quot;{document.Title.ToSlug()}.pdf&quot;,
            ContentType: &quot;application/pdf&quot;,
            Content: pdfBytes);
    }

    private Task&lt;byte[]&gt; GeneratePdfAsync(Document document, ExportOptions options)
    {
        // PDF generation logic here
        // This is where QuestPDF, iText, or similar would be used
        throw new NotImplementedException(&quot;PDF generation not shown for brevity&quot;);
    }
}

public class CsvExporter : IDocumentExporter
{
    public string Format =&gt; &quot;csv&quot;;

    public Task&lt;ExportResult&gt; ExportAsync(Document document, ExportOptions options)
    {
        var sb = new StringBuilder();

        if (options.IncludeMetadata)
        {
            sb.AppendLine($&quot;# Title: {document.Title}&quot;);
            sb.AppendLine($&quot;# Author: {document.Author}&quot;);
            sb.AppendLine($&quot;# Created: {document.CreatedAt:O}&quot;);
            sb.AppendLine();
        }

        sb.AppendLine(&quot;Section,Content&quot;);
        foreach (var section in document.Sections)
        {
            var escapedTitle = section.Title.Replace(&quot;\&quot;&quot;, &quot;\&quot;\&quot;&quot;);
            var escapedContent = section.Content.Replace(&quot;\&quot;&quot;, &quot;\&quot;\&quot;&quot;);
            sb.AppendLine($&quot;\&quot;{escapedTitle}\&quot;,\&quot;{escapedContent}\&quot;&quot;);
        }

        var bytes = Encoding.UTF8.GetBytes(sb.ToString());

        return Task.FromResult(new ExportResult(
            FileName: $&quot;{document.Title.ToSlug()}.csv&quot;,
            ContentType: &quot;text/csv&quot;,
            Content: bytes));
    }
}

public class MarkdownExporter : IDocumentExporter
{
    public string Format =&gt; &quot;md&quot;;

    public Task&lt;ExportResult&gt; ExportAsync(Document document, ExportOptions options)
    {
        var sb = new StringBuilder();

        sb.AppendLine($&quot;# {document.Title}&quot;);
        sb.AppendLine();

        if (options.IncludeMetadata)
        {
            sb.AppendLine($&quot;*Author: {document.Author}*&quot;);
            sb.AppendLine($&quot;*Created: {document.CreatedAt:yyyy-MM-dd}*&quot;);
            sb.AppendLine();
        }

        foreach (var section in document.Sections)
        {
            sb.AppendLine($&quot;## {section.Title}&quot;);
            sb.AppendLine();
            sb.AppendLine(section.Content);
            sb.AppendLine();

            if (options.IncludeComments &amp;&amp; section.Comments.Count &gt; 0)
            {
                sb.AppendLine(&quot;### Comments&quot;);
                sb.AppendLine();
                foreach (var comment in section.Comments)
                {
                    sb.AppendLine($&quot;&gt; **{comment.Author}** ({comment.Date:yyyy-MM-dd}): {comment.Text}&quot;);
                    sb.AppendLine();
                }
            }
        }

        if (options.WatermarkText is not null)
        {
            sb.AppendLine(&quot;---&quot;);
            sb.AppendLine($&quot;*{options.WatermarkText}*&quot;);
        }

        var bytes = Encoding.UTF8.GetBytes(sb.ToString());

        return Task.FromResult(new ExportResult(
            FileName: $&quot;{document.Title.ToSlug()}.md&quot;,
            ContentType: &quot;text/markdown&quot;,
            Content: bytes));
    }
}
</code></pre>
<h3 id="the-service">The Service</h3>
<pre><code class="language-csharp">public class DocumentExportService
{
    private readonly IReadOnlyDictionary&lt;string, IDocumentExporter&gt; _exporters;
    private readonly IDocumentRepository _documents;
    private readonly ILogger&lt;DocumentExportService&gt; _logger;

    public DocumentExportService(
        IEnumerable&lt;IDocumentExporter&gt; exporters,
        IDocumentRepository documents,
        ILogger&lt;DocumentExportService&gt; logger)
    {
        _exporters = exporters.ToDictionary(
            e =&gt; e.Format,
            StringComparer.OrdinalIgnoreCase);
        _documents = documents;
        _logger = logger;
    }

    public IReadOnlyCollection&lt;string&gt; SupportedFormats =&gt; _exporters.Keys.ToList();

    public async Task&lt;ExportResult&gt; ExportAsync(ExportRequest request)
    {
        if (!_exporters.TryGetValue(request.Format, out var exporter))
        {
            throw new NotSupportedException(
                $&quot;Export format '{request.Format}' is not supported. &quot; +
                $&quot;Supported formats: {string.Join(&quot;, &quot;, SupportedFormats)}&quot;);
        }

        var document = await _documents.GetByIdAsync(request.DocumentId)
            ?? throw new InvalidOperationException(
                $&quot;Document '{request.DocumentId}' not found.&quot;);

        _logger.LogInformation(
            &quot;User {UserId} exporting document {DocumentId} as {Format}&quot;,
            request.UserId,
            request.DocumentId,
            request.Format);

        return await exporter.ExportAsync(document, request.Options);
    }
}
</code></pre>
<h3 id="the-api-endpoint">The API Endpoint</h3>
<pre><code class="language-csharp">app.MapGet(&quot;/api/export/formats&quot;, (DocumentExportService service) =&gt;
    Results.Ok(service.SupportedFormats));

app.MapPost(&quot;/api/export&quot;, async (ExportRequest request, DocumentExportService service) =&gt;
{
    var result = await service.ExportAsync(request);
    return Results.File(result.Content, result.ContentType, result.FileName);
})
.RequireAuthorization();
</code></pre>
<h3 id="the-di-registration">The DI Registration</h3>
<pre><code class="language-csharp">builder.Services.AddSingleton&lt;IDocumentExporter, PdfExporter&gt;();
builder.Services.AddSingleton&lt;IDocumentExporter, CsvExporter&gt;();
builder.Services.AddSingleton&lt;IDocumentExporter, MarkdownExporter&gt;();
builder.Services.AddScoped&lt;DocumentExportService&gt;();
</code></pre>
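<p>If the registration list itself starts to feel like a file you must touch for every new exporter, one mitigation (sketched here as an option, not part of the original design) is convention-based registration via reflection. The third-party Scrutor NuGet package offers a richer version of the same idea:</p>
<pre><code class="language-csharp">// Register every concrete IDocumentExporter in the assembly.
// Adding a new exporter then requires no registration change at all.
var exporterTypes = typeof(IDocumentExporter).Assembly
    .GetTypes()
    .Where(t =&gt; t.IsClass
             &amp;&amp; !t.IsAbstract
             &amp;&amp; typeof(IDocumentExporter).IsAssignableFrom(t));

foreach (var type in exporterTypes)
{
    builder.Services.AddSingleton(typeof(IDocumentExporter), type);
}
</code></pre>
<p>The cost is discoverability: the registrations are no longer visible at a glance, so a test that resolves all <code>IDocumentExporter</code> instances from the container becomes the safety net.</p>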
<h3 id="adding-a-new-format">Adding a New Format</h3>
<p>Six months from now, a customer asks for JSON export. Here is the entire change:</p>
<pre><code class="language-csharp">public class JsonExporter : IDocumentExporter
{
    public string Format =&gt; &quot;json&quot;;

    public Task&lt;ExportResult&gt; ExportAsync(Document document, ExportOptions options)
    {
        var exportData = new
        {
            document.Title,
            document.Author,
            CreatedAt = document.CreatedAt.ToString(&quot;O&quot;),
            Sections = document.Sections.Select(s =&gt; new
            {
                s.Title,
                s.Content,
                Comments = options.IncludeComments
                    ? s.Comments.Select(c =&gt; new { c.Author, c.Date, c.Text })
                    : null
            }),
            Watermark = options.WatermarkText
        };

        var json = JsonSerializer.Serialize(exportData, new JsonSerializerOptions
        {
            WriteIndented = true,
            DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
        });

        var bytes = Encoding.UTF8.GetBytes(json);

        return Task.FromResult(new ExportResult(
            FileName: $&quot;{document.Title.ToSlug()}.json&quot;,
            ContentType: &quot;application/json&quot;,
            Content: bytes));
    }
}
</code></pre>
<p>And one line in DI registration:</p>
<pre><code class="language-csharp">builder.Services.AddSingleton&lt;IDocumentExporter, JsonExporter&gt;();
</code></pre>
<p>That is it. The <code>DocumentExportService</code> was not modified. The API endpoints were not modified. The existing exporters were not modified. No existing tests were broken. The only new code is the <code>JsonExporter</code> class, its unit tests, and one line of registration.</p>
<p>This is the Open/Closed Principle at work.</p>
<h2 id="part-12-ocp-beyond-object-oriented-programming">Part 12: OCP Beyond Object-Oriented Programming</h2>
<p>The OCP is usually discussed in the context of OOP, but the underlying idea — new behavior via new code, not by modifying old code — applies to other paradigms as well.</p>
<h3 id="functional-approaches">Functional Approaches</h3>
<p>In functional programming, the OCP manifests through higher-order functions, pattern matching on discriminated unions, and composition.</p>
<pre><code class="language-csharp">// A pipeline of transformations — each function extends behavior
// without modifying the others
public static class TextPipeline
{
    public static string Process(
        string input,
        params Func&lt;string, string&gt;[] transforms)
    {
        return transforms.Aggregate(input, (current, transform) =&gt; transform(current));
    }
}

// Usage — adding a new transform is just passing another function
var result = TextPipeline.Process(
    rawText,
    text =&gt; text.Trim(),
    text =&gt; text.ToLowerInvariant(),
    text =&gt; Regex.Replace(text, @&quot;\s+&quot;, &quot; &quot;),
    text =&gt; text.Replace(&quot;colour&quot;, &quot;color&quot;) // New transformation — nothing modified
);
</code></pre>
<p>The <code>Process</code> method is closed for modification. You extend it by passing in additional functions.</p>
<h3 id="event-driven-and-message-based-systems">Event-Driven and Message-Based Systems</h3>
<p>In event-driven architectures, the OCP appears naturally. A message broker (like RabbitMQ, Azure Service Bus, or even an in-process <code>MediatR</code> pipeline) routes messages to handlers. Adding a new handler for an existing message type, or adding a handler for a new message type, does not require modifying any existing handler or the broker itself.</p>
<pre><code class="language-csharp">// MediatR example — each handler is independent
public record OrderPlacedEvent(int OrderId, string CustomerId, decimal Total)
    : INotification;

// Handler 1 — sends confirmation email
public class SendOrderConfirmationHandler
    : INotificationHandler&lt;OrderPlacedEvent&gt;
{
    public async Task Handle(
        OrderPlacedEvent notification,
        CancellationToken cancellationToken)
    {
        // Send email
    }
}

// Handler 2 — reserves inventory
public class ReserveInventoryHandler
    : INotificationHandler&lt;OrderPlacedEvent&gt;
{
    public async Task Handle(
        OrderPlacedEvent notification,
        CancellationToken cancellationToken)
    {
        // Reserve stock
    }
}

// Handler 3 — added six months later, no existing code modified
public class UpdateAnalyticsDashboardHandler
    : INotificationHandler&lt;OrderPlacedEvent&gt;
{
    public async Task Handle(
        OrderPlacedEvent notification,
        CancellationToken cancellationToken)
    {
        // Push to analytics
    }
}
</code></pre>
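<p>To see why this works, here is a deliberately stripped-down, illustrative publisher. It is not MediatR itself (the real pipeline is far more sophisticated), but it shows the dispatch mechanics: resolve every handler registered for the event type and invoke each in turn, so registering a new handler never touches this code.</p>
<pre><code class="language-csharp">// Same shape as MediatR's handler interface, redeclared here so the
// sketch is self-contained.
public interface INotificationHandler&lt;in TEvent&gt;
{
    Task Handle(TEvent notification, CancellationToken cancellationToken);
}

public class InProcessPublisher
{
    private readonly IServiceProvider _services;

    public InProcessPublisher(IServiceProvider services) =&gt; _services = services;

    public async Task Publish&lt;TEvent&gt;(TEvent notification, CancellationToken ct = default)
    {
        // Resolve all handlers registered for TEvent: zero, one, or many.
        // (This is essentially what GetServices&lt;T&gt;() does under the hood.)
        var handlers = (IEnumerable&lt;INotificationHandler&lt;TEvent&gt;&gt;?)
            _services.GetService(typeof(IEnumerable&lt;INotificationHandler&lt;TEvent&gt;&gt;))
            ?? [];

        foreach (var handler in handlers)
        {
            await handler.Handle(notification, ct);
        }
    }
}
</code></pre>
<p>The publisher is closed for modification; the open-ended set of <code>INotificationHandler&lt;TEvent&gt;</code> registrations is the extension point.</p>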
<h3 id="plugin-architectures">Plugin Architectures</h3>
<p>Plugin systems are, as Robert C. Martin himself wrote, the ultimate expression of the OCP. The host application defines extension points (interfaces, events, hooks), and plugins implement them. The host is closed for modification. Plugins provide extension.</p>
<p>Think of Visual Studio extensions, browser extensions, WordPress plugins, or even NuGet packages. When you install a NuGet package that adds a new middleware to your ASP.NET Core pipeline, you are experiencing the OCP. The ASP.NET Core framework did not need to be modified to support that middleware.</p>
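<p>A minimal sketch of the mechanics (the <code>IExportPlugin</code> name is invented for this example, and a production host would isolate plugins with <code>AssemblyLoadContext</code>): the host defines the contract and discovers implementations by reflection, so shipping a new plugin assembly extends the host without recompiling it.</p>
<pre><code class="language-csharp">public interface IExportPlugin
{
    string Name { get; }
    void Execute();
}

public static class PluginLoader
{
    // Find and instantiate every concrete IExportPlugin in an assembly.
    public static IReadOnlyList&lt;IExportPlugin&gt; Discover(Assembly assembly) =&gt;
        assembly.GetTypes()
            .Where(t =&gt; t.IsClass
                     &amp;&amp; !t.IsAbstract
                     &amp;&amp; typeof(IExportPlugin).IsAssignableFrom(t))
            .Select(t =&gt; (IExportPlugin)Activator.CreateInstance(t)!)
            .ToList();
}

// Host side: load a plugin assembly from disk and discover its plugins.
// var plugins = PluginLoader.Discover(Assembly.LoadFrom(pluginPath));
</code></pre>
<p>Note that <code>Activator.CreateInstance</code> assumes a parameterless constructor; real plugin hosts typically hand each plugin a service provider or configuration object instead.</p>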
<h2 id="part-13-a-checklist-for-applying-the-ocp">Part 13: A Checklist for Applying the OCP</h2>
<p>When you are designing a new feature or refactoring existing code, run through this checklist:</p>
<p><strong>1. Identify the axis of change.</strong> What is likely to change in this part of the system? New payment methods? New report formats? New validation rules? New notification channels? The answer tells you where to place your abstraction.</p>
<p><strong>2. Define the abstraction.</strong> Create an interface (or abstract class, or delegate) that captures the varying behavior. Keep it as small as possible — the Interface Segregation Principle is your friend here.</p>
<p><strong>3. Implement the abstraction for existing cases.</strong> Extract each case from the switch/if-else chain into its own class that implements the interface.</p>
<p><strong>4. Compose via the abstraction.</strong> The consuming class should depend only on the interface, receive implementations via dependency injection, and dispatch to the correct one.</p>
<p><strong>5. Register in DI.</strong> Wire up the implementations in your composition root (<code>Program.cs</code> in ASP.NET Core).</p>
<p><strong>6. Write tests.</strong> Test each implementation in isolation. Test the consuming class with fake implementations. Verify that adding a new implementation does not break existing tests.</p>
<p><strong>7. Resist premature abstraction.</strong> If you only have one or two cases and no clear evidence of more coming, consider waiting. The Rule of Three is your friend.</p>
<p><strong>8. Delete dead abstractions.</strong> If an interface has had one implementation for three years and there is no realistic prospect of a second, consider inlining it. Abstractions that do not earn their keep are clutter.</p>
<h2 id="part-14-resources-and-further-reading">Part 14: Resources and Further Reading</h2>
<p>Here are authoritative resources for deepening your understanding of the Open/Closed Principle and SOLID design:</p>
<ul>
<li><p><strong>Robert C. Martin, &quot;The Open-Closed Principle&quot; (1996)</strong> — The seminal article that reformulated the OCP for the age of interfaces and polymorphism. Available in Martin's book <em>Agile Software Development, Principles, Patterns, and Practices</em> (Prentice Hall, 2003).</p>
</li>
<li><p><strong>Robert C. Martin, <em>Clean Architecture: A Craftsman's Guide to Software Structure and Design</em> (2017)</strong> — Chapter 8 covers the OCP in the context of software architecture, including the concept of protecting higher-level policies from changes in lower-level details.</p>
</li>
<li><p><strong>Bertrand Meyer, <em>Object-Oriented Software Construction</em>, 2nd Edition (1997)</strong> — The original source of the OCP. The second edition (1997) is more accessible than the first (1988), though both are dense. Available from Prentice Hall.</p>
</li>
<li><p><strong>Robert C. Martin's Clean Coder Blog</strong> — Martin's post &quot;The Open-Closed Principle&quot; (May 2014) discusses plugin architectures as the &quot;apotheosis&quot; of the OCP: <a href="http://blog.cleancoder.com/uncle-bob/2014/05/12/TheOpenClosedPrinciple.html">blog.cleancoder.com/uncle-bob/2014/05/12/TheOpenClosedPrinciple.html</a></p>
</li>
<li><p><strong>Martin Fowler, <em>Refactoring: Improving the Design of Existing Code</em>, 2nd Edition (2018)</strong> — Covers &quot;Replace Conditional with Polymorphism&quot; and other refactorings that move code toward OCP compliance.</p>
</li>
<li><p><strong>The SOLID Wikipedia article</strong> — A concise overview of all five principles with references: <a href="https://en.wikipedia.org/wiki/SOLID">en.wikipedia.org/wiki/SOLID</a></p>
</li>
<li><p><strong>Microsoft's ASP.NET Core documentation on Middleware</strong> — A real-world example of OCP-compliant architecture: <a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/middleware">learn.microsoft.com/en-us/aspnet/core/fundamentals/middleware</a></p>
</li>
<li><p><strong>Microsoft's ASP.NET Core documentation on Dependency Injection</strong> — The DI container is the mechanism that makes OCP practical in .NET: <a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection">learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection</a></p>
</li>
<li><p><strong>Design Patterns: Elements of Reusable Object-Oriented Software (1994)</strong> — The Gang of Four book. Strategy, Decorator, Template Method, Observer, and Factory Method patterns are all expressions of the OCP.</p>
</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>The Open/Closed Principle is not about never modifying code. It is about designing your code so that the <em>most common kind of change</em> — adding a new variation of something that already exists — can be accomplished by writing new code rather than modifying old code.</p>
<p>The principle was born in 1988 when Bertrand Meyer observed that libraries were hard to evolve without breaking their clients. It was refined in 1996 when Robert C. Martin replaced inheritance with interfaces as the primary mechanism. And it is alive today in every ASP.NET Core middleware you write, every <code>IRepository&lt;T&gt;</code> you inject, and every strategy pattern you implement.</p>
<p>The key insight is not the technique. The technique — interfaces, dependency injection, polymorphism — is just mechanics. The key insight is the <em>question</em>: &quot;If I add a new case to this system, how much existing code do I have to change?&quot; If the answer is &quot;none,&quot; you have followed the OCP. If the answer is &quot;one file that I own and understand,&quot; you are probably fine. If the answer is &quot;twelve files across three projects,&quot; you have a design problem.</p>
<p>Build your systems like camera bodies and lenses. The body defines the mount — the interface, the extension point. Lenses (implementations) can be swapped without rewiring the body. Some photographers never buy more than two lenses, and that is fine. But when the day comes that they need a telephoto, they do not need to buy a new camera.</p>
<p>Write code that does not need to be rewritten when the next requirement arrives. That is the Open/Closed Principle. And on your next Thursday afternoon, when the product owner walks over with a new carrier, a new format, or a new payment method, you will be ready.</p>
]]></content:encoded>
      <category>csharp</category>
      <category>dotnet</category>
      <category>solid</category>
      <category>design-principles</category>
      <category>software-architecture</category>
      <category>best-practices</category>
      <category>deep-dive</category>
    </item>
    <item>
      <title>The Single Responsibility Principle: A Complete Guide for .NET Developers</title>
      <link>https://observermagazine.github.io/blog/single-responsibility-principle-complete-guide</link>
      <description>A comprehensive deep dive into the Single Responsibility Principle — from its intellectual origins in structured analysis through Robert C. Martin's evolving definitions, with extensive C# examples showing how to recognize, refactor, and sustain SRP in real-world .NET applications.</description>
      <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/single-responsibility-principle-complete-guide</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<p>The Single Responsibility Principle is the most frequently cited, most frequently misunderstood, and most frequently violated of the five SOLID principles. Ask ten developers what SRP means, and you will get at least three different answers: &quot;a class should do one thing,&quot; &quot;a class should have one reason to change,&quot; and &quot;a class should be responsible to one actor.&quot; All three of these formulations have been used at various points in the principle's history. Only the last one captures what the principle's author, Robert C. Martin, actually intended.</p>
<p>This article traces SRP from its intellectual roots in the 1970s through its final formulation in 2017. Along the way, we will look at dozens of C# code examples — from obvious violations to subtle ones — and build practical intuition for applying SRP in everyday .NET development. We will also examine the tension between SRP and pragmatism, because blindly splitting every class into the smallest possible pieces creates its own problems.</p>
<h2 id="part-1-where-the-single-responsibility-principle-came-from">Part 1: Where the Single Responsibility Principle Came From</h2>
<h3 id="cohesion-the-idea-before-the-name">Cohesion: The Idea Before the Name</h3>
<p>Long before Robert C. Martin coined the term &quot;Single Responsibility Principle,&quot; software engineers were grappling with the same underlying concept under a different name: <strong>cohesion</strong>.</p>
<p>In 1978, Tom DeMarco published <em>Structured Analysis and System Specification</em>, a book about decomposing systems into modules using data flow diagrams. DeMarco argued that a well-designed module should have a clear, focused purpose. When a module's internal elements were all related to the same concern, DeMarco called it &quot;cohesive.&quot; When a module mixed unrelated concerns, it was said to have low cohesion — and low cohesion led to fragile, hard-to-change systems.</p>
<p>Around the same time, Meilir Page-Jones wrote <em>The Practical Guide to Structured Systems Design</em> (1980), which formalized a spectrum of cohesion types ranging from &quot;coincidental cohesion&quot; (the worst — elements thrown together for no reason) through &quot;functional cohesion&quot; (the best — every element contributes to a single, well-defined task).</p>
<p>Larry Constantine and Edward Yourdon had introduced these ideas even earlier in <em>Structured Design</em> (1975), identifying seven levels of cohesion. The insight was always the same: modules that group related things together are easier to understand, easier to test, and easier to change.</p>
<h3 id="robert-c.martin-and-the-birth-of-srp">Robert C. Martin and the Birth of SRP</h3>
<p>Robert C. Martin — widely known as &quot;Uncle Bob&quot; — synthesized these ideas into a single, memorable principle in the late 1990s. He introduced the term &quot;Single Responsibility Principle&quot; in his article <em>The Principles of OOD</em> and later included it as the first of the five SOLID principles in his 2003 book <em>Agile Software Development, Principles, Patterns, and Practices</em>.</p>
<p>Martin's original formulation was:</p>
<blockquote>
<p>A class should have only one reason to change.</p>
</blockquote>
<p>This was elegant and quotable, but it turned out to be ambiguous. What counts as a &quot;reason to change&quot;? Is a bug fix a reason to change? Is a refactoring a reason to change? Is a new business requirement a reason to change? Developers argued endlessly about where to draw the line.</p>
<h3 id="the-2014-clarification">The 2014 Clarification</h3>
<p>In May 2014, Martin published a blog post titled &quot;The Single Responsibility Principle&quot; on his Clean Coder blog. In it, he acknowledged the confusion around &quot;reason to change&quot; and tried to clarify. The key insight was that &quot;reasons to change&quot; map to <strong>people</strong> — specifically, to the different stakeholders or user groups whose needs drive changes to the software.</p>
<p>Martin used the example of an <code>Employee</code> class with three methods: <code>calculatePay()</code>, <code>reportHours()</code>, and <code>save()</code>. Each method serves a different stakeholder: the CFO's organization cares about pay calculation, the COO's organization cares about hour reporting, and the CTO's organization cares about database persistence. Three stakeholders, three reasons to change — and therefore three responsibilities that should live in separate classes or modules.</p>
<p>He also offered an alternative phrasing: &quot;Gather together the things that change for the same reasons. Separate those things that change for different reasons.&quot; This is really just another way of describing cohesion and coupling — maximize cohesion within a module, minimize coupling between modules.</p>
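<p>Martin's <code>Employee</code> example maps directly to C#. The sketch below separates the three methods along actor boundaries; the class names (<code>PayCalculator</code>, <code>HourReporter</code>, <code>EmployeeRepository</code>) are illustrative, not from Martin's text:</p>
<pre><code class="language-csharp">// Shared data; no behavior owned by any single actor
public record Employee(int Id, string Name, decimal HourlyRate, double HoursWorked);

// Answers to the CFO's organization
public class PayCalculator
{
    public decimal CalculatePay(Employee employee) =&gt;
        employee.HourlyRate * (decimal)employee.HoursWorked;
}

// Answers to the COO's organization
public class HourReporter
{
    public string ReportHours(Employee employee) =&gt;
        $&quot;{employee.Name}: {employee.HoursWorked:F1} hours&quot;;
}

// Answers to the CTO's organization
public class EmployeeRepository
{
    public void Save(Employee employee)
    {
        // Persistence details live here, invisible to the other two actors
    }
}
</code></pre>
<p>A payroll rule change now touches only <code>PayCalculator</code>, and a schema change touches only <code>EmployeeRepository</code>; the two changes can no longer collide.</p>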
<h3 id="the-final-definition-in-clean-architecture">The Final Definition in Clean Architecture</h3>
<p>In his 2017 book <em>Clean Architecture: A Craftsman's Guide to Software Structure and Design</em>, Martin gave what he considers the definitive formulation of SRP:</p>
<blockquote>
<p>A module should be responsible to one, and only one, actor.</p>
</blockquote>
<p>Here, &quot;module&quot; means a source file (or, in object-oriented languages, a class). And &quot;actor&quot; means a group of stakeholders or users who want the system to change in the same way. This is the most precise version of the principle because it eliminates the ambiguity of &quot;reason to change&quot; — it is not about the number of methods, or the number of lines of code, or even the number of conceptual &quot;things&quot; a class does. It is about the number of different groups of people who might ask you to change that class.</p>
<p>This matters because when two different actors drive changes to the same module, those changes can collide. A change requested by the accounting department might accidentally break something the operations department depends on. SRP exists to prevent that collision.</p>
<h2 id="part-2-what-srp-is-not">Part 2: What SRP Is Not</h2>
<p>Before we go further, let us clear up the most common misconceptions. These misunderstandings cause real harm — they lead developers to either ignore the principle entirely or apply it so aggressively that their codebase becomes an unnavigable sea of tiny classes.</p>
<h3 id="misconception-1-a-class-should-do-only-one-thing">Misconception 1: &quot;A Class Should Do Only One Thing&quot;</h3>
<p>This is the most widespread misunderstanding. It reduces SRP to a vague platitude: what counts as &quot;one thing&quot;? A <code>UserService</code> that creates users, validates them, and sends welcome emails — is that one thing (&quot;user management&quot;) or three things? A <code>StringBuilder</code> that appends characters, inserts strings, and converts to a string — is that one thing or many?</p>
<p>The &quot;do one thing&quot; interpretation leads to two failure modes. Developers who interpret &quot;one thing&quot; broadly end up with God classes that do everything related to a concept. Developers who interpret &quot;one thing&quot; narrowly end up with anemic classes that each contain a single method and accomplish nothing on their own.</p>
<p>SRP is not about the number of things a class does. It is about the number of actors it serves. A <code>StringBuilder</code> does many things, but they all serve the same actor — the developer who needs to build strings. There is no scenario where the accounting department wants <code>StringBuilder.Append()</code> to work differently than the operations department does. One actor, one responsibility, no violation.</p>
<h3 id="misconception-2-a-class-should-have-only-one-method">Misconception 2: &quot;A Class Should Have Only One Method&quot;</h3>
<p>This is the extreme version of misconception one. Some developers, upon learning SRP, immediately start breaking every class into single-method classes. This is not what the principle asks for. A class can have dozens of methods and still follow SRP, as long as all those methods serve the same actor's needs.</p>
<p>Consider the .NET <code>List&lt;T&gt;</code> class. It has methods for adding, removing, sorting, searching, enumerating, copying, reversing, and converting. That is a lot of methods. But they all serve the same purpose — managing an in-memory collection — and they all change for the same reasons. Nobody from the sales department is going to ask you to change how <code>List&lt;T&gt;.Sort()</code> works while someone from the warehouse team asks you to change how <code>List&lt;T&gt;.Add()</code> works. One actor, one responsibility.</p>
<h3 id="misconception-3-srp-means-small-classes">Misconception 3: &quot;SRP Means Small Classes&quot;</h3>
<p>Class size is a consequence of good design, not a goal in itself. Sometimes following SRP produces small classes. Sometimes it produces large ones. A well-designed repository class might have twenty methods — one for each query the application needs — and still follow SRP if all those queries serve the same actor.</p>
<p>The danger of fetishizing small classes is that it leads to <strong>class explosion</strong> — a codebase with hundreds of tiny classes, each containing a single method, connected by a web of interfaces and dependency injection registrations. This kind of codebase is hard to navigate, hard to understand, and hard to change — the exact problems SRP was supposed to solve.</p>
<h3 id="misconception-4-srp-only-applies-to-classes">Misconception 4: &quot;SRP Only Applies to Classes&quot;</h3>
<p>Martin's final formulation uses the word &quot;module,&quot; which he clarifies to mean a source file. But the principle applies at every level of abstraction: methods, classes, namespaces, assemblies, services, and even entire systems. A microservice that handles both user authentication and order processing is violating SRP at the service level, just as surely as a class that mixes business logic and database access violates it at the class level.</p>
<p>In fact, some of the most impactful SRP violations occur at the architectural level. We will explore this in Part 10.</p>
<h2 id="part-3-recognizing-srp-violations-in-c-code">Part 3: Recognizing SRP Violations in C# Code</h2>
<p>Now let us get practical. How do you spot SRP violations in a real codebase? Here are the most reliable indicators.</p>
<h3 id="indicator-1-the-class-has-multiple-reasons-to-change">Indicator 1: The Class Has Multiple Reasons to Change</h3>
<p>This is the classic test. Look at a class and ask: &quot;What might cause me to change this class?&quot; If you can identify multiple independent axes of change, you have a likely SRP violation.</p>
<pre><code class="language-csharp">public class InvoiceService
{
    private readonly IDbConnection _db;
    private readonly IEmailSender _email;

    public InvoiceService(IDbConnection db, IEmailSender email)
    {
        _db = db;
        _email = email;
    }

    public Invoice CreateInvoice(Order order)
    {
        // Business logic: calculate line items, apply tax rules, compute totals
        var invoice = new Invoice
        {
            OrderId = order.Id,
            LineItems = order.Items.Select(i =&gt; new InvoiceLineItem
            {
                Description = i.ProductName,
                Quantity = i.Quantity,
                UnitPrice = i.UnitPrice,
                Total = i.Quantity * i.UnitPrice
            }).ToList()
        };

        invoice.Subtotal = invoice.LineItems.Sum(li =&gt; li.Total);
        invoice.Tax = invoice.Subtotal * 0.08m; // Hard-coded tax rate (a code smell)
        invoice.Total = invoice.Subtotal + invoice.Tax;

        return invoice;
    }

    public void SaveInvoice(Invoice invoice)
    {
        // Persistence logic: insert into database
        _db.Execute(
            &quot;INSERT INTO Invoices (OrderId, Subtotal, Tax, Total) VALUES (@OrderId, @Subtotal, @Tax, @Total)&quot;,
            invoice);

        foreach (var lineItem in invoice.LineItems)
        {
            _db.Execute(
                &quot;INSERT INTO InvoiceLineItems (InvoiceId, Description, Quantity, UnitPrice, Total) VALUES (@InvoiceId, @Description, @Quantity, @UnitPrice, @Total)&quot;,
                new { InvoiceId = invoice.Id, lineItem.Description, lineItem.Quantity, lineItem.UnitPrice, lineItem.Total });
        }
    }

    public void SendInvoiceEmail(Invoice invoice, string recipientEmail)
    {
        // Presentation logic: format the invoice as HTML for email
        var html = $&quot;&quot;&quot;
            &lt;h1&gt;Invoice #{invoice.Id}&lt;/h1&gt;
            &lt;table&gt;
                &lt;tr&gt;&lt;th&gt;Item&lt;/th&gt;&lt;th&gt;Qty&lt;/th&gt;&lt;th&gt;Price&lt;/th&gt;&lt;th&gt;Total&lt;/th&gt;&lt;/tr&gt;
                {string.Join(&quot;&quot;, invoice.LineItems.Select(li =&gt;
                    $&quot;&lt;tr&gt;&lt;td&gt;{li.Description}&lt;/td&gt;&lt;td&gt;{li.Quantity}&lt;/td&gt;&lt;td&gt;{li.UnitPrice:C}&lt;/td&gt;&lt;td&gt;{li.Total:C}&lt;/td&gt;&lt;/tr&gt;&quot;))}
            &lt;/table&gt;
            &lt;p&gt;&lt;strong&gt;Subtotal:&lt;/strong&gt; {invoice.Subtotal:C}&lt;/p&gt;
            &lt;p&gt;&lt;strong&gt;Tax:&lt;/strong&gt; {invoice.Tax:C}&lt;/p&gt;
            &lt;p&gt;&lt;strong&gt;Total:&lt;/strong&gt; {invoice.Total:C}&lt;/p&gt;
            &quot;&quot;&quot;;

        _email.Send(recipientEmail, $&quot;Invoice #{invoice.Id}&quot;, html);
    }
}
</code></pre>
<p>This class has three independent axes of change. The accounting team might ask you to change how tax is calculated. The DBA might ask you to change the database schema. The marketing team might ask you to change how the invoice email looks. Three actors, three responsibilities, one class — a clear SRP violation.</p>
<h3 id="indicator-2-unrelated-dependencies-in-the-constructor">Indicator 2: Unrelated Dependencies in the Constructor</h3>
<p>When a class's constructor requires a grab-bag of unrelated dependencies, that is a strong signal. The <code>InvoiceService</code> above depends on both <code>IDbConnection</code> (persistence infrastructure) and <code>IEmailSender</code> (communication infrastructure). These have nothing to do with each other.</p>
<p>A useful heuristic: if you can draw a line through your constructor parameters that divides them into two groups with no relationship, you probably have two responsibilities.</p>
<h3 id="indicator-3-methods-that-do-not-use-the-same-fields">Indicator 3: Methods That Do Not Use the Same Fields</h3>
<p>In a well-designed class, most methods operate on the same internal state. When you see methods that use completely disjoint sets of fields or dependencies, those methods probably belong in separate classes.</p>
<pre><code class="language-csharp">public class ReportGenerator
{
    private readonly IDbConnection _db;       // Used by data methods
    private readonly IPdfRenderer _renderer;   // Used by rendering methods
    private readonly IFileStorage _storage;    // Used by storage methods

    public DataTable FetchReportData(DateTime from, DateTime to)
    {
        // Uses _db only
        return _db.QueryDataTable(&quot;SELECT * FROM Sales WHERE Date BETWEEN @from AND @to&quot;,
            new { from, to });
    }

    public byte[] RenderToPdf(DataTable data, string title)
    {
        // Uses _renderer only
        return _renderer.Render(data, title);
    }

    public void SaveReport(byte[] pdf, string fileName)
    {
        // Uses _storage only
        _storage.Upload(pdf, fileName);
    }
}
</code></pre>
<p>Each method uses exactly one dependency and ignores the others. This is a sign that <code>ReportGenerator</code> is really three classes wearing a trench coat.</p>
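<p>Taking off the trench coat yields three focused classes. The sketch below defines minimal <code>IPdfRenderer</code> and <code>IFileStorage</code> interfaces inline so it stands alone; as in the original, <code>QueryDataTable</code> stands in for whatever data-access helper the codebase uses:</p>
<pre><code class="language-csharp">public interface IPdfRenderer { byte[] Render(DataTable data, string title); }
public interface IFileStorage { void Upload(byte[] content, string fileName); }

// Data access only: changes when the query or schema changes
public class ReportDataFetcher
{
    private readonly IDbConnection _db;
    public ReportDataFetcher(IDbConnection db) =&gt; _db = db;

    public DataTable FetchReportData(DateTime from, DateTime to) =&gt;
        _db.QueryDataTable(&quot;SELECT * FROM Sales WHERE Date BETWEEN @from AND @to&quot;,
            new { from, to });
}

// Rendering only: changes when the PDF layout changes
public class PdfReportRenderer
{
    private readonly IPdfRenderer _renderer;
    public PdfReportRenderer(IPdfRenderer renderer) =&gt; _renderer = renderer;

    public byte[] RenderToPdf(DataTable data, string title) =&gt; _renderer.Render(data, title);
}

// Storage only: changes when the upload target changes
public class ReportArchiver
{
    private readonly IFileStorage _storage;
    public ReportArchiver(IFileStorage storage) =&gt; _storage = storage;

    public void SaveReport(byte[] pdf, string fileName) =&gt; _storage.Upload(pdf, fileName);
}
</code></pre>
<p>Each class now has exactly one dependency and one reason to change; a thin coordinator can call the three in sequence when a complete report is needed.</p>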
<h3 id="indicator-4-the-god-class">Indicator 4: The God Class</h3>
<p>Sometimes the violation is not subtle at all. You open a file and it is 3,000 lines long, with fifty methods, twenty fields, and a name like <code>ApplicationManager</code> or <code>Utilities</code> or <code>Helper</code>. This is the God Class — a class that has accumulated every responsibility nobody knew where else to put.</p>
<p>God classes are the ultimate SRP violation, but they are also the easiest to recognize. The harder violations are the ones that look reasonable at first glance.</p>
<h3 id="indicator-5-merge-conflicts-in-the-same-file">Indicator 5: Merge Conflicts in the Same File</h3>
<p>This is a process-level indicator. If two developers working on unrelated features keep getting merge conflicts in the same file, that file probably has multiple responsibilities. Developer A is changing the tax calculation logic while Developer B is changing the email template, and they are both editing <code>InvoiceService.cs</code>. This is exactly the collision that SRP is designed to prevent.</p>
<h2 id="part-4-refactoring-toward-srp-a-step-by-step-example">Part 4: Refactoring Toward SRP — A Step-by-Step Example</h2>
<p>Let us take the <code>InvoiceService</code> from Part 3 and refactor it properly. The goal is not to create the maximum number of classes — it is to separate the responsibilities along actor boundaries.</p>
<h3 id="step-1-identify-the-actors">Step 1: Identify the Actors</h3>
<p>Who are the stakeholders for this code?</p>
<ol>
<li><strong>The finance team</strong> cares about how invoices are calculated — tax rules, discounts, rounding behavior.</li>
<li><strong>The infrastructure team</strong> (or DBA) cares about how invoices are stored — database schema, query performance, transactions.</li>
<li><strong>The communications team</strong> (or marketing) cares about how invoices are presented — email templates, formatting, branding.</li>
</ol>
<p>Three actors, three classes.</p>
<h3 id="step-2-extract-the-business-logic">Step 2: Extract the Business Logic</h3>
<pre><code class="language-csharp">public class InvoiceCalculator
{
    private readonly TaxRateProvider _taxRateProvider;

    public InvoiceCalculator(TaxRateProvider taxRateProvider)
    {
        _taxRateProvider = taxRateProvider;
    }

    public Invoice CreateInvoice(Order order)
    {
        var invoice = new Invoice
        {
            OrderId = order.Id,
            LineItems = order.Items.Select(i =&gt; new InvoiceLineItem
            {
                Description = i.ProductName,
                Quantity = i.Quantity,
                UnitPrice = i.UnitPrice,
                Total = i.Quantity * i.UnitPrice
            }).ToList()
        };

        invoice.Subtotal = invoice.LineItems.Sum(li =&gt; li.Total);
        invoice.Tax = invoice.Subtotal * _taxRateProvider.GetRate(order.ShippingAddress);
        invoice.Total = invoice.Subtotal + invoice.Tax;

        return invoice;
    }
}
</code></pre>
<p>This class has one actor: the finance team. The only reason to change it is if the business rules for calculating invoices change.</p>
<p>Notice that we also extracted the hard-coded tax rate into a <code>TaxRateProvider</code>. The magic number <code>0.08m</code> was a code smell — it mixed configuration with logic. Now the tax rate can vary by jurisdiction without touching the calculator.</p>
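<p>The article does not show <code>TaxRateProvider</code> itself. A minimal sketch, assuming a simple in-memory lookup keyed by region: a real implementation would read configuration or call a tax service, and <code>Address</code> here is a stand-in for whatever shipping-address type the domain defines.</p>
<pre><code class="language-csharp">public record Address(string Region);

public class TaxRateProvider
{
    // Illustrative rates only; do not use for real tax calculations
    private static readonly Dictionary&lt;string, decimal&gt; RatesByRegion = new()
    {
        [&quot;CA&quot;] = 0.0725m,
        [&quot;NY&quot;] = 0.08m,
        [&quot;OR&quot;] = 0.00m
    };

    public decimal GetRate(Address shippingAddress) =&gt;
        RatesByRegion.TryGetValue(shippingAddress.Region, out var rate)
            ? rate
            : 0.08m; // fall back to the previously hard-coded default
}
</code></pre>
<p>When a new jurisdiction appears, only this class (or the data behind it) changes; <code>InvoiceCalculator</code> stays untouched.</p>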
<h3 id="step-3-extract-the-persistence-logic">Step 3: Extract the Persistence Logic</h3>
<pre><code class="language-csharp">public class InvoiceRepository
{
    private readonly IDbConnection _db;

    public InvoiceRepository(IDbConnection db)
    {
        _db = db;
    }

    public void Save(Invoice invoice)
    {
        using var transaction = _db.BeginTransaction();
        try
        {
            // Returning the new id in the same batch keeps SCOPE_IDENTITY() in scope,
            // and the CAST avoids Dapper mapping its decimal result to int
            var invoiceId = _db.QuerySingle&lt;int&gt;(
                &quot;&quot;&quot;
                INSERT INTO Invoices (OrderId, Subtotal, Tax, Total, CreatedAt)
                VALUES (@OrderId, @Subtotal, @Tax, @Total, @CreatedAt);
                SELECT CAST(SCOPE_IDENTITY() AS int);
                &quot;&quot;&quot;,
                new { invoice.OrderId, invoice.Subtotal, invoice.Tax, invoice.Total, CreatedAt = DateTime.UtcNow },
                transaction);

            invoice.Id = invoiceId;

            foreach (var lineItem in invoice.LineItems)
            {
                _db.Execute(
                    &quot;&quot;&quot;
                    INSERT INTO InvoiceLineItems (InvoiceId, Description, Quantity, UnitPrice, Total)
                    VALUES (@InvoiceId, @Description, @Quantity, @UnitPrice, @Total)
                    &quot;&quot;&quot;,
                    new { InvoiceId = invoiceId, lineItem.Description, lineItem.Quantity, lineItem.UnitPrice, lineItem.Total },
                    transaction);
            }

            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }

    public Invoice? GetById(int id)
    {
        return _db.QuerySingleOrDefault&lt;Invoice&gt;(
            &quot;SELECT * FROM Invoices WHERE Id = @Id&quot;, new { Id = id });
    }
}
</code></pre>
<p>This class has one actor: the infrastructure team. The only reason to change it is if the database schema changes or if you need to optimize queries.</p>
<p>Notice we also added a transaction — something the original <code>InvoiceService</code> was missing. When responsibilities are separated, it becomes easier to get the details right for each one.</p>
<h3 id="step-4-extract-the-presentation-logic">Step 4: Extract the Presentation Logic</h3>
<pre><code class="language-csharp">public class InvoiceEmailSender
{
    private readonly IEmailSender _email;

    public InvoiceEmailSender(IEmailSender email)
    {
        _email = email;
    }

    public async Task SendAsync(Invoice invoice, string recipientEmail)
    {
        var html = BuildEmailHtml(invoice);
        await _email.SendAsync(recipientEmail, $&quot;Invoice #{invoice.Id}&quot;, html);
    }

    private static string BuildEmailHtml(Invoice invoice)
    {
        var rows = string.Join(&quot;&quot;, invoice.LineItems.Select(li =&gt;
            $&quot;&lt;tr&gt;&lt;td&gt;{li.Description}&lt;/td&gt;&lt;td&gt;{li.Quantity}&lt;/td&gt;&lt;td&gt;{li.UnitPrice:C}&lt;/td&gt;&lt;td&gt;{li.Total:C}&lt;/td&gt;&lt;/tr&gt;&quot;));

        return $&quot;&quot;&quot;
            &lt;!DOCTYPE html&gt;
            &lt;html&gt;
            &lt;body style=&quot;font-family: Arial, sans-serif;&quot;&gt;
                &lt;h1&gt;Invoice #{invoice.Id}&lt;/h1&gt;
                &lt;table border=&quot;1&quot; cellpadding=&quot;8&quot; cellspacing=&quot;0&quot;&gt;
                    &lt;thead&gt;
                        &lt;tr&gt;&lt;th&gt;Item&lt;/th&gt;&lt;th&gt;Qty&lt;/th&gt;&lt;th&gt;Price&lt;/th&gt;&lt;th&gt;Total&lt;/th&gt;&lt;/tr&gt;
                    &lt;/thead&gt;
                    &lt;tbody&gt;{rows}&lt;/tbody&gt;
                &lt;/table&gt;
                &lt;p&gt;&lt;strong&gt;Subtotal:&lt;/strong&gt; {invoice.Subtotal:C}&lt;/p&gt;
                &lt;p&gt;&lt;strong&gt;Tax:&lt;/strong&gt; {invoice.Tax:C}&lt;/p&gt;
                &lt;p&gt;&lt;strong&gt;Total:&lt;/strong&gt; {invoice.Total:C}&lt;/p&gt;
            &lt;/body&gt;
            &lt;/html&gt;
            &quot;&quot;&quot;;
    }
}
</code></pre>
<p>One actor: the communications/marketing team. The only reason to change this class is if the email format or branding changes.</p>
<h3 id="step-5-compose-them-together">Step 5: Compose Them Together</h3>
<p>Now we need something to orchestrate these three classes. This is a legitimate responsibility of its own — coordinating the workflow of creating, saving, and sending an invoice.</p>
<pre><code class="language-csharp">public class InvoiceWorkflow
{
    private readonly InvoiceCalculator _calculator;
    private readonly InvoiceRepository _repository;
    private readonly InvoiceEmailSender _emailSender;
    private readonly ILogger&lt;InvoiceWorkflow&gt; _logger;

    public InvoiceWorkflow(
        InvoiceCalculator calculator,
        InvoiceRepository repository,
        InvoiceEmailSender emailSender,
        ILogger&lt;InvoiceWorkflow&gt; logger)
    {
        _calculator = calculator;
        _repository = repository;
        _emailSender = emailSender;
        _logger = logger;
    }

    public async Task ProcessOrderAsync(Order order, string customerEmail)
    {
        _logger.LogInformation(&quot;Creating invoice for order {OrderId}&quot;, order.Id);

        var invoice = _calculator.CreateInvoice(order);
        _repository.Save(invoice);

        _logger.LogInformation(&quot;Invoice {InvoiceId} saved for order {OrderId}&quot;, invoice.Id, order.Id);

        await _emailSender.SendAsync(invoice, customerEmail);

        _logger.LogInformation(&quot;Invoice email sent to {Email}&quot;, customerEmail);
    }
}
</code></pre>
<p>Is this class violating SRP? It depends on three other classes, after all. But look at what it <em>does</em> — it simply calls the three collaborators in sequence. It contains no business logic, no persistence logic, and no presentation logic. Its single responsibility is <em>orchestration</em>, and it serves a single actor: whoever owns the business process of invoicing. If the sequence of steps changes (maybe invoices need approval before sending), this is the only class that changes.</p>
<h3 id="the-result">The Result</h3>
<p>We went from one class with three responsibilities to four classes, each with one:</p>
<table>
<thead>
<tr>
<th>Class</th>
<th>Responsibility</th>
<th>Actor</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>InvoiceCalculator</code></td>
<td>Business rules for invoice calculation</td>
<td>Finance team</td>
</tr>
<tr>
<td><code>InvoiceRepository</code></td>
<td>Database persistence</td>
<td>Infrastructure/DBA</td>
</tr>
<tr>
<td><code>InvoiceEmailSender</code></td>
<td>Email formatting and delivery</td>
<td>Marketing/Communications</td>
</tr>
<tr>
<td><code>InvoiceWorkflow</code></td>
<td>Process orchestration</td>
<td>Business process owner</td>
</tr>
</tbody>
</table>
<p>Each class can change independently. The finance team can add discount logic to <code>InvoiceCalculator</code> without touching the email template. The DBA can migrate from SQL Server to PostgreSQL by changing only <code>InvoiceRepository</code>. The marketing team can redesign the email in <code>InvoiceEmailSender</code> without risking a broken tax calculation.</p>
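<p>In an ASP.NET Core application these four classes compose through the built-in container. A sketch of the registrations follows; the lifetimes are a judgment call, and scoped is a sensible default for anything that touches a per-request database connection:</p>
<pre><code class="language-csharp">// Program.cs
builder.Services.AddScoped&lt;TaxRateProvider&gt;();
builder.Services.AddScoped&lt;InvoiceCalculator&gt;();
builder.Services.AddScoped&lt;InvoiceRepository&gt;();
builder.Services.AddScoped&lt;InvoiceEmailSender&gt;();
builder.Services.AddScoped&lt;InvoiceWorkflow&gt;();
</code></pre>
<p>Callers depend only on <code>InvoiceWorkflow</code>; the container wires up the other three, so adding a collaborator later changes the registrations and the workflow's constructor, and nothing else.</p>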
<h2 id="part-5-srp-at-the-method-level">Part 5: SRP at the Method Level</h2>
<p>SRP does not only apply to classes. It applies to methods too, and this is often where the most impactful improvements can be made.</p>
<h3 id="the-and-test">The &quot;And&quot; Test</h3>
<p>Read the name of a method. If you have to use the word &quot;and&quot; to describe what it does, it probably has multiple responsibilities.</p>
<pre><code class="language-csharp">// Bad: this method validates AND saves AND notifies
public async Task ValidateAndSaveAndNotifyAsync(User user)
{
    // Validation
    if (string.IsNullOrWhiteSpace(user.Email))
        throw new ValidationException(&quot;Email is required&quot;);
    if (user.Email.Length &gt; 255)
        throw new ValidationException(&quot;Email too long&quot;);
    if (!user.Email.Contains('@'))
        throw new ValidationException(&quot;Invalid email format&quot;);

    // Persistence
    await _db.ExecuteAsync(&quot;INSERT INTO Users (Email, Name) VALUES (@Email, @Name)&quot;, user);

    // Notification
    await _emailSender.SendAsync(user.Email, &quot;Welcome!&quot;, &quot;Thanks for signing up!&quot;);
}
</code></pre>
<p>Better:</p>
<pre><code class="language-csharp">public async Task RegisterUserAsync(User user)
{
    ValidateUser(user);
    await SaveUserAsync(user);
    await SendWelcomeEmailAsync(user);
}

private static void ValidateUser(User user)
{
    if (string.IsNullOrWhiteSpace(user.Email))
        throw new ValidationException(&quot;Email is required&quot;);
    if (user.Email.Length &gt; 255)
        throw new ValidationException(&quot;Email too long&quot;);
    if (!user.Email.Contains('@'))
        throw new ValidationException(&quot;Invalid email format&quot;);
}

private async Task SaveUserAsync(User user)
{
    await _db.ExecuteAsync(&quot;INSERT INTO Users (Email, Name) VALUES (@Email, @Name)&quot;, user);
}

private async Task SendWelcomeEmailAsync(User user)
{
    await _emailSender.SendAsync(user.Email, &quot;Welcome!&quot;, &quot;Thanks for signing up!&quot;);
}
</code></pre>
<p>Each private method does one thing. The public method composes them. The code reads like a story.</p>
<h3 id="the-abstraction-level-test">The Abstraction Level Test</h3>
<p>A method should operate at a single level of abstraction. When a method mixes high-level orchestration with low-level details, it becomes harder to understand and harder to change.</p>
<pre><code class="language-csharp">// Bad: mixes high-level workflow with low-level string manipulation
public async Task&lt;string&gt; GenerateReportAsync(int year, int quarter)
{
    var data = await _repository.GetSalesDataAsync(year, quarter);

    // Suddenly we're doing low-level CSV formatting
    var sb = new StringBuilder();
    sb.AppendLine(&quot;Product,Revenue,Units,AvgPrice&quot;);
    foreach (var row in data)
    {
        sb.Append(row.Product.Replace(&quot;,&quot;, &quot;\\,&quot;));
        sb.Append(',');
        sb.Append(row.Revenue.ToString(&quot;F2&quot;, CultureInfo.InvariantCulture));
        sb.Append(',');
        sb.Append(row.Units);
        sb.Append(',');
        sb.AppendLine((row.Revenue / row.Units).ToString(&quot;F2&quot;, CultureInfo.InvariantCulture));
    }

    var fileName = $&quot;sales-{year}-Q{quarter}.csv&quot;;
    await _storage.UploadAsync(fileName, Encoding.UTF8.GetBytes(sb.ToString()));

    return fileName;
}
</code></pre>
<p>Better:</p>
<pre><code class="language-csharp">public async Task&lt;string&gt; GenerateReportAsync(int year, int quarter)
{
    var data = await _repository.GetSalesDataAsync(year, quarter);
    var csv = FormatAsCsv(data);
    var fileName = $&quot;sales-{year}-Q{quarter}.csv&quot;;
    await _storage.UploadAsync(fileName, Encoding.UTF8.GetBytes(csv));
    return fileName;
}

private static string FormatAsCsv(IReadOnlyList&lt;SalesRow&gt; data)
{
    var sb = new StringBuilder();
    sb.AppendLine(&quot;Product,Revenue,Units,AvgPrice&quot;);
    foreach (var row in data)
    {
        sb.Append(EscapeCsvField(row.Product));
        sb.Append(',');
        sb.Append(row.Revenue.ToString(&quot;F2&quot;, CultureInfo.InvariantCulture));
        sb.Append(',');
        sb.Append(row.Units);
        sb.Append(',');
        sb.AppendLine((row.Revenue / row.Units).ToString(&quot;F2&quot;, CultureInfo.InvariantCulture));
    }
    return sb.ToString();
}

private static string EscapeCsvField(string field)
{
    if (field.Contains(',') || field.Contains('&quot;') || field.Contains('\n'))
        return $&quot;\&quot;{field.Replace(&quot;\&quot;&quot;, &quot;\&quot;\&quot;&quot;)}\&quot;&quot;;
    return field;
}
</code></pre>
<p>Now the public method reads at one level of abstraction — fetch, format, upload, return — and the details are pushed into focused helper methods.</p>
<h2 id="part-6-srp-in-asp.net-core-controllers-services-and-middleware">Part 6: SRP in ASP.NET Core — Controllers, Services, and Middleware</h2>
<p>ASP.NET Core gives you a layered architecture out of the box: controllers (or minimal API endpoints) handle HTTP, services handle business logic, and middleware handles cross-cutting concerns. This layering naturally supports SRP — if you use it correctly.</p>
<h3 id="fat-controllers-the-most-common-asp.net-srp-violation">Fat Controllers: The Most Common ASP.NET SRP Violation</h3>
<p>A &quot;fat controller&quot; is a controller that contains business logic, validation, database access, and HTTP response formatting all in one action method. This is extremely common, especially in tutorials and quick prototypes that never get cleaned up.</p>
<pre><code class="language-csharp">// Bad: fat controller action
[HttpPost(&quot;orders&quot;)]
public async Task&lt;IActionResult&gt; CreateOrder([FromBody] CreateOrderRequest request)
{
    // Validation
    if (request.Items == null || request.Items.Count == 0)
        return BadRequest(&quot;Order must have at least one item&quot;);

    foreach (var item in request.Items)
    {
        if (item.Quantity &lt;= 0)
            return BadRequest($&quot;Invalid quantity for {item.ProductId}&quot;);
    }

    // Business logic: check inventory
    foreach (var item in request.Items)
    {
        var product = await _db.Products.FindAsync(item.ProductId);
        if (product == null)
            return NotFound($&quot;Product {item.ProductId} not found&quot;);
        if (product.Stock &lt; item.Quantity)
            return Conflict($&quot;Insufficient stock for {product.Name}&quot;);
    }

    // More business logic: calculate total
    decimal total = 0;
    var orderItems = new List&lt;OrderItem&gt;();
    foreach (var item in request.Items)
    {
        var product = await _db.Products.FindAsync(item.ProductId);
        var orderItem = new OrderItem
        {
            ProductId = item.ProductId,
            Quantity = item.Quantity,
            UnitPrice = product!.Price,
            Total = item.Quantity * product.Price
        };
        orderItems.Add(orderItem);
        total += orderItem.Total;

        // Side effect: decrement stock
        product.Stock -= item.Quantity;
    }

    // Persistence
    var order = new Order
    {
        CustomerId = request.CustomerId,
        Items = orderItems,
        Total = total,
        CreatedAt = DateTime.UtcNow
    };
    _db.Orders.Add(order);
    await _db.SaveChangesAsync();

    // Notification
    await _emailSender.SendAsync(request.CustomerEmail,
        &quot;Order Confirmation&quot;,
        $&quot;Your order #{order.Id} for {total:C} has been placed.&quot;);

    return CreatedAtAction(nameof(GetOrder), new { id = order.Id }, order);
}
</code></pre>
<p>This single action method handles: HTTP request validation, business rule validation (inventory check), price calculation, stock management, database persistence, email notification, and HTTP response formatting. That is seven distinct responsibilities in one method.</p>
<h3 id="the-refactored-version">The Refactored Version</h3>
<pre><code class="language-csharp">// Controller: only HTTP concerns
[HttpPost(&quot;orders&quot;)]
public async Task&lt;IActionResult&gt; CreateOrder([FromBody] CreateOrderRequest request)
{
    var result = await _orderService.PlaceOrderAsync(request);

    return result.Match&lt;IActionResult&gt;(
        success: order =&gt; CreatedAtAction(nameof(GetOrder), new { id = order.Id }, order),
        validationError: errors =&gt; BadRequest(errors),
        notFound: message =&gt; NotFound(message),
        conflict: message =&gt; Conflict(message));
}
</code></pre>
<pre><code class="language-csharp">// Service: business logic orchestration
public class OrderService
{
    private readonly IOrderValidator _validator;
    private readonly IInventoryService _inventory;
    private readonly IPricingService _pricing;
    private readonly IOrderRepository _repository;
    private readonly IOrderNotifier _notifier;
    private readonly ILogger&lt;OrderService&gt; _logger;

    public OrderService(
        IOrderValidator validator,
        IInventoryService inventory,
        IPricingService pricing,
        IOrderRepository repository,
        IOrderNotifier notifier,
        ILogger&lt;OrderService&gt; logger)
    {
        _validator = validator;
        _inventory = inventory;
        _pricing = pricing;
        _repository = repository;
        _notifier = notifier;
        _logger = logger;
    }

    public async Task&lt;OrderResult&gt; PlaceOrderAsync(CreateOrderRequest request)
    {
        var validationResult = _validator.Validate(request);
        if (!validationResult.IsValid)
            return OrderResult.ValidationError(validationResult.Errors);

        var availabilityResult = await _inventory.CheckAvailabilityAsync(request.Items);
        if (!availabilityResult.IsAvailable)
            return OrderResult.Conflict(availabilityResult.Message);

        var pricedItems = await _pricing.CalculateAsync(request.Items);
        var order = await _repository.CreateAsync(request.CustomerId, pricedItems);
        await _inventory.ReserveStockAsync(order.Items);

        _logger.LogInformation(&quot;Order {OrderId} placed for customer {CustomerId}&quot;,
            order.Id, request.CustomerId);

        // Fire-and-forget notification: a discarded task swallows exceptions, so prefer a message queue in production
        _ = _notifier.SendConfirmationAsync(order, request.CustomerEmail);

        return OrderResult.Success(order);
    }
}
</code></pre>
<p>Now the controller knows nothing about business rules. The <code>OrderService</code> orchestrates the workflow but delegates each responsibility to a focused collaborator. The validator, inventory service, pricing service, repository, and notifier each have a single responsibility.</p>
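<p>The <code>OrderService</code> returns an <code>OrderResult</code> that the controller pattern-matches on, but the article never defines that type. Here is one hedged sketch of the shape it could take; the factory names mirror the calls above, while the internals are purely illustrative:</p>
<pre><code class="language-csharp">public sealed class OrderResult
{
    private readonly string _kind;
    private readonly object? _payload;

    private OrderResult(string kind, object? payload) =&gt; (_kind, _payload) = (kind, payload);

    public static OrderResult Success(object order) =&gt; new(&quot;success&quot;, order);
    public static OrderResult ValidationError(IReadOnlyList&lt;string&gt; errors) =&gt; new(&quot;invalid&quot;, errors);
    public static OrderResult NotFound(string message) =&gt; new(&quot;notfound&quot;, message);
    public static OrderResult Conflict(string message) =&gt; new(&quot;conflict&quot;, message);

    // Exactly one branch runs, so every caller is forced to handle every outcome.
    public T Match&lt;T&gt;(
        Func&lt;object, T&gt; success,
        Func&lt;IReadOnlyList&lt;string&gt;, T&gt; validationError,
        Func&lt;string, T&gt; notFound,
        Func&lt;string, T&gt; conflict) =&gt; _kind switch
    {
        &quot;success&quot; =&gt; success(_payload!),
        &quot;invalid&quot; =&gt; validationError((IReadOnlyList&lt;string&gt;)_payload!),
        &quot;notfound&quot; =&gt; notFound((string)_payload!),
        _ =&gt; conflict((string)_payload!)
    };
}
</code></pre>
<p>A production version would likely be generic or use a discriminated-union library, but the division of labor is the point: the service reports outcomes, and only the controller decides which HTTP status each outcome maps to.</p>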
<h3 id="minimal-apis-and-srp">Minimal APIs and SRP</h3>
<p>With .NET minimal APIs, the temptation to put everything in a lambda is even stronger:</p>
<pre><code class="language-csharp">// Bad: everything in a lambda
app.MapPost(&quot;/orders&quot;, async (CreateOrderRequest request, AppDbContext db, IEmailSender email) =&gt;
{
    // 50 lines of mixed concerns...
});
</code></pre>
<p>The fix is the same — extract a service:</p>
<pre><code class="language-csharp">app.MapPost(&quot;/orders&quot;, async (CreateOrderRequest request, OrderService service) =&gt;
{
    var result = await service.PlaceOrderAsync(request);
    return result.Match(
        success: order =&gt; Results.Created($&quot;/orders/{order.Id}&quot;, order),
        validationError: errors =&gt; Results.BadRequest(errors),
        notFound: message =&gt; Results.NotFound(message),
        conflict: message =&gt; Results.Conflict(message));
});
</code></pre>
<h3 id="middleware-and-cross-cutting-concerns">Middleware and Cross-Cutting Concerns</h3>
<p>ASP.NET Core middleware is a natural home for cross-cutting concerns that should not leak into controllers or services. Each middleware should handle exactly one concern:</p>
<pre><code class="language-csharp">// Good: each middleware has a single responsibility
app.UseExceptionHandler(&quot;/error&quot;);   // Error handling
app.UseHttpsRedirection();           // Transport security
app.UseAuthentication();             // Identity verification
app.UseAuthorization();              // Access control
app.UseRateLimiter();                // Traffic management
app.UseResponseCaching();            // Performance optimization
</code></pre>
<p>If you find yourself writing a single middleware that handles both logging and authentication, split it in two. The middleware pipeline is designed for composition.</p>
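<p>Writing your own single-concern middleware follows the same rule. This sketch (a hypothetical correlation-ID middleware; the header name is an assumption) does exactly one thing, which keeps it trivially composable with the pipeline above:</p>
<pre><code class="language-csharp">// One concern only: make sure every request and response carries a correlation ID.
public class CorrelationIdMiddleware
{
    private const string HeaderName = &quot;X-Correlation-Id&quot;;
    private readonly RequestDelegate _next;

    public CorrelationIdMiddleware(RequestDelegate next) =&gt; _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Reuse the caller's ID when present; otherwise mint a new one.
        var id = context.Request.Headers.TryGetValue(HeaderName, out var existing)
            ? existing.ToString()
            : Guid.NewGuid().ToString(&quot;N&quot;);

        context.Response.Headers[HeaderName] = id;
        await _next(context);
    }
}

// Registered like any other single-purpose stage:
// app.UseMiddleware&lt;CorrelationIdMiddleware&gt;();
</code></pre>
<p>Logging which requests arrived is a different concern and would be a different middleware.</p>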
<h2 id="part-7-srp-and-dependency-injection">Part 7: SRP and Dependency Injection</h2>
<p>Dependency injection and SRP are natural allies. When each class has a single responsibility, its dependencies are few, focused, and easy to mock. When SRP is violated, dependencies multiply and testing becomes painful.</p>
<h3 id="the-constructor-over-injection-smell">The Constructor Over-Injection Smell</h3>
<p>If a class requires more than four or five constructor dependencies, that is a strong signal of an SRP violation. The cure is not to use a service locator or property injection — it is to split the class.</p>
<pre><code class="language-csharp">// Smells like an SRP violation
public class OrderProcessor
{
    public OrderProcessor(
        IOrderValidator validator,
        IInventoryChecker inventory,
        IPricingEngine pricing,
        IDiscountCalculator discounts,
        ITaxCalculator tax,
        IShippingCalculator shipping,
        IPaymentGateway payment,
        IOrderRepository repository,
        IEmailSender email,
        ISmsNotifier sms,
        IAuditLogger audit,
        IAnalyticsTracker analytics)
    {
        // 12 dependencies = multiple responsibilities
    }
}
</code></pre>
<p>Twelve dependencies means this class is doing too much. Natural groupings emerge: pricing (pricing + discounts + tax + shipping), payment processing, persistence, notification (email + SMS), and telemetry (audit + analytics). Each group should become its own focused class.</p>
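<p>One hedged way to regroup, with illustrative names: collapse the four pricing-related dependencies into a single pipeline whose only job is turning a subtotal into a final amount. The orchestrator then depends on the pipeline, not on its parts.</p>
<pre><code class="language-csharp">public interface IDiscountCalculator { decimal DiscountFor(decimal subtotal); }
public interface ITaxCalculator { decimal TaxOn(decimal amount); }
public interface IShippingCalculator { decimal ShippingFor(decimal amount); }

// One focused collaborator replaces four constructor parameters on the orchestrator.
public class PricingPipeline
{
    private readonly IDiscountCalculator _discounts;
    private readonly ITaxCalculator _tax;
    private readonly IShippingCalculator _shipping;

    public PricingPipeline(IDiscountCalculator discounts, ITaxCalculator tax, IShippingCalculator shipping) =&gt;
        (_discounts, _tax, _shipping) = (discounts, tax, shipping);

    // Subtotal in, grand total out: the one responsibility this class owns.
    public decimal GrandTotal(decimal subtotal)
    {
        var discounted = subtotal - _discounts.DiscountFor(subtotal);
        return discounted + _tax.TaxOn(discounted) + _shipping.ShippingFor(discounted);
    }
}
</code></pre>
<p>Apply the same treatment to notification (email + SMS) and telemetry (audit + analytics), and the <code>OrderProcessor</code> constructor shrinks to four or five focused dependencies.</p>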
<h3 id="di-registration-as-documentation">DI Registration as Documentation</h3>
<p>Your <code>Program.cs</code> (or wherever you register services) is a map of your application's responsibilities. When it is well-organized, you can read it and understand the architecture:</p>
<pre><code class="language-csharp">// Each section registers classes for one responsibility area
// --- Business Logic ---
builder.Services.AddScoped&lt;InvoiceCalculator&gt;();
builder.Services.AddScoped&lt;TaxRateProvider&gt;();
builder.Services.AddScoped&lt;DiscountEngine&gt;();

// --- Persistence ---
builder.Services.AddScoped&lt;IInvoiceRepository, SqlInvoiceRepository&gt;();
builder.Services.AddScoped&lt;IOrderRepository, SqlOrderRepository&gt;();

// --- Notifications ---
builder.Services.AddScoped&lt;IEmailSender, SmtpEmailSender&gt;();
builder.Services.AddScoped&lt;InvoiceEmailSender&gt;();

// --- Orchestration ---
builder.Services.AddScoped&lt;InvoiceWorkflow&gt;();
builder.Services.AddScoped&lt;OrderService&gt;();
</code></pre>
<p>If you cannot organize your registrations into coherent groups, your classes probably do not have coherent responsibilities.</p>
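<p>A common way to make that grouping enforceable is one registration extension method per responsibility area. The sketch below reuses the types registered above; the extension method names are hypothetical:</p>
<pre><code class="language-csharp">public static class ServiceCollectionExtensions
{
    // Each method owns the registrations for exactly one responsibility area.
    public static IServiceCollection AddInvoicing(this IServiceCollection services)
    {
        services.AddScoped&lt;InvoiceCalculator&gt;();
        services.AddScoped&lt;TaxRateProvider&gt;();
        services.AddScoped&lt;DiscountEngine&gt;();
        return services;
    }

    public static IServiceCollection AddPersistence(this IServiceCollection services)
    {
        services.AddScoped&lt;IInvoiceRepository, SqlInvoiceRepository&gt;();
        services.AddScoped&lt;IOrderRepository, SqlOrderRepository&gt;();
        return services;
    }
}

// Program.cs then reads like a table of contents:
// builder.Services.AddInvoicing().AddPersistence();
</code></pre>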
<h2 id="part-8-srp-and-testing">Part 8: SRP and Testing</h2>
<p>Perhaps the most practical argument for SRP is that it makes testing dramatically easier. When a class has one responsibility, its tests exercise exactly one concern: the setup is simple, the assertions are focused, and the suite is easy to maintain.</p>
<h3 id="testing-a-class-with-multiple-responsibilities">Testing a Class with Multiple Responsibilities</h3>
<p>Consider testing the original <code>InvoiceService</code> from Part 3. To test the <code>CreateInvoice</code> method (business logic), you need to set up an <code>IDbConnection</code> and an <code>IEmailSender</code> — even though the method does not use them. This is a sign that the class has dependencies it should not have.</p>
<pre><code class="language-csharp">// Painful: unnecessary mocking
[Fact]
public void CreateInvoice_CalculatesCorrectTotal()
{
    // We have to create these even though CreateInvoice doesn't use them
    var mockDb = new Mock&lt;IDbConnection&gt;();
    var mockEmail = new Mock&lt;IEmailSender&gt;();

    var service = new InvoiceService(mockDb.Object, mockEmail.Object);

    var order = new Order
    {
        Id = 1,
        Items =
        [
            new OrderItem { ProductName = &quot;Widget&quot;, Quantity = 3, UnitPrice = 10.00m }
        ]
    };

    var invoice = service.CreateInvoice(order);

    Assert.Equal(30.00m, invoice.Subtotal);
    Assert.Equal(2.40m, invoice.Tax);
    Assert.Equal(32.40m, invoice.Total);
}
</code></pre>
<h3 id="testing-after-refactoring">Testing After Refactoring</h3>
<p>After splitting into <code>InvoiceCalculator</code>, the test is clean:</p>
<pre><code class="language-csharp">[Fact]
public void CreateInvoice_CalculatesCorrectTotal()
{
    var taxProvider = new FakeTaxRateProvider(rate: 0.08m);
    var calculator = new InvoiceCalculator(taxProvider);

    var order = new Order
    {
        Id = 1,
        Items =
        [
            new OrderItem { ProductName = &quot;Widget&quot;, Quantity = 3, UnitPrice = 10.00m }
        ]
    };

    var invoice = calculator.CreateInvoice(order);

    Assert.Equal(30.00m, invoice.Subtotal);
    Assert.Equal(2.40m, invoice.Tax);
    Assert.Equal(32.40m, invoice.Total);
}
</code></pre>
<p>No mock database. No mock email sender. Just the class under test and its actual dependency. The test is shorter, more readable, and more resilient to changes in unrelated parts of the system.</p>
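<p>The <code>FakeTaxRateProvider</code> used above is just a hand-rolled test double; when the abstraction is this small, you do not even need a mocking library. One plausible shape (the interface itself is an assumption, since the article does not show it):</p>
<pre><code class="language-csharp">public interface ITaxRateProvider
{
    decimal GetRate();
}

// Always answers with a fixed rate: deterministic and dependency-free.
public class FakeTaxRateProvider : ITaxRateProvider
{
    private readonly decimal _rate;

    public FakeTaxRateProvider(decimal rate) =&gt; _rate = rate;

    public decimal GetRate() =&gt; _rate;
}
</code></pre>
<p>Hand-rolled fakes like this tend to age well: they survive interface refactorings with a one-line change and keep the test body free of setup noise.</p>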
<h3 id="testing-the-repository-in-isolation">Testing the Repository in Isolation</h3>
<pre><code class="language-csharp">[Fact]
public async Task Save_InsertsInvoiceAndLineItems()
{
    using var connection = new SqliteConnection(&quot;Data Source=:memory:&quot;);
    await connection.OpenAsync();
    await CreateTablesAsync(connection);

    var repository = new InvoiceRepository(connection);
    var invoice = new Invoice
    {
        OrderId = 42,
        Subtotal = 100m,
        Tax = 8m,
        Total = 108m,
        LineItems =
        [
            new InvoiceLineItem
            {
                Description = &quot;Widget&quot;,
                Quantity = 10,
                UnitPrice = 10m,
                Total = 100m
            }
        ]
    };

    repository.Save(invoice);

    var saved = await connection.QuerySingleAsync&lt;int&gt;(&quot;SELECT COUNT(*) FROM Invoices&quot;);
    Assert.Equal(1, saved);

    var lineItems = await connection.QuerySingleAsync&lt;int&gt;(&quot;SELECT COUNT(*) FROM InvoiceLineItems&quot;);
    Assert.Equal(1, lineItems);
}
</code></pre>
<p>This test exercises only persistence logic. It does not need to worry about tax rates or email templates. If the test fails, you know the problem is in the persistence code.</p>
<h3 id="testing-the-email-sender-in-isolation">Testing the Email Sender in Isolation</h3>
<pre><code class="language-csharp">[Fact]
public async Task SendAsync_FormatsInvoiceAsHtml()
{
    var mockEmail = new Mock&lt;IEmailSender&gt;();
    string? capturedBody = null;
    mockEmail
        .Setup(e =&gt; e.SendAsync(It.IsAny&lt;string&gt;(), It.IsAny&lt;string&gt;(), It.IsAny&lt;string&gt;()))
        .Callback&lt;string, string, string&gt;((to, subject, body) =&gt; capturedBody = body)
        .Returns(Task.CompletedTask);

    var sender = new InvoiceEmailSender(mockEmail.Object);
    var invoice = new Invoice
    {
        Id = 99,
        Subtotal = 50m,
        Tax = 4m,
        Total = 54m,
        LineItems = [new InvoiceLineItem { Description = &quot;Gadget&quot;, Quantity = 5, UnitPrice = 10m, Total = 50m }]
    };

    await sender.SendAsync(invoice, &quot;customer@example.com&quot;);

    Assert.NotNull(capturedBody);
    Assert.Contains(&quot;Invoice #99&quot;, capturedBody);
    Assert.Contains(&quot;Gadget&quot;, capturedBody);
}
</code></pre>
<p>Clean, focused, fast.</p>
<h3 id="the-testing-pyramid-and-srp">The Testing Pyramid and SRP</h3>
<p>SRP aligns naturally with the testing pyramid. When responsibilities are separated:</p>
<ul>
<li><strong>Unit tests</strong> cover individual classes (business logic, formatting, validation) with zero infrastructure dependencies. These are fast and numerous.</li>
<li><strong>Integration tests</strong> cover collaborations between classes (repository + real database, email sender + SMTP stub). These are slower but fewer.</li>
<li><strong>End-to-end tests</strong> cover complete workflows (place an order, verify the email). These are slowest and fewest.</li>
</ul>
<p>Without SRP, every test becomes an integration test because you cannot isolate any single concern. The testing pyramid collapses into a testing rectangle — slow, expensive, and brittle.</p>
<h2 id="part-9-srp-in-real-world.net-patterns">Part 9: SRP in Real-World .NET Patterns</h2>
<p>Let us examine how SRP manifests in several patterns you encounter daily in .NET development.</p>
<h3 id="the-repository-pattern">The Repository Pattern</h3>
<p>The repository pattern is a direct application of SRP: separate data access from business logic. A repository is responsible to one actor — whoever manages the data store.</p>
<pre><code class="language-csharp">public interface IProductRepository
{
    Task&lt;Product?&gt; GetByIdAsync(int id);
    Task&lt;IReadOnlyList&lt;Product&gt;&gt; GetByCategoryAsync(string category);
    Task&lt;IReadOnlyList&lt;Product&gt;&gt; SearchAsync(string query, int skip, int take);
    Task AddAsync(Product product);
    Task UpdateAsync(Product product);
    Task DeleteAsync(int id);
}
</code></pre>
<p>All methods in this interface relate to the same concern: storing and retrieving products. The interface does not include methods for calculating prices, generating reports, or sending notifications. Those belong elsewhere.</p>
<p>A common SRP violation in repositories is adding query methods that serve different actors:</p>
<pre><code class="language-csharp">// Bad: the repository is serving too many actors
public interface IProductRepository
{
    // Used by the catalog service (customer-facing)
    Task&lt;IReadOnlyList&lt;Product&gt;&gt; GetActiveByCategoryAsync(string category);

    // Used by the admin dashboard (internal)
    Task&lt;IReadOnlyList&lt;Product&gt;&gt; GetAllIncludingDeletedAsync();

    // Used by the analytics service (reporting)
    Task&lt;ProductSalesReport&gt; GetSalesReportAsync(DateTime from, DateTime to);

    // Used by the inventory service (operations)
    Task&lt;IReadOnlyList&lt;Product&gt;&gt; GetLowStockAsync(int threshold);
}
</code></pre>
<p>The <code>GetSalesReportAsync</code> method does not belong here — it serves the analytics/reporting actor, not the data access actor. It should live in a separate <code>IProductReportingRepository</code> or a dedicated reporting service.</p>
<h3 id="the-mediatr-cqrs-pattern">The MediatR / CQRS Pattern</h3>
<p>The MediatR library and the Command Query Responsibility Segregation (CQRS) pattern are built on SRP. Each command handler has exactly one responsibility: handling one specific command.</p>
<pre><code class="language-csharp">public record CreateOrderCommand(int CustomerId, List&lt;OrderItemDto&gt; Items) : IRequest&lt;OrderResult&gt;;

public class CreateOrderHandler : IRequestHandler&lt;CreateOrderCommand, OrderResult&gt;
{
    private readonly IOrderRepository _repository;
    private readonly IPricingService _pricing;
    private readonly ILogger&lt;CreateOrderHandler&gt; _logger;

    public CreateOrderHandler(
        IOrderRepository repository,
        IPricingService pricing,
        ILogger&lt;CreateOrderHandler&gt; logger)
    {
        _repository = repository;
        _pricing = pricing;
        _logger = logger;
    }

    public async Task&lt;OrderResult&gt; Handle(CreateOrderCommand request, CancellationToken cancellationToken)
    {
        var pricedItems = await _pricing.CalculateAsync(request.Items);
        var order = await _repository.CreateAsync(request.CustomerId, pricedItems);

        _logger.LogInformation(&quot;Order {OrderId} created&quot;, order.Id);

        return OrderResult.Success(order);
    }
}
</code></pre>
<p>Each handler is a small, focused class with a single responsibility. You can test it in isolation, reason about it in isolation, and change it without affecting other handlers.</p>
<p>CQRS takes this further by separating the read side (queries) from the write side (commands). The read model can be optimized for fast queries while the write model is optimized for business rule enforcement — two different actors with two different needs.</p>
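<p>For symmetry, the read side can be just as small. The following is a hypothetical query handler; <code>IOrderReadStore</code> and <code>OrderDto</code> are assumed names, not types from the article:</p>
<pre><code class="language-csharp">public record GetOrderByIdQuery(int OrderId) : IRequest&lt;OrderDto?&gt;;

public class GetOrderByIdHandler : IRequestHandler&lt;GetOrderByIdQuery, OrderDto?&gt;
{
    // A read store optimized for queries: denormalized views, no change tracking.
    private readonly IOrderReadStore _readStore;

    public GetOrderByIdHandler(IOrderReadStore readStore) =&gt; _readStore = readStore;

    public Task&lt;OrderDto?&gt; Handle(GetOrderByIdQuery request, CancellationToken cancellationToken) =&gt;
        _readStore.FindAsync(request.OrderId, cancellationToken);
}
</code></pre>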
<h3 id="the-options-pattern">The Options Pattern</h3>
<p>ASP.NET Core's Options pattern (<code>IOptions&lt;T&gt;</code>) is an SRP-friendly way to manage configuration. Instead of one giant configuration object, you create focused configuration classes:</p>
<pre><code class="language-csharp">public class SmtpSettings
{
    public string Host { get; set; } = &quot;&quot;;
    public int Port { get; set; } = 587;
    public string Username { get; set; } = &quot;&quot;;
    public string Password { get; set; } = &quot;&quot;;
    public bool UseSsl { get; set; } = true;
}

public class InvoiceSettings
{
    public decimal DefaultTaxRate { get; set; } = 0.08m;
    public int PaymentTermDays { get; set; } = 30;
    public string CompanyName { get; set; } = &quot;&quot;;
}
</code></pre>
<p>Each settings class is responsible to one actor. The IT team manages SMTP settings. The finance team manages invoice settings. Changes to email configuration never accidentally affect invoice configuration.</p>
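<p>Wiring the two classes up keeps that separation intact. A sketch, assuming configuration sections named <code>&quot;Smtp&quot;</code> and <code>&quot;Invoicing&quot;</code>:</p>
<pre><code class="language-csharp">// Program.cs: each settings class binds to its own configuration section.
builder.Services.Configure&lt;SmtpSettings&gt;(builder.Configuration.GetSection(&quot;Smtp&quot;));
builder.Services.Configure&lt;InvoiceSettings&gt;(builder.Configuration.GetSection(&quot;Invoicing&quot;));
</code></pre>
<pre><code class="language-csharp">// A consumer receives only the slice it needs.
public class SmtpEmailSender : IEmailSender
{
    private readonly SmtpSettings _settings;

    // Invoice configuration never reaches this class, and vice versa.
    public SmtpEmailSender(IOptions&lt;SmtpSettings&gt; options) =&gt; _settings = options.Value;

    // ... send via _settings.Host, _settings.Port ...
}
</code></pre>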
<h3 id="the-specification-pattern">The Specification Pattern</h3>
<p>The Specification pattern separates query criteria from query execution:</p>
<pre><code class="language-csharp">public class ActiveProductsInCategorySpec : Specification&lt;Product&gt;
{
    public ActiveProductsInCategorySpec(string category)
    {
        Where(p =&gt; p.IsActive &amp;&amp; p.Category == category);
        OrderBy(p =&gt; p.Name);
        Take(50);
    }
}

public class LowStockProductsSpec : Specification&lt;Product&gt;
{
    public LowStockProductsSpec(int threshold)
    {
        Where(p =&gt; p.Stock &lt; threshold);
        OrderByDescending(p =&gt; p.Stock);
    }
}
</code></pre>
<p>Each specification has a single responsibility: defining one set of query criteria. The repository handles execution. This keeps the repository from becoming a dumping ground for query methods.</p>
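<p>The <code>Specification&lt;T&gt;</code> base class is typically supplied by a library (Ardalis.Specification is a popular choice), but a minimal in-memory version clarifies the division of labor: the specification describes, the evaluator executes. The sketch below uses plain delegates; real libraries use <code>Expression&lt;Func&lt;T, bool&gt;&gt;</code> so that providers such as EF Core can translate the criteria to SQL.</p>
<pre><code class="language-csharp">// Minimal in-memory specification base: it records criteria but never executes them.
public class Specification&lt;T&gt;
{
    public Func&lt;T, bool&gt;? Criteria { get; private set; }
    public Comparison&lt;T&gt;? Order { get; private set; }
    public int? Limit { get; private set; }

    protected void Where(Func&lt;T, bool&gt; predicate) =&gt; Criteria = predicate;

    protected void OrderBy&lt;TKey&gt;(Func&lt;T, TKey&gt; key) where TKey : IComparable&lt;TKey&gt; =&gt;
        Order = (a, b) =&gt; key(a).CompareTo(key(b));

    protected void OrderByDescending&lt;TKey&gt;(Func&lt;T, TKey&gt; key) where TKey : IComparable&lt;TKey&gt; =&gt;
        Order = (a, b) =&gt; key(b).CompareTo(key(a));

    protected void Take(int count) =&gt; Limit = count;
}

// Execution lives in exactly one place.
public static class SpecificationEvaluator
{
    public static List&lt;T&gt; Apply&lt;T&gt;(IEnumerable&lt;T&gt; source, Specification&lt;T&gt; spec)
    {
        var items = source.Where(x =&gt; spec.Criteria?.Invoke(x) ?? true).ToList();
        if (spec.Order is not null)
            items.Sort(spec.Order);
        if (spec.Limit is int n)
            items = items.Take(n).ToList();
        return items;
    }
}
</code></pre>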
<h2 id="part-10-srp-at-the-architectural-level">Part 10: SRP at the Architectural Level</h2>
<p>SRP applies beyond individual classes. At the architectural level, it guides how you structure assemblies, projects, and services.</p>
<h3 id="project-structure">Project Structure</h3>
<p>A common .NET project structure reflects SRP at the assembly level:</p>
<pre><code>src/
  MyApp.Domain/           # Business entities, value objects, domain events
  MyApp.Application/      # Use cases, commands, queries, interfaces
  MyApp.Infrastructure/   # Database access, file system, external APIs
  MyApp.Web/              # HTTP endpoints, view models, middleware
</code></pre>
<p>Each project has one responsibility. <code>Domain</code> knows nothing about databases. <code>Infrastructure</code> knows nothing about HTTP. <code>Web</code> knows nothing about SQL. Changes to the database schema affect only <code>Infrastructure</code>. Changes to the API contract affect only <code>Web</code>.</p>
<h3 id="microservices-and-srp">Microservices and SRP</h3>
<p>Each microservice should have a single responsibility — serving one bounded context. A <code>UserService</code> that handles authentication, profile management, and recommendation engines is violating SRP at the service level.</p>
<p>The cost of splitting too aggressively at the microservice level is high — distributed systems are complex. But the cost of a monolithic service that several teams must coordinate to change and deploy is usually higher. SRP helps you find the right boundaries.</p>
<h3 id="the-vertical-slice-architecture">The Vertical Slice Architecture</h3>
<p>Vertical slice architecture, popularized by Jimmy Bogard, organizes code by feature rather than by layer. Each &quot;slice&quot; contains everything needed for one use case: the endpoint, the handler, the validator, and even the data access.</p>
<pre><code>Features/
  CreateOrder/
    CreateOrderEndpoint.cs
    CreateOrderHandler.cs
    CreateOrderValidator.cs
    CreateOrderRequest.cs
  GetOrderById/
    GetOrderByIdEndpoint.cs
    GetOrderByIdHandler.cs
    GetOrderByIdResponse.cs
</code></pre>
<p>This is SRP applied at the feature level. Each folder is responsible to one use case — one actor's need. Changes to order creation never touch order retrieval. It is a different organizational principle than the traditional layered architecture, but it serves the same SRP goal: isolating the things that change for different reasons.</p>
<h2 id="part-11-when-srp-goes-wrong-over-engineering-and-class-explosion">Part 11: When SRP Goes Wrong — Over-Engineering and Class Explosion</h2>
<p>Every principle, taken to its extreme, becomes a vice. SRP is no exception.</p>
<h3 id="the-one-method-per-class-trap">The One-Method-Per-Class Trap</h3>
<p>Some developers, upon learning SRP, start creating classes like:</p>
<pre><code class="language-csharp">public class UserEmailValidator
{
    public bool Validate(string email) =&gt; email.Contains('@');
}

public class UserNameValidator
{
    public bool Validate(string name) =&gt; !string.IsNullOrWhiteSpace(name);
}

public class UserAgeValidator
{
    public bool Validate(int age) =&gt; age &gt;= 18;
}

public class UserPasswordValidator
{
    public bool Validate(string password) =&gt; password.Length &gt;= 8;
}
</code></pre>
<p>Four classes for what should be one <code>UserValidator</code> class. All four serve the same actor (whoever defines the user validation rules), and all four change for the same reason (when validation rules change). Splitting them is not SRP — it is fragmentation.</p>
<p>The correct application of SRP groups them together:</p>
<pre><code class="language-csharp">public class UserValidator
{
    public ValidationResult Validate(User user)
    {
        var errors = new List&lt;string&gt;();

        if (string.IsNullOrWhiteSpace(user.Name))
            errors.Add(&quot;Name is required&quot;);

        if (!user.Email.Contains('@'))
            errors.Add(&quot;Invalid email format&quot;);

        if (user.Age &lt; 18)
            errors.Add(&quot;Must be at least 18 years old&quot;);

        if (user.Password.Length &lt; 8)
            errors.Add(&quot;Password must be at least 8 characters&quot;);

        return new ValidationResult(errors);
    }
}
</code></pre>
<p>One class, one responsibility: validating users. The fact that it checks multiple fields does not make it multi-responsible.</p>
<h3 id="the-interface-explosion-problem">The Interface Explosion Problem</h3>
<p>Over-zealous SRP can also lead to an explosion of interfaces:</p>
<pre><code class="language-csharp">public interface IUserCreator { Task CreateAsync(User user); }
public interface IUserUpdater { Task UpdateAsync(User user); }
public interface IUserDeleter { Task DeleteAsync(int id); }
public interface IUserFinder { Task&lt;User?&gt; FindAsync(int id); }
public interface IUserSearcher { Task&lt;List&lt;User&gt;&gt; SearchAsync(string query); }
</code></pre>
<p>Five interfaces for what should be one <code>IUserRepository</code>. Again, all five serve the same actor and change for the same reason. The Interface Segregation Principle (ISP) says clients should not depend on methods they do not use — but that does not mean every method gets its own interface. It means you split along client boundaries, not along method boundaries.</p>
<h3 id="finding-the-right-granularity">Finding the Right Granularity</h3>
<p>The right level of granularity depends on your actual actors. Ask these questions:</p>
<ol>
<li><strong>Who will ask me to change this class?</strong> If the answer is one person or one team, it is probably fine.</li>
<li><strong>When I change one method, do I risk breaking the others?</strong> If the methods are independent and non-interacting, they might belong in separate classes. If they share state and logic, they probably belong together.</li>
<li><strong>Can I test this class without complex setup?</strong> If you need ten mocks in your test constructor, the class is doing too much. If you need zero dependencies, you might have split too aggressively and lost the ability to verify meaningful behavior.</li>
<li><strong>Would a new team member understand this class in five minutes?</strong> If the class is 30 lines and does one obvious thing, great. If it is 30 lines spread across five files in three folders, you have traded one kind of complexity for another.</li>
</ol>
<h2 id="part-12-srp-and-related-principles">Part 12: SRP and Related Principles</h2>
<p>SRP does not exist in isolation. It interacts with the other SOLID principles and with broader design principles.</p>
<h3 id="srp-and-the-openclosed-principle-ocp">SRP and the Open/Closed Principle (OCP)</h3>
<p>OCP says that software entities should be open for extension but closed for modification. SRP makes OCP easier to achieve. When a class has a single responsibility, you can extend its behavior by creating a new class rather than modifying the existing one.</p>
<p>For example, if <code>InvoiceCalculator</code> only handles standard tax calculation, you can create a <code>DiscountedInvoiceCalculator</code> that extends it (via inheritance or composition) rather than adding discount logic to the existing class. SRP keeps each class focused enough that extension points are clear.</p>
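<p>A minimal sketch of that extension by composition, with hypothetical names: the discounted variant wraps the original instead of modifying it.</p>
<pre><code class="language-csharp">public interface IInvoiceCalculator
{
    decimal Total(decimal subtotal);
}

public class StandardInvoiceCalculator : IInvoiceCalculator
{
    private readonly decimal _taxRate;

    public StandardInvoiceCalculator(decimal taxRate) =&gt; _taxRate = taxRate;

    public decimal Total(decimal subtotal) =&gt; subtotal + subtotal * _taxRate;
}

// New behavior lives in a new class; the original stays closed for modification.
public class DiscountedInvoiceCalculator : IInvoiceCalculator
{
    private readonly IInvoiceCalculator _inner;
    private readonly decimal _discountRate;

    public DiscountedInvoiceCalculator(IInvoiceCalculator inner, decimal discountRate) =&gt;
        (_inner, _discountRate) = (inner, discountRate);

    public decimal Total(decimal subtotal) =&gt;
        _inner.Total(subtotal * (1 - _discountRate));
}
</code></pre>
<p>Because both classes sit behind the same narrow interface, callers cannot tell the decorated calculator from the plain one.</p>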
<h3 id="srp-and-the-liskov-substitution-principle-lsp">SRP and the Liskov Substitution Principle (LSP)</h3>
<p>LSP says that subtypes must be substitutable for their base types. SRP violations often lead to LSP violations. When a base class has multiple responsibilities, subtypes may need to override some behavior while leaving others unchanged — and the overrides can break expectations.</p>
<p>Consider a base class <code>Notification</code> with methods <code>Send()</code> and <code>Log()</code>. An <code>SmsNotification</code> subclass might override <code>Send()</code> but need a completely different <code>Log()</code> implementation because SMS logging has different requirements. The two responsibilities (sending and logging) should have been separate from the start.</p>
<h3 id="srp-and-the-interface-segregation-principle-isp">SRP and the Interface Segregation Principle (ISP)</h3>
<p>ISP is SRP applied to interfaces. A &quot;fat&quot; interface that serves multiple actors should be split into smaller, focused interfaces — each serving one actor.</p>
<pre><code class="language-csharp">// Fat interface serving multiple actors
public interface IUserService
{
    Task&lt;User&gt; GetByIdAsync(int id);        // Read by many
    Task CreateAsync(User user);             // Write by admin
    Task DeactivateAsync(int id);            // Write by compliance
    Task&lt;UserReport&gt; GenerateReportAsync();   // Read by analytics
}

// Split by actor
public interface IUserReader
{
    Task&lt;User&gt; GetByIdAsync(int id);
}

public interface IUserAdmin
{
    Task CreateAsync(User user);
    Task DeactivateAsync(int id);
}

public interface IUserReporting
{
    Task&lt;UserReport&gt; GenerateReportAsync();
}
</code></pre>
<h3 id="srp-and-the-dependency-inversion-principle-dip">SRP and the Dependency Inversion Principle (DIP)</h3>
<p>DIP says that high-level modules should not depend on low-level modules — both should depend on abstractions. SRP makes this practical. When each class has a single responsibility, the abstractions (interfaces) it exposes are small and focused. A <code>IInvoiceCalculator</code> interface with two methods is easy to mock and easy to implement. A <code>IInvoiceService</code> interface with fifteen methods spanning three responsibilities is a pain point.</p>
<h3 id="srp-and-separation-of-concerns">SRP and Separation of Concerns</h3>
<p>Separation of Concerns is the broader principle from which SRP derives. While SRP focuses on the class level and defines &quot;concern&quot; as &quot;an actor's needs,&quot; Separation of Concerns applies at every level — from the lines within a method to the services in a distributed system.</p>
<p>The MVC pattern is Separation of Concerns at the UI level: Model (data), View (presentation), Controller (user input). The layered architecture is Separation of Concerns at the application level: presentation, business logic, data access. SRP provides a specific, testable criterion for evaluating whether concerns are adequately separated.</p>
<h2 id="part-13-applying-srp-in-blazor-webassembly">Part 13: Applying SRP in Blazor WebAssembly</h2>
<p>Since Observer Magazine is built on Blazor WebAssembly, let us look at how SRP applies specifically to Blazor components and services.</p>
<h3 id="components-should-not-contain-business-logic">Components Should Not Contain Business Logic</h3>
<p>A Blazor component's responsibility is rendering UI and handling user interactions. Business logic — calculations, validations, data transformations — belongs in services.</p>
<pre><code class="language-csharp">// Bad: business logic in the component
@code {
    private List&lt;CartItem&gt; _items = new();

    private decimal CalculateTotal()
    {
        var subtotal = _items.Sum(i =&gt; i.Price * i.Quantity);
        var discount = subtotal &gt; 100 ? subtotal * 0.10m : 0;
        var tax = (subtotal - discount) * 0.08m;
        return subtotal - discount + tax;
    }

    private bool CanCheckout()
    {
        return _items.Count &gt; 0
            &amp;&amp; _items.All(i =&gt; i.Quantity &gt; 0)
            &amp;&amp; _items.Sum(i =&gt; i.Price * i.Quantity) &gt;= 5.00m;
    }
}
</code></pre>
<pre><code class="language-csharp">// Good: component delegates to a service
@inject ICartService CartService

@code {
    private List&lt;CartItem&gt; _items = new();
    private decimal _total;
    private bool _canCheckout;

    private void Refresh()
    {
        _total = CartService.CalculateTotal(_items);
        _canCheckout = CartService.CanCheckout(_items);
    }
}
</code></pre>
<p>The component renders and delegates. The service calculates and validates. Each can be tested independently.</p>
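<p>The injected <code>ICartService</code> might look like the sketch below. It simply relocates the rules from the &quot;bad&quot; component; the thresholds are the same ones shown there, and the type names are assumptions:</p>
<pre><code class="language-csharp">public record CartItem(string Name, decimal Price, int Quantity);

public interface ICartService
{
    decimal CalculateTotal(IReadOnlyList&lt;CartItem&gt; items);
    bool CanCheckout(IReadOnlyList&lt;CartItem&gt; items);
}

public class CartService : ICartService
{
    private const decimal DiscountThreshold = 100m;
    private const decimal DiscountRate = 0.10m;
    private const decimal TaxRate = 0.08m;
    private const decimal MinimumOrder = 5.00m;

    public decimal CalculateTotal(IReadOnlyList&lt;CartItem&gt; items)
    {
        var subtotal = items.Sum(i =&gt; i.Price * i.Quantity);
        var discount = subtotal &gt; DiscountThreshold ? subtotal * DiscountRate : 0m;
        var tax = (subtotal - discount) * TaxRate;
        return subtotal - discount + tax;
    }

    public bool CanCheckout(IReadOnlyList&lt;CartItem&gt; items) =&gt;
        items.Count &gt; 0
        &amp;&amp; items.All(i =&gt; i.Quantity &gt; 0)
        &amp;&amp; items.Sum(i =&gt; i.Price * i.Quantity) &gt;= MinimumOrder;
}
</code></pre>
<p>The pricing rules now have plain unit tests with no component rendering involved, and the component shrinks to pure UI state.</p>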
<h3 id="separate-data-fetching-from-data-presentation">Separate Data Fetching from Data Presentation</h3>
<p>A common pattern in Blazor is to fetch data in <code>OnInitializedAsync</code> and render it in the markup. When the fetch logic becomes complex (caching, error handling, retry logic), extract it into a service.</p>
<pre><code class="language-csharp">// The component focuses on UI state management
@inject IBlogService BlogService

@if (_loading)
{
    &lt;p&gt;Loading...&lt;/p&gt;
}
else if (_error is not null)
{
    &lt;p class=&quot;error&quot;&gt;@_error&lt;/p&gt;
}
else
{
    @foreach (var post in _posts)
    {
        &lt;BlogCard Post=&quot;@post&quot; /&gt;
    }
}

@code {
    private BlogPostMetadata[] _posts = [];
    private bool _loading = true;
    private string? _error;

    protected override async Task OnInitializedAsync()
    {
        try
        {
            _posts = await BlogService.GetPostsAsync();
        }
        catch (Exception)
        {
            _error = &quot;Failed to load blog posts. Please try again later.&quot;;
        }
        finally
        {
            _loading = false;
        }
    }
}
</code></pre>
<p>The component handles UI states (loading, error, success). The <code>BlogService</code> handles HTTP calls, caching, and deserialization. The component does not know or care where the data comes from.</p>
<h3 id="css-isolation-and-srp">CSS Isolation and SRP</h3>
<p>Blazor's component-scoped CSS (<code>.razor.css</code> files) is an application of SRP to styles. Each component owns its own styles. Changes to the <code>BlogCard</code> component's appearance do not affect <code>ProductCard</code>. This eliminates the &quot;CSS blast radius&quot; problem where a global style change breaks unrelated pages.</p>
<pre><code class="language-css">/* BlogCard.razor.css — only affects BlogCard */
.blog-card {
    border: 1px solid var(--border-color);
    padding: 1rem;
    border-radius: 8px;
    margin-bottom: 1rem;
}

.blog-card h3 {
    margin-top: 0;
}
</code></pre>
<p>This is exactly the same principle as SRP for classes — scope the concern so that changes in one area do not ripple into others.</p>
<h2 id="part-14-a-checklist-for-evaluating-srp">Part 14: A Checklist for Evaluating SRP</h2>
<p>Here is a practical checklist you can apply to any class, module, or service in your codebase. Not every &quot;yes&quot; answer means you have a violation — these are signals, not rules. But if you answer &quot;yes&quot; to three or more, it is worth investigating.</p>
<p><strong>Actor Analysis:</strong></p>
<ul>
<li>Can you identify more than one stakeholder or team who might request changes to this class?</li>
<li>Have you received change requests from different sources that both touched this class?</li>
<li>Does this class appear in merge conflicts between developers working on unrelated features?</li>
</ul>
<p><strong>Dependency Analysis:</strong></p>
<ul>
<li>Does the constructor take more than four or five dependencies?</li>
<li>Are any dependencies completely unused by some methods?</li>
<li>Can you group the dependencies into two or more unrelated clusters?</li>
</ul>
<p><strong>Method Analysis:</strong></p>
<ul>
<li>Do some methods operate on a completely different subset of fields than others?</li>
<li>Would you need the word &quot;and&quot; to describe what this class does?</li>
<li>Does the class mix different levels of abstraction (e.g., business logic and SQL strings)?</li>
</ul>
<p><strong>Testing Analysis:</strong></p>
<ul>
<li>Do you need complex test setup that includes mock objects the test never actually exercises?</li>
<li>Is it hard to name your test class because the class under test does not have a clear, single purpose?</li>
<li>Do tests for one concern break when you change code related to a different concern?</li>
</ul>
<p><strong>Naming Analysis:</strong></p>
<ul>
<li>Does the class name include words like &quot;Manager,&quot; &quot;Processor,&quot; &quot;Handler,&quot; &quot;Service,&quot; or &quot;Utility&quot; without further qualification? (These are often catch-all names for multi-responsibility classes.)</li>
<li>Would adding a more specific suffix improve clarity? For example, <code>OrderProcessor</code> could be split into <code>OrderValidator</code>, <code>OrderPricer</code>, and <code>OrderPersister</code>.</li>
</ul>
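<p>The dependency-clustering signal from the checklist is easiest to see in a constructor. In this hypothetical example (every interface name is invented for illustration), the dependencies fall into two unrelated groups — a strong hint that two classes are hiding inside one:</p>
<pre><code class="language-csharp">public class ReportingService
{
    public ReportingService(
        // Cluster 1: report generation
        ITemplateEngine templates,
        IPdfRenderer pdfRenderer,
        // Cluster 2: delivery and notification
        ISmtpClient smtpClient,
        IUserDirectory userDirectory)
    {
        // ...
    }
}
</code></pre>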
<h2 id="part-15-srp-in-practice-a-decision-framework">Part 15: SRP in Practice — A Decision Framework</h2>
<p>Theory is important, but daily development requires practical decisions. Here is a framework for deciding when and how to apply SRP.</p>
<h3 id="when-to-split">When to Split</h3>
<p>Split a class when:</p>
<ol>
<li><p><strong>Different actors need different changes.</strong> This is the textbook case. If the finance team wants to change how discounts work and the marketing team wants to change how promotions display, and both changes touch the same class, split it.</p>
</li>
<li><p><strong>Testing is painful.</strong> If you need ten mocks to test one method, the class is doing too much. Split it so each piece can be tested with minimal setup.</p>
</li>
<li><p><strong>The class is growing without bound.</strong> If a class keeps accumulating methods every sprint, it is probably a dumping ground. New methods should make you ask: &quot;Does this belong here, or does it need a new home?&quot;</p>
</li>
<li><p><strong>Merge conflicts are frequent.</strong> If two developers keep stepping on each other in the same file, the file has too many responsibilities.</p>
</li>
</ol>
<h3 id="when-not-to-split">When NOT to Split</h3>
<p>Do not split when:</p>
<ol>
<li><p><strong>All methods serve the same actor.</strong> A class with ten methods that all serve the same actor's needs is not violating SRP, even if it feels large.</p>
</li>
<li><p><strong>Splitting would scatter related logic.</strong> If understanding one concern requires jumping between five files in three folders, you have gone too far. Cohesion matters.</p>
</li>
<li><p><strong>The &quot;violation&quot; is purely theoretical.</strong> If a class technically serves two actors but one of them has not changed in three years and is unlikely to ever change, the violation is harmless. Refactor when the pain is real, not when the principle is theoretically violated.</p>
</li>
<li><p><strong>You are writing a prototype or spike.</strong> SRP matters most in code that will be maintained. If you are writing a throwaway prototype to test an idea, do not spend hours on perfect separation. Just make it work. If the prototype succeeds and becomes production code, then refactor.</p>
</li>
</ol>
<h3 id="the-refactoring-trigger">The Refactoring Trigger</h3>
<p>The best time to apply SRP is not during initial development — it is when you feel the pain of a violation. The second time you need to change a class for an unrelated reason, that is your signal. The first time might be coincidence. The second time is a pattern. Refactor on the second occurrence.</p>
<p>This aligns with the &quot;Rule of Three&quot; from Martin Fowler: the first time you do something, just do it. The second time, wince. The third time, refactor.</p>
<h2 id="part-16-common-srp-violations-in-the-wild">Part 16: Common SRP Violations in the Wild</h2>
<p>Let us catalog the SRP violations you are most likely to encounter in real .NET codebases.</p>
<h3 id="the-god-controller">The God Controller</h3>
<p>We covered this in Part 6, but it bears repeating because it is everywhere. A controller that validates input, applies business rules, accesses the database, and formats the response is the most common SRP violation in ASP.NET applications.</p>
<h3 id="the-entity-with-behavior">The Entity with Behavior</h3>
<p>Domain-driven design (DDD) encourages putting behavior on entities. But there is a line between &quot;behavior that belongs to this concept&quot; and &quot;behavior that belongs to a different actor.&quot;</p>
<pre><code class="language-csharp">// The entity has crossed the line
public class Order
{
    public int Id { get; set; }
    public List&lt;OrderItem&gt; Items { get; set; } = new();
    public decimal Total =&gt; Items.Sum(i =&gt; i.Total);

    // Fine: domain behavior
    public void AddItem(Product product, int quantity)
    {
        Items.Add(new OrderItem(product, quantity));
    }

    // Questionable: persistence concern
    public void SaveToDatabase(IDbConnection db)
    {
        db.Execute(&quot;INSERT INTO Orders ...&quot;, this);
    }

    // Violation: presentation concern
    public string ToEmailHtml()
    {
        return $&quot;&lt;h1&gt;Order #{Id}&lt;/h1&gt;...&quot;;
    }

    // Violation: external API concern
    public async Task SyncToErpAsync(IErpClient client)
    {
        await client.PostOrderAsync(this);
    }
}
</code></pre>
<p>The <code>AddItem</code> method is legitimate domain behavior — it enforces business rules about what can be added to an order. But <code>SaveToDatabase</code>, <code>ToEmailHtml</code>, and <code>SyncToErpAsync</code> serve completely different actors and belong in separate classes.</p>
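<p>A sketch of where those methods could live instead — the class names are illustrative, and the repository reuses the same Dapper-style <code>Execute</code> call from the entity above:</p>
<pre><code class="language-csharp">// Persistence concern: owned by the infrastructure actor
public class OrderRepository
{
    private readonly IDbConnection _db;

    public OrderRepository(IDbConnection db) =&gt; _db = db;

    public void Save(Order order) =&gt; _db.Execute(&quot;INSERT INTO Orders ...&quot;, order);
}

// Presentation concern: owned by whoever controls email templates
public class OrderEmailRenderer
{
    public string Render(Order order) =&gt; $&quot;&lt;h1&gt;Order #{order.Id}&lt;/h1&gt;...&quot;;
}
</code></pre>
<p>The <code>Order</code> entity keeps <code>AddItem</code> and its invariants; everything else moves out.</p>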
<h3 id="the-utility-class">The Utility Class</h3>
<pre><code class="language-csharp">public static class Helpers
{
    public static string FormatCurrency(decimal amount) { ... }
    public static bool IsValidEmail(string email) { ... }
    public static byte[] CompressGzip(byte[] data) { ... }
    public static DateTime ParseFlexibleDate(string input) { ... }
    public static string Slugify(string title) { ... }
    public static int LevenshteinDistance(string a, string b) { ... }
}
</code></pre>
<p>This class is a textbook example of <strong>coincidental cohesion</strong> — the lowest form. These methods have nothing in common except that someone did not know where else to put them. They should be in separate, well-named static classes: <code>CurrencyFormatter</code>, <code>EmailValidator</code>, <code>CompressionHelper</code>, <code>DateParser</code>, <code>SlugGenerator</code>, <code>StringDistance</code>.</p>
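<p>As a hedged sketch of the split — the implementations below are deliberately simplified stand-ins, not production-grade slug rules or email validation:</p>
<pre><code class="language-csharp">// Each static class now has a single, nameable purpose
public static class SlugGenerator
{
    public static string Slugify(string title) =&gt;
        string.Join(&quot;-&quot;, title.ToLowerInvariant()
            .Split(' ', StringSplitOptions.RemoveEmptyEntries));
}

public static class EmailValidator
{
    // Simplified check for illustration only
    public static bool IsValidEmail(string email) =&gt;
        !string.IsNullOrWhiteSpace(email) &amp;&amp; email.IndexOf('@') &gt; 0;
}
</code></pre>
<p>A caller now reads <code>SlugGenerator.Slugify(title)</code> instead of rummaging through <code>Helpers</code>.</p>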
<h3 id="the-configuration-dumping-ground">The Configuration Dumping Ground</h3>
<pre><code class="language-csharp">public class AppSettings
{
    public string DatabaseConnectionString { get; set; } = &quot;&quot;;
    public string SmtpHost { get; set; } = &quot;&quot;;
    public int SmtpPort { get; set; } = 587;
    public string JwtSecret { get; set; } = &quot;&quot;;
    public int JwtExpirationMinutes { get; set; } = 60;
    public string StorageBucket { get; set; } = &quot;&quot;;
    public decimal DefaultTaxRate { get; set; } = 0.08m;
    public int MaxLoginAttempts { get; set; } = 5;
    public string SupportEmail { get; set; } = &quot;&quot;;
}
</code></pre>
<p>Every class in the system depends on <code>AppSettings</code>, but each class only uses one or two properties. Use the Options pattern to split this into focused configuration classes. We covered this in Part 9.</p>
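<p>A sketch of that split using the Options pattern — the section names (<code>&quot;Smtp&quot;</code>, <code>&quot;Jwt&quot;</code>) and class names are assumptions for illustration, and the registration lines belong in <code>Program.cs</code>:</p>
<pre><code class="language-csharp">public class SmtpOptions
{
    public string Host { get; set; } = &quot;&quot;;
    public int Port { get; set; } = 587;
}

public class JwtOptions
{
    public string Secret { get; set; } = &quot;&quot;;
    public int ExpirationMinutes { get; set; } = 60;
}

// Program.cs: bind each class to its own configuration section
builder.Services.Configure&lt;SmtpOptions&gt;(builder.Configuration.GetSection(&quot;Smtp&quot;));
builder.Services.Configure&lt;JwtOptions&gt;(builder.Configuration.GetSection(&quot;Jwt&quot;));

// A consumer depends only on the slice it actually uses
public class EmailSender(IOptions&lt;SmtpOptions&gt; options)
{
    private readonly SmtpOptions _smtp = options.Value;
}
</code></pre>
<p>A change to the JWT configuration shape no longer touches anything email-related.</p>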
<h2 id="part-17-srp-across-the-software-development-lifecycle">Part 17: SRP Across the Software Development Lifecycle</h2>
<p>SRP is not just a coding principle. It applies to processes, teams, and tooling.</p>
<h3 id="srp-in-source-control">SRP in Source Control</h3>
<p>Each commit should have a single responsibility — one logical change. A commit that &quot;adds discount feature, fixes email bug, and updates NuGet packages&quot; is the source control equivalent of a God class. It is harder to review, harder to revert, and harder to bisect.</p>
<pre><code class="language-bash"># Bad: one commit doing three things
git commit -m &quot;Add discount feature, fix email bug, update packages&quot;

# Good: three focused commits
git commit -m &quot;feat: add percentage-based discount calculation&quot;
git commit -m &quot;fix: correct email template encoding for special characters&quot;
git commit -m &quot;chore: update NuGet packages to latest stable versions&quot;
</code></pre>
<h3 id="srp-in-cicd-pipelines">SRP in CI/CD Pipelines</h3>
<p>Each stage in your pipeline should have a single responsibility:</p>
<pre><code class="language-yaml">jobs:
  build:        # Compile the code
  test:         # Run the tests
  analyze:      # Run static analysis
  package:      # Create deployment artifacts
  deploy-staging: # Deploy to staging
  deploy-prod:  # Deploy to production
</code></pre>
<p>Mixing build and test in a single stage makes failures harder to diagnose. Mixing deploy with test makes rollbacks harder to orchestrate.</p>
<h3 id="srp-in-documentation">SRP in Documentation</h3>
<p>Each documentation file should cover one topic. A single README that explains installation, architecture, API reference, deployment, and troubleshooting is a God document. Split it:</p>
<pre><code>docs/
  getting-started.md
  architecture.md
  api-reference.md
  deployment.md
  troubleshooting.md
</code></pre>
<h3 id="srp-in-team-organization">SRP in Team Organization</h3>
<p>Conway's Law says that organizations design systems that mirror their communication structures. If one team owns both the billing system and the notification system, those systems will tend to be coupled. SRP at the team level means giving each team ownership of one area of the business — and the code boundaries should follow.</p>
<h2 id="part-18-summary-and-key-takeaways">Part 18: Summary and Key Takeaways</h2>
<p>The Single Responsibility Principle, correctly understood, is not about class size, method count, or even the number of &quot;things&quot; a class does. It is about the number of actors — the groups of stakeholders whose needs drive changes to your code.</p>
<p>Here are the key takeaways:</p>
<p><strong>The definition:</strong> A module should be responsible to one, and only one, actor.</p>
<p><strong>The purpose:</strong> To prevent changes requested by one actor from accidentally breaking functionality used by another actor.</p>
<p><strong>The mechanism:</strong> Group together the things that change for the same reasons. Separate the things that change for different reasons.</p>
<p><strong>The balance:</strong> SRP is a guideline, not a law. Applying it dogmatically leads to class explosion and unnecessary complexity. Ignoring it leads to fragile, untestable, conflict-prone code. The sweet spot is somewhere in between, guided by real pain points rather than theoretical purity.</p>
<p><strong>The practice:</strong> You do not need to get SRP right on the first pass. Write the code, feel the pain, then refactor. The second time you change a class for an unrelated reason is your signal to split.</p>
<p><strong>The test:</strong> If you can test a class with simple setup and focused assertions, SRP is probably in good shape. If testing requires a Christmas tree of mock objects, something needs splitting.</p>
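<p>To make that concrete, here is a hedged sketch of what &quot;simple setup&quot; looks like — <code>DiscountCalculator</code> is a hypothetical class with an illustrative discount rule, and the test uses xUnit:</p>
<pre><code class="language-csharp">public class DiscountCalculator
{
    public decimal Calculate(decimal total) =&gt; total &gt; 100 ? 0.1m : 0m;
}

public class DiscountCalculatorTests
{
    [Fact]
    public void Orders_over_100_get_a_ten_percent_discount()
    {
        // No mocks, no fixtures — one responsibility, one assertion
        var calculator = new DiscountCalculator();
        Assert.Equal(0.1m, calculator.Calculate(150m));
    }
}
</code></pre>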
<h2 id="resources">Resources</h2>
<ul>
<li>Martin, Robert C. <em>Agile Software Development, Principles, Patterns, and Practices.</em> Pearson, 2003. The book where SRP was first formalized as part of the SOLID principles.</li>
<li>Martin, Robert C. &quot;The Single Responsibility Principle.&quot; <a href="https://blog.cleancoder.com/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html">blog.cleancoder.com/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html</a>. The 2014 blog post clarifying the &quot;reason to change&quot; definition.</li>
<li>Martin, Robert C. <em>Clean Architecture: A Craftsman's Guide to Software Structure and Design.</em> Pearson, 2017. Contains the final formulation of SRP with the &quot;actor&quot; definition.</li>
<li>DeMarco, Tom. <em>Structured Analysis and System Specification.</em> Yourdon Press, 1978. The origin of the cohesion concept that SRP builds upon.</li>
<li>Page-Jones, Meilir. <em>The Practical Guide to Structured Systems Design.</em> Yourdon Press, 1980. Formalizes the spectrum of cohesion types.</li>
<li>Fowler, Martin. <em>Refactoring: Improving the Design of Existing Code.</em> 2nd ed. Addison-Wesley, 2018. Practical techniques for refactoring toward better responsibility separation. <a href="https://refactoring.com/">refactoring.com</a></li>
<li>Microsoft. &quot;Dependency injection in ASP.NET Core.&quot; <a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection">learn.microsoft.com/aspnet/core/fundamentals/dependency-injection</a>. Official documentation on DI, which works hand-in-hand with SRP.</li>
<li>Microsoft. &quot;ASP.NET Core Blazor component-scoped CSS.&quot; <a href="https://learn.microsoft.com/en-us/aspnet/core/blazor/components/css-isolation">learn.microsoft.com/aspnet/core/blazor/components/css-isolation</a>. CSS isolation as SRP applied to component styles.</li>
<li>Bogard, Jimmy. &quot;Vertical Slice Architecture.&quot; <a href="https://www.jimmybogard.com/vertical-slice-architecture/">jimmybogard.com/vertical-slice-architecture</a>. An alternative to layered architecture that applies SRP at the feature level.</li>
<li>DigitalOcean. &quot;SOLID: The First Five Principles of Object-Oriented Design.&quot; <a href="https://www.digitalocean.com/community/conceptual-articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design">digitalocean.com/community/conceptual-articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design</a>. A thorough walkthrough of all five SOLID principles with code examples.</li>
</ul>
]]></content:encoded>
      <category>solid</category>
      <category>design-principles</category>
      <category>csharp</category>
      <category>dotnet</category>
      <category>architecture</category>
      <category>best-practices</category>
      <category>deep-dive</category>
    </item>
    <item>
      <title>SOLID Principles: A Complete Guide to Writing Clean, Maintainable Object-Oriented Code</title>
      <link>https://observermagazine.github.io/blog/solid-principles</link>
      <description>An exhaustive deep dive into all five SOLID principles — Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion — with C# examples, historical context, real-world scenarios, common violations, and practical guidance for .NET developers.</description>
      <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/solid-principles</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<p>If you have been writing software for any meaningful length of time, you have almost certainly felt the slow creep of rot. A codebase that was once small and elegant becomes tangled and fragile. A class that started with thirty lines now has three hundred. A change in one corner of the system triggers failures in another. You deploy on a Friday afternoon and your phone buzzes all weekend.</p>
<p>The SOLID principles are a set of five design guidelines that exist precisely to fight that decay. They are not a silver bullet, nor are they a rigid checklist that you must follow dogmatically in every file you write. They are, however, among the most battle-tested heuristics in object-oriented programming for keeping code maintainable, testable, and extensible over the life of a project.</p>
<p>In this article, we will work through all five principles in full detail: where they came from, what they mean in precise terms, how to apply them in C# and .NET, how to spot violations, and what tradeoffs to keep in mind. Every principle gets real, compilable code examples — not toy pseudocode, but scenarios you might encounter in a production system.</p>
<h2 id="part-1-history-and-context-where-solid-came-from">Part 1: History and Context — Where SOLID Came From</h2>
<h3 id="the-origins">The Origins</h3>
<p>The five principles that compose the SOLID acronym were not all invented by the same person at the same time. They emerged over roughly a decade of thought by several computer scientists and were unified under a single banner by Robert C. Martin — universally known as &quot;Uncle Bob.&quot;</p>
<p>Robert C. Martin first collected and articulated these principles in his 2000 paper <em>Design Principles and Design Patterns</em>, where he described the symptoms of rotting software (rigidity, fragility, immobility, viscosity) and proposed a set of principles to combat them. The actual acronym &quot;SOLID&quot; was coined around 2004 by Michael Feathers, who rearranged the initial letters of the five principles into a memorable word.</p>
<p>But the individual principles have deeper roots:</p>
<ul>
<li><strong>Single Responsibility Principle (SRP)</strong>: Articulated by Robert C. Martin, drawing on ideas about cohesion that go back to Tom DeMarco and Meilir Page-Jones in the 1970s and 1980s.</li>
<li><strong>Open/Closed Principle (OCP)</strong>: First defined by Bertrand Meyer in his 1988 book <em>Object-Oriented Software Construction</em>. Meyer's original formulation relied on implementation inheritance; Martin later reinterpreted it using polymorphism and abstraction.</li>
<li><strong>Liskov Substitution Principle (LSP)</strong>: Introduced by Barbara Liskov in her 1987 keynote <em>Data Abstraction and Hierarchy</em>, and formalized in a 1994 paper with Jeannette Wing. It draws on Bertrand Meyer's Design by Contract concepts.</li>
<li><strong>Interface Segregation Principle (ISP)</strong>: Articulated by Robert C. Martin while consulting for Xerox in the 1990s. The principle arose from a real problem with a large, monolithic interface in a printer system.</li>
<li><strong>Dependency Inversion Principle (DIP)</strong>: Formulated by Robert C. Martin, building on the broader idea that high-level policy should not depend on low-level detail.</li>
</ul>
<p>Martin later expanded on all five in his 2003 book <em>Agile Software Development: Principles, Patterns, and Practices</em> and its 2006 C# edition with Micah Martin.</p>
<h3 id="why-solid-still-matters-in-2026">Why SOLID Still Matters in 2026</h3>
<p>You might wonder whether principles conceived in the late 1980s through the early 2000s are still relevant in an era of microservices, serverless functions, functional programming, and AI-assisted code generation. The answer is a firm yes — though with some nuance.</p>
<p>The underlying problems that SOLID addresses — managing dependencies, isolating change, reducing coupling, enabling testability — are universal to software engineering regardless of paradigm or architecture. A microservice with tangled internal dependencies is just as painful to maintain as a monolithic class with too many responsibilities. A serverless function that depends on concrete implementations is just as hard to test as a desktop application with the same problem.</p>
<p>What has changed is the scale at which these principles apply. In 2000, SOLID was primarily discussed in the context of classes within a single application. Today, the same ideas apply at the level of modules, packages, services, and even entire systems. The Single Responsibility Principle can be applied to a function, a class, a NuGet package, or a microservice. Dependency Inversion shows up in hexagonal architecture, clean architecture, and any system that uses ports and adapters.</p>
<p>Let us now work through each principle in detail.</p>
<h2 id="part-2-the-single-responsibility-principle-srp">Part 2: The Single Responsibility Principle (SRP)</h2>
<h3 id="the-definition">The Definition</h3>
<p>Robert C. Martin's original formulation of the Single Responsibility Principle is:</p>
<blockquote>
<p>A class should have one, and only one, reason to change.</p>
</blockquote>
<p>The key phrase is &quot;reason to change.&quot; A &quot;reason to change&quot; corresponds to a stakeholder or an actor — a person or group of people who might request a change to the software. If a class serves multiple actors, changes requested by one actor might break the code that serves another.</p>
<p>Martin later refined this definition in his 2017 book <em>Clean Architecture</em>:</p>
<blockquote>
<p>A module should be responsible to one, and only one, actor.</p>
</blockquote>
<p>This is a subtle but important shift. It is not about the class doing &quot;only one thing&quot; in the most literal sense — a class can have multiple methods and still have a single responsibility. The question is whether those methods all serve the same actor or the same axis of change.</p>
<h3 id="a-violation-in-the-wild">A Violation in the Wild</h3>
<p>Imagine you are building an employee management system. You write a class like this:</p>
<pre><code class="language-csharp">public class Employee
{
    public string Name { get; set; } = &quot;&quot;;
    public decimal Salary { get; set; }
    public string Department { get; set; } = &quot;&quot;;

    // Used by the HR department to calculate pay
    public decimal CalculatePay()
    {
        // Complex payroll logic: overtime, benefits, deductions
        return Salary * 1.0m; // simplified
    }

    // Used by the reporting team to generate reports
    public string GeneratePerformanceReport()
    {
        return $&quot;Performance report for {Name} in {Department}&quot;;
    }

    // Used by the DBA team to persist data
    public void SaveToDatabase(string connectionString)
    {
        // ADO.NET or EF Core logic to save the employee
        Console.WriteLine($&quot;Saving {Name} to database...&quot;);
    }
}
</code></pre>
<p>This class has three reasons to change:</p>
<ol>
<li>The HR department changes the payroll calculation rules.</li>
<li>The reporting team changes the report format.</li>
<li>The DBA team changes the database schema or persistence strategy.</li>
</ol>
<p>Each of these changes serves a different actor. If the reporting team asks for a new column in the performance report, you modify the <code>Employee</code> class — and now the payroll calculation code and the persistence code must be recompiled, retested, and redeployed, even though they did not change.</p>
<h3 id="applying-srp">Applying SRP</h3>
<p>The fix is to separate these responsibilities into distinct classes:</p>
<pre><code class="language-csharp">// The Employee class is now a pure data model
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; } = &quot;&quot;;
    public decimal Salary { get; set; }
    public string Department { get; set; } = &quot;&quot;;
}

// Responsibility: payroll calculations (serves the HR actor)
public class PayrollCalculator
{
    public decimal CalculatePay(Employee employee)
    {
        // All the complex payroll logic lives here
        var basePay = employee.Salary;
        var deductions = basePay * 0.08m; // example: 8% deductions
        return basePay - deductions;
    }
}

// Responsibility: generating reports (serves the reporting actor)
public class PerformanceReportGenerator
{
    public string Generate(Employee employee)
    {
        var sb = new StringBuilder();
        sb.AppendLine($&quot;Performance Report: {employee.Name}&quot;);
        sb.AppendLine($&quot;Department: {employee.Department}&quot;);
        sb.AppendLine($&quot;Generated: {DateTime.UtcNow:yyyy-MM-dd}&quot;);
        return sb.ToString();
    }
}

// Responsibility: persistence (serves the DBA/infrastructure actor)
public class EmployeeRepository
{
    private readonly string _connectionString;

    public EmployeeRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Save(Employee employee)
    {
        // EF Core, Dapper, ADO.NET — whatever the persistence strategy is
        Console.WriteLine($&quot;Saving employee {employee.Id} to database...&quot;);
    }

    public Employee? GetById(int id)
    {
        // Retrieve from database
        Console.WriteLine($&quot;Loading employee {id} from database...&quot;);
        return null; // simplified
    }
}
</code></pre>
<p>Now each class has one reason to change. The <code>PayrollCalculator</code> changes only when payroll rules change. The <code>PerformanceReportGenerator</code> changes only when the report format changes. The <code>EmployeeRepository</code> changes only when the persistence strategy changes. The <code>Employee</code> class itself changes only when the data model changes.</p>
<h3 id="srp-in-asp.net-and-blazor">SRP in ASP.NET and Blazor</h3>
<p>In the ASP.NET world, SRP shows up frequently in controller and service design. A common violation is the &quot;god controller&quot; that handles authentication, business logic, validation, and response formatting all in one class:</p>
<pre><code class="language-csharp">// Violation: this controller does too much
[ApiController]
[Route(&quot;api/[controller]&quot;)]
public class OrdersController : ControllerBase
{
    private readonly DbContext _db;

    public OrdersController(DbContext db) =&gt; _db = db;

    [HttpPost]
    public async Task&lt;IActionResult&gt; CreateOrder(CreateOrderRequest request)
    {
        // Validation logic (should be in a validator)
        if (string.IsNullOrEmpty(request.CustomerEmail))
            return BadRequest(&quot;Email is required&quot;);

        // Business rules (should be in a service)
        var discount = request.Total &gt; 100 ? 0.1m : 0m;
        var finalTotal = request.Total * (1 - discount);

        // Persistence (should be in a repository)
        var order = new Order { Total = finalTotal, Email = request.CustomerEmail };
        _db.Orders.Add(order);
        await _db.SaveChangesAsync();

        // Notification (should be in a notification service)
        await SendEmailAsync(request.CustomerEmail, &quot;Order Confirmed&quot;, $&quot;Total: {finalTotal}&quot;);

        return Ok(order);
    }

    private Task SendEmailAsync(string to, string subject, string body)
    {
        Console.WriteLine($&quot;Sending email to {to}: {subject}&quot;);
        return Task.CompletedTask;
    }
}
</code></pre>
<p>A cleaner approach separates each concern:</p>
<pre><code class="language-csharp">// The controller only orchestrates — it delegates to specialized services
[ApiController]
[Route(&quot;api/[controller]&quot;)]
public class OrdersController : ControllerBase
{
    private readonly IOrderService _orderService;

    public OrdersController(IOrderService orderService) =&gt; _orderService = orderService;

    [HttpPost]
    public async Task&lt;IActionResult&gt; CreateOrder(CreateOrderRequest request)
    {
        var result = await _orderService.PlaceOrderAsync(request);
        return result.IsSuccess ? Ok(result.Order) : BadRequest(result.Error);
    }
}

// The service handles orchestration of business rules
public class OrderService : IOrderService
{
    private readonly IOrderRepository _repository;
    private readonly IDiscountCalculator _discountCalculator;
    private readonly INotificationService _notificationService;

    public OrderService(
        IOrderRepository repository,
        IDiscountCalculator discountCalculator,
        INotificationService notificationService)
    {
        _repository = repository;
        _discountCalculator = discountCalculator;
        _notificationService = notificationService;
    }

    public async Task&lt;OrderResult&gt; PlaceOrderAsync(CreateOrderRequest request)
    {
        var discount = _discountCalculator.Calculate(request.Total);
        var finalTotal = request.Total * (1 - discount);

        var order = new Order { Total = finalTotal, Email = request.CustomerEmail };
        await _repository.SaveAsync(order);

        await _notificationService.SendOrderConfirmationAsync(order);

        return new OrderResult { IsSuccess = true, Order = order };
    }
}
</code></pre>
<h3 id="common-srp-mistakes">Common SRP Mistakes</h3>
<p><strong>Mistake 1: Taking it too far.</strong> Creating a class for every single method leads to an explosion of tiny classes that are individually simple but collectively hard to navigate. The principle is about cohesion — grouping things that change together — not about minimizing the number of methods per class.</p>
<p><strong>Mistake 2: Confusing &quot;one thing&quot; with &quot;one responsibility.&quot;</strong> A <code>UserValidator</code> class might have methods for validating email format, password strength, and username length. These are all part of one responsibility: validation of user input. They change for the same reason (validation rules change) and serve the same actor. This is a single responsibility, even though it involves multiple methods.</p>
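<p>A minimal sketch of that idea — the specific thresholds are invented for illustration, not recommendations:</p>
<pre><code class="language-csharp">// Requires System.Linq for Any
public class UserValidator
{
    public bool IsValidEmail(string email) =&gt;
        !string.IsNullOrWhiteSpace(email) &amp;&amp; email.Contains('@');

    public bool IsStrongPassword(string password) =&gt;
        password.Length &gt;= 12 &amp;&amp; password.Any(char.IsDigit);

    public bool IsValidUsername(string username) =&gt;
        username.Length is &gt;= 3 and &lt;= 30;
}
</code></pre>
<p>All three methods change when validation rules change — one actor, one responsibility, three methods.</p>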
<p><strong>Mistake 3: Ignoring SRP in Blazor components.</strong> A Blazor component that fetches data, transforms it, renders it, and handles multiple types of user interaction is doing too much. Extract data fetching into services, transformation into utility classes, and complex interaction logic into separate components.</p>
<h2 id="part-3-the-openclosed-principle-ocp">Part 3: The Open/Closed Principle (OCP)</h2>
<h3 id="the-definition-1">The Definition</h3>
<p>Bertrand Meyer first articulated this principle in his 1988 book <em>Object-Oriented Software Construction</em>:</p>
<blockquote>
<p>Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.</p>
</blockquote>
<p>&quot;Open for extension&quot; means you can add new behavior. &quot;Closed for modification&quot; means you do not need to change existing, working code to add that new behavior.</p>
<p>Meyer's original interpretation relied on implementation inheritance: you extend a class by inheriting from it and overriding methods, without modifying the base class. Robert C. Martin later reinterpreted the principle to emphasize polymorphism through abstractions (interfaces and abstract classes) rather than concrete inheritance.</p>
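<p>Meyer's inheritance-based reading can be sketched in a few lines (the class names here are illustrative): the base class stays untouched, and new behavior arrives by overriding a <code>virtual</code> member.</p>
<pre><code class="language-csharp">// The base class is &quot;closed&quot;: we never edit it to add the timestamp feature.
public class ReportGenerator
{
    public virtual string Render(string body) =&gt; $&quot;REPORT\n{body}&quot;;
}

// The extension is a new class that overrides, rather than a modification.
public class TimestampedReportGenerator : ReportGenerator
{
    public override string Render(string body) =&gt;
        $&quot;{DateTime.UtcNow:u}\n{base.Render(body)}&quot;;
}
</code></pre>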
<h3 id="why-it-matters">Why It Matters</h3>
<p>Every time you modify existing code, you risk introducing bugs into functionality that was previously working. If you can add new features by writing new code rather than changing old code, you dramatically reduce the surface area for regressions.</p>
<p>Consider a payment processing system:</p>
<pre><code class="language-csharp">// Violation: adding a new payment method requires modifying this class
public class PaymentProcessor
{
    public void ProcessPayment(string paymentType, decimal amount)
    {
        if (paymentType == &quot;CreditCard&quot;)
        {
            Console.WriteLine($&quot;Processing credit card payment of {amount:C}&quot;);
            // Credit card specific logic
        }
        else if (paymentType == &quot;PayPal&quot;)
        {
            Console.WriteLine($&quot;Processing PayPal payment of {amount:C}&quot;);
            // PayPal specific logic
        }
        else if (paymentType == &quot;BankTransfer&quot;)
        {
            Console.WriteLine($&quot;Processing bank transfer of {amount:C}&quot;);
            // Bank transfer specific logic
        }
        else
        {
            throw new ArgumentException($&quot;Unknown payment type: {paymentType}&quot;);
        }
    }
}
</code></pre>
<p>This class violates OCP because every time the business adds a new payment method — cryptocurrency, Apple Pay, buy-now-pay-later — you must open this class and add another <code>else if</code> branch. Each modification risks breaking the existing branches.</p>
<h3 id="applying-ocp-with-polymorphism">Applying OCP with Polymorphism</h3>
<p>The standard solution is to define an abstraction and let each payment method implement it:</p>
<pre><code class="language-csharp">public interface IPaymentMethod
{
    string Name { get; }
    Task&lt;PaymentResult&gt; ProcessAsync(decimal amount);
}

public class CreditCardPayment : IPaymentMethod
{
    public string Name =&gt; &quot;CreditCard&quot;;

    public Task&lt;PaymentResult&gt; ProcessAsync(decimal amount)
    {
        Console.WriteLine($&quot;Charging credit card: {amount:C}&quot;);
        // Real implementation: call Stripe, Square, etc.
        return Task.FromResult(new PaymentResult { Success = true, TransactionId = Guid.NewGuid().ToString() });
    }
}

public class PayPalPayment : IPaymentMethod
{
    public string Name =&gt; &quot;PayPal&quot;;

    public Task&lt;PaymentResult&gt; ProcessAsync(decimal amount)
    {
        Console.WriteLine($&quot;Processing PayPal payment: {amount:C}&quot;);
        return Task.FromResult(new PaymentResult { Success = true, TransactionId = Guid.NewGuid().ToString() });
    }
}

public class BankTransferPayment : IPaymentMethod
{
    public string Name =&gt; &quot;BankTransfer&quot;;

    public Task&lt;PaymentResult&gt; ProcessAsync(decimal amount)
    {
        Console.WriteLine($&quot;Initiating bank transfer: {amount:C}&quot;);
        return Task.FromResult(new PaymentResult { Success = true, TransactionId = Guid.NewGuid().ToString() });
    }
}

public record PaymentResult
{
    public bool Success { get; init; }
    public string TransactionId { get; init; } = &quot;&quot;;
    public string? ErrorMessage { get; init; }
}
</code></pre>
<p>Now the processor is closed for modification:</p>
<pre><code class="language-csharp">public class PaymentProcessor
{
    private readonly IEnumerable&lt;IPaymentMethod&gt; _paymentMethods;

    public PaymentProcessor(IEnumerable&lt;IPaymentMethod&gt; paymentMethods)
    {
        _paymentMethods = paymentMethods;
    }

    public async Task&lt;PaymentResult&gt; ProcessPaymentAsync(string paymentType, decimal amount)
    {
        var method = _paymentMethods.FirstOrDefault(m =&gt;
            m.Name.Equals(paymentType, StringComparison.OrdinalIgnoreCase));

        if (method is null)
            return new PaymentResult { Success = false, ErrorMessage = $&quot;Unknown payment type: {paymentType}&quot; };

        return await method.ProcessAsync(amount);
    }
}
</code></pre>
<p>When a new payment method is needed — say, cryptocurrency — you simply write a new class:</p>
<pre><code class="language-csharp">public class CryptoPayment : IPaymentMethod
{
    public string Name =&gt; &quot;Crypto&quot;;

    public Task&lt;PaymentResult&gt; ProcessAsync(decimal amount)
    {
        Console.WriteLine($&quot;Processing crypto payment: {amount:C}&quot;);
        return Task.FromResult(new PaymentResult { Success = true, TransactionId = Guid.NewGuid().ToString() });
    }
}
</code></pre>
<p>And register it in your DI container:</p>
<pre><code class="language-csharp">builder.Services.AddTransient&lt;IPaymentMethod, CreditCardPayment&gt;();
builder.Services.AddTransient&lt;IPaymentMethod, PayPalPayment&gt;();
builder.Services.AddTransient&lt;IPaymentMethod, BankTransferPayment&gt;();
builder.Services.AddTransient&lt;IPaymentMethod, CryptoPayment&gt;(); // new — no existing code changed
</code></pre>
<p>The <code>PaymentProcessor</code> class was never modified. The existing payment method classes were never modified. You added new behavior solely by writing new code.</p>
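<p>To see the pieces together, here is a minimal usage sketch that wires the methods by hand instead of through DI (the amount and payment-type string are illustrative):</p>
<pre><code class="language-csharp">var processor = new PaymentProcessor(new IPaymentMethod[]
{
    new CreditCardPayment(),
    new PayPalPayment(),
    new CryptoPayment()
});

// The lookup is case-insensitive, so &quot;crypto&quot; resolves to CryptoPayment.
var result = await processor.ProcessPaymentAsync(&quot;crypto&quot;, 49.99m);
Console.WriteLine(result.Success
    ? $&quot;Paid, transaction {result.TransactionId}&quot;
    : result.ErrorMessage);
</code></pre>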
<h3 id="ocp-with-the-strategy-pattern">OCP with the Strategy Pattern</h3>
<p>The Strategy pattern is one of the most natural ways to apply OCP. Here is a sorting example that allows pluggable comparison strategies:</p>
<pre><code class="language-csharp">public interface ISortStrategy&lt;T&gt;
{
    IEnumerable&lt;T&gt; Sort(IEnumerable&lt;T&gt; items);
}

public class AlphabeticalSortStrategy : ISortStrategy&lt;string&gt;
{
    public IEnumerable&lt;string&gt; Sort(IEnumerable&lt;string&gt; items) =&gt;
        items.OrderBy(x =&gt; x, StringComparer.OrdinalIgnoreCase);
}

public class LengthSortStrategy : ISortStrategy&lt;string&gt;
{
    public IEnumerable&lt;string&gt; Sort(IEnumerable&lt;string&gt; items) =&gt;
        items.OrderBy(x =&gt; x.Length);
}

public class ReverseSortStrategy : ISortStrategy&lt;string&gt;
{
    public IEnumerable&lt;string&gt; Sort(IEnumerable&lt;string&gt; items) =&gt;
        items.OrderByDescending(x =&gt; x, StringComparer.OrdinalIgnoreCase);
}

// The sorter is closed for modification — new strategies can be added without changing this class
public class ItemSorter&lt;T&gt;
{
    private readonly ISortStrategy&lt;T&gt; _strategy;

    public ItemSorter(ISortStrategy&lt;T&gt; strategy)
    {
        _strategy = strategy;
    }

    public IEnumerable&lt;T&gt; Sort(IEnumerable&lt;T&gt; items) =&gt; _strategy.Sort(items);
}
</code></pre>
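<p>Usage is then a matter of picking a strategy at the call site, for example:</p>
<pre><code class="language-csharp">var items = new[] { &quot;pear&quot;, &quot;fig&quot;, &quot;banana&quot; };

var byLength = new ItemSorter&lt;string&gt;(new LengthSortStrategy());
Console.WriteLine(string.Join(&quot;, &quot;, byLength.Sort(items)));     // fig, pear, banana

var alphabetical = new ItemSorter&lt;string&gt;(new AlphabeticalSortStrategy());
Console.WriteLine(string.Join(&quot;, &quot;, alphabetical.Sort(items))); // banana, fig, pear
</code></pre>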
<h3 id="ocp-in-asp.net-middleware">OCP in ASP.NET Middleware</h3>
<p>ASP.NET Core's middleware pipeline is a beautiful example of OCP in action. The pipeline itself is closed for modification — you do not change the framework source code. But it is open for extension — you add new middleware components:</p>
<pre><code class="language-csharp">using System.Diagnostics; // Stopwatch is not in ASP.NET Core's implicit usings

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Each of these extends the pipeline without modifying existing middleware
app.UseHttpsRedirection();
app.UseAuthentication();
app.UseAuthorization();
app.UseRateLimiter(); // requires a matching builder.Services.AddRateLimiter(...) registration

// Your custom middleware — extends the pipeline, modifies nothing
app.Use(async (context, next) =&gt;
{
    var stopwatch = Stopwatch.StartNew();

    // Register the header callback before invoking the rest of the pipeline;
    // headers cannot be changed once the response body has started.
    context.Response.OnStarting(() =&gt;
    {
        stopwatch.Stop();
        context.Response.Headers[&quot;X-Response-Time&quot;] = $&quot;{stopwatch.ElapsedMilliseconds}ms&quot;;
        return Task.CompletedTask;
    });

    await next(context);
});

app.MapControllers();
app.Run();
</code></pre>
<h3 id="common-ocp-mistakes">Common OCP Mistakes</h3>
<p><strong>Mistake 1: Premature abstraction.</strong> Do not create interfaces and abstract classes for everything &quot;just in case&quot; you might need to extend it later. Apply OCP when you have evidence that a particular axis of change is real or likely. The first time you need a second implementation is usually the right time to extract an interface.</p>
<p><strong>Mistake 2: Thinking OCP means you can never edit a file.</strong> The principle is about design, not a literal prohibition on modifying source files. Bug fixes, refactoring for clarity, and performance improvements are all valid reasons to modify existing code. OCP is about designing your system so that adding new features does not require modifying code that already works.</p>
<p><strong>Mistake 3: Switch statements are not always violations.</strong> A switch statement over a small, stable set of values (like days of the week, or a finite set of known enum values) is not necessarily an OCP violation. The principle applies when the set of cases is expected to grow over time.</p>
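<p>For example, a switch over <code>System.DayOfWeek</code> is stable by construction and fine to leave as a switch:</p>
<pre><code class="language-csharp">// DayOfWeek has exactly seven values and will never grow,
// so this switch is not an OCP violation.
public static bool IsWeekend(DayOfWeek day) =&gt; day switch
{
    DayOfWeek.Saturday or DayOfWeek.Sunday =&gt; true,
    _ =&gt; false
};
</code></pre>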
<h2 id="part-4-the-liskov-substitution-principle-lsp">Part 4: The Liskov Substitution Principle (LSP)</h2>
<h3 id="the-definition-2">The Definition</h3>
<p>Barbara Liskov introduced this principle in her 1987 keynote <em>Data Abstraction and Hierarchy</em>. In a 1994 paper with Jeannette Wing, she formalized it as:</p>
<blockquote>
<p>Let φ(x) be a property provable about objects x of type T. Then φ(y) should be true for objects y of type S where S is a subtype of T.</p>
</blockquote>
<p>Robert C. Martin restated it more accessibly:</p>
<blockquote>
<p>Subtypes must be substitutable for their base types.</p>
</blockquote>
<p>In practical terms: if your code works with a reference to a base class or interface, it should continue to work correctly when you substitute any derived class or implementation — without the calling code needing to know or care about the specific subtype.</p>
<h3 id="the-classic-violation-rectangle-and-square">The Classic Violation: Rectangle and Square</h3>
<p>This is the most famous example of an LSP violation. In geometry, a square &quot;is a&quot; rectangle — it is a rectangle with equal sides. So you might model this with inheritance:</p>
<pre><code class="language-csharp">public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }

    public int CalculateArea() =&gt; Width * Height;
}

public class Square : Rectangle
{
    public override int Width
    {
        get =&gt; base.Width;
        set
        {
            base.Width = value;
            base.Height = value; // Keep sides equal
        }
    }

    public override int Height
    {
        get =&gt; base.Height;
        set
        {
            base.Height = value;
            base.Width = value; // Keep sides equal
        }
    }
}
</code></pre>
<p>This compiles and even seems to work. But consider a function that operates on rectangles:</p>
<pre><code class="language-csharp">public void ResizeRectangle(Rectangle rect)
{
    rect.Width = 10;
    rect.Height = 5;

    // For any Rectangle, we expect the area to be 10 * 5 = 50
    Debug.Assert(rect.CalculateArea() == 50);
}
</code></pre>
<p>Pass a <code>Rectangle</code> and the assertion holds. Pass a <code>Square</code> and it fails — because setting <code>Height = 5</code> also sets <code>Width = 5</code>, so the area is 25, not 50.</p>
<p>The <code>Square</code> class cannot be substituted for <code>Rectangle</code> without breaking the program's correctness. This is an LSP violation.</p>
<h3 id="the-fix">The Fix</h3>
<p>The solution is to rethink the inheritance hierarchy. In terms of behavior, a square is not a rectangle because it does not honor the rectangle's contract that width and height can be set independently. A better design uses composition or separate types:</p>
<pre><code class="language-csharp">public interface IShape
{
    int CalculateArea();
}

public class Rectangle : IShape
{
    public int Width { get; }
    public int Height { get; }

    public Rectangle(int width, int height)
    {
        Width = width;
        Height = height;
    }

    public int CalculateArea() =&gt; Width * Height;
}

public class Square : IShape
{
    public int Side { get; }

    public Square(int side)
    {
        Side = side;
    }

    public int CalculateArea() =&gt; Side * Side;
}
</code></pre>
<p>Now <code>Rectangle</code> and <code>Square</code> are siblings under <code>IShape</code>, not parent and child. No code that works with <code>IShape</code> will be surprised by either implementation because neither makes promises it cannot keep.</p>
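<p>Any code written against <code>IShape</code> now works with either type without surprises:</p>
<pre><code class="language-csharp">var shapes = new List&lt;IShape&gt; { new Rectangle(10, 5), new Square(5) };

foreach (var shape in shapes)
    Console.WriteLine(shape.CalculateArea()); // 50, then 25: each type keeps its own promise
</code></pre>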
<h3 id="lsp-and-design-by-contract">LSP and Design by Contract</h3>
<p>The Liskov Substitution Principle is closely related to Bertrand Meyer's Design by Contract, which he introduced in his 1988 book <em>Object-Oriented Software Construction</em> and implemented in the Eiffel language. The rules are:</p>
<ol>
<li><strong>Preconditions cannot be strengthened in a subtype.</strong> If the base class accepts any positive integer, the subtype cannot demand only even numbers.</li>
<li><strong>Postconditions cannot be weakened in a subtype.</strong> If the base class guarantees the result is non-null, the subtype cannot return null.</li>
<li><strong>Invariants must be preserved.</strong> If the base class guarantees that a balance is never negative, the subtype must maintain that guarantee.</li>
</ol>
<p>Here is a practical C# example:</p>
<pre><code class="language-csharp">public abstract class Account
{
    public decimal Balance { get; protected set; }

    // Precondition: amount &gt; 0
    // Postcondition: Balance decreases by amount
    // Invariant: Balance &gt;= 0
    public virtual void Withdraw(decimal amount)
    {
        if (amount &lt;= 0)
            throw new ArgumentException(&quot;Amount must be positive&quot;);

        if (Balance - amount &lt; 0)
            throw new InvalidOperationException(&quot;Insufficient funds&quot;);

        Balance -= amount;
    }
}

public class SavingsAccount : Account
{
    // CORRECT (with a caveat): the minimum-balance rule is a stricter funds
    // check, not a new postcondition. It fails with the same
    // InvalidOperationException the base contract already documents for
    // insufficient funds, so callers prepared for that failure mode are
    // not surprised.
    public override void Withdraw(decimal amount)
    {
        if (amount &lt;= 0)
            throw new ArgumentException(&quot;Amount must be positive&quot;);

        if (Balance - amount &lt; 100) // Minimum balance of 100
            throw new InvalidOperationException(&quot;Must maintain minimum balance of 100&quot;);

        Balance -= amount;
    }
}

public class FixedDepositAccount : Account
{
    // VIOLATION: This strengthens the precondition by adding a maturity date check.
    // Code that works with Account.Withdraw() will be surprised when this throws
    // for a reason it did not expect.
    public DateTime MaturityDate { get; set; }

    public override void Withdraw(decimal amount)
    {
        if (DateTime.UtcNow &lt; MaturityDate)
            throw new InvalidOperationException(&quot;Cannot withdraw before maturity&quot;);

        base.Withdraw(amount);
    }
}
</code></pre>
<p>The <code>FixedDepositAccount</code> violates LSP because it introduces a new precondition — the current date must be past the maturity date — that callers working with the base <code>Account</code> type do not expect. A better design would either not inherit from <code>Account</code> or use a separate interface that explicitly models the maturity constraint.</p>
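<p>One sketch of that second option (the interface and helper names here are illustrative): expose the maturity constraint as a capability callers can check for, instead of leaving it as an undocumented precondition.</p>
<pre><code class="language-csharp">public interface IMaturingAccount
{
    DateTime MaturityDate { get; }
}

// Callers who might hold a fixed deposit ask for the capability explicitly.
public static void TryWithdraw(Account account, decimal amount)
{
    if (account is IMaturingAccount maturing &amp;&amp; DateTime.UtcNow &lt; maturing.MaturityDate)
    {
        Console.WriteLine(&quot;Funds are locked until maturity&quot;);
        return;
    }

    account.Withdraw(amount);
}
</code></pre>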
<h3 id="real-world-lsp-violations-in.net">Real-World LSP Violations in .NET</h3>
<p><strong>Violating LSP with collections:</strong> A common trap is returning a <code>ReadOnlyCollection&lt;T&gt;</code> from a member declared as <code>IList&lt;T&gt;</code>. The <code>IList&lt;T&gt;</code> interface includes <code>Add</code>, <code>Remove</code>, and <code>Insert</code> methods, but <code>ReadOnlyCollection&lt;T&gt;</code> throws <code>NotSupportedException</code> when you call them. Code that expects an <code>IList&lt;T&gt;</code> to support mutation will break.</p>
<pre><code class="language-csharp">// Violation: IList&lt;T&gt; promises mutation, but this implementation does not deliver
public class UserService
{
    private readonly List&lt;string&gt; _roles = [&quot;admin&quot;, &quot;editor&quot;, &quot;viewer&quot;];

    // This return type promises mutability but delivers read-only
    public IList&lt;string&gt; GetRoles() =&gt; _roles.AsReadOnly();
}

// Better: use a type that accurately describes the contract
public class UserServiceFixed
{
    private readonly List&lt;string&gt; _roles = [&quot;admin&quot;, &quot;editor&quot;, &quot;viewer&quot;];

    public IReadOnlyList&lt;string&gt; GetRoles() =&gt; _roles.AsReadOnly();
}
</code></pre>
<p><strong>Violating LSP with exceptions:</strong> If a base class method does not document that it throws a specific exception, a derived class should not introduce that exception. Callers who are not prepared to catch it will be surprised.</p>
<pre><code class="language-csharp">public interface IFileReader
{
    string ReadAll(string path);
}

// Good: throws IOException, which is expected for file operations
public class LocalFileReader : IFileReader
{
    public string ReadAll(string path) =&gt; File.ReadAllText(path);
}

// Problematic: throws HttpRequestException, which callers of IFileReader do not expect
public class RemoteFileReader : IFileReader
{
    private readonly HttpClient _http;

    public RemoteFileReader(HttpClient http) =&gt; _http = http;

    public string ReadAll(string path)
    {
        // This can throw HttpRequestException — a surprise for callers expecting file I/O errors
        return _http.GetStringAsync(path).GetAwaiter().GetResult();
    }
}
</code></pre>
<p>The fix is to catch the transport-specific exceptions and wrap them in something the caller expects:</p>
<pre><code class="language-csharp">public class RemoteFileReaderFixed : IFileReader
{
    private readonly HttpClient _http;

    public RemoteFileReaderFixed(HttpClient http) =&gt; _http = http;

    public string ReadAll(string path)
    {
        try
        {
            return _http.GetStringAsync(path).GetAwaiter().GetResult();
        }
        catch (HttpRequestException ex)
        {
            throw new IOException($&quot;Failed to read remote file: {path}&quot;, ex);
        }
    }
}
</code></pre>
<h3 id="how-to-test-for-lsp-compliance">How to Test for LSP Compliance</h3>
<p>Write tests that exercise the base type contract, then run those same tests against every subtype:</p>
<pre><code class="language-csharp">public abstract class ShapeTests&lt;T&gt; where T : IShape
{
    protected abstract T CreateShape();

    [Fact]
    public void Area_ShouldBeNonNegative()
    {
        var shape = CreateShape();
        Assert.True(shape.CalculateArea() &gt;= 0);
    }
}

public class RectangleTests : ShapeTests&lt;Rectangle&gt;
{
    protected override Rectangle CreateShape() =&gt; new(5, 3);

    [Fact]
    public void Area_ShouldBeWidthTimesHeight()
    {
        var rect = new Rectangle(5, 3);
        Assert.Equal(15, rect.CalculateArea());
    }
}

public class SquareTests : ShapeTests&lt;Square&gt;
{
    protected override Square CreateShape() =&gt; new(4);

    [Fact]
    public void Area_ShouldBeSideSquared()
    {
        var square = new Square(4);
        Assert.Equal(16, square.CalculateArea());
    }
}
</code></pre>
<p>If any derived class fails a test written for the base type, you have an LSP violation.</p>
<h2 id="part-5-the-interface-segregation-principle-isp">Part 5: The Interface Segregation Principle (ISP)</h2>
<h3 id="the-definition-3">The Definition</h3>
<blockquote>
<p>Clients should not be forced to depend upon interfaces that they do not use.</p>
</blockquote>
<p>Robert C. Martin developed this principle while consulting for Xerox. The Xerox printer system had a single &quot;Job&quot; interface with methods for printing, stapling, collating, faxing, and scanning. Every client — even one that only needed to print — was forced to depend on the entire interface. Changes to the faxing methods forced recompilation of printing clients, even though they had nothing to do with faxing.</p>
<h3 id="a-violation">A Violation</h3>
<p>Consider a worker interface in a factory management system:</p>
<pre><code class="language-csharp">public interface IWorker
{
    void Work();
    void Eat();
    void Sleep();
    void AttendMeeting();
    void WriteReport();
}

public class HumanWorker : IWorker
{
    public void Work() =&gt; Console.WriteLine(&quot;Working...&quot;);
    public void Eat() =&gt; Console.WriteLine(&quot;Eating lunch...&quot;);
    public void Sleep() =&gt; Console.WriteLine(&quot;Sleeping...&quot;);
    public void AttendMeeting() =&gt; Console.WriteLine(&quot;In a meeting...&quot;);
    public void WriteReport() =&gt; Console.WriteLine(&quot;Writing report...&quot;);
}

public class RobotWorker : IWorker
{
    public void Work() =&gt; Console.WriteLine(&quot;Robot working...&quot;);

    // Robots do not eat
    public void Eat() =&gt; throw new NotSupportedException(&quot;Robots don't eat&quot;);

    // Robots do not sleep
    public void Sleep() =&gt; throw new NotSupportedException(&quot;Robots don't sleep&quot;);

    // Robots do not attend meetings
    public void AttendMeeting() =&gt; throw new NotSupportedException(&quot;Robots don't attend meetings&quot;);

    // Robots do not write reports
    public void WriteReport() =&gt; throw new NotSupportedException(&quot;Robots don't write reports&quot;);
}
</code></pre>
<p>The <code>RobotWorker</code> class is forced to implement five methods, four of which it does not support. This is an ISP violation — and it is also an LSP violation, since substituting a <code>RobotWorker</code> for a <code>HumanWorker</code> will throw exceptions that callers do not expect.</p>
<h3 id="applying-isp">Applying ISP</h3>
<p>Split the interface into smaller, focused interfaces that each describe a single capability:</p>
<pre><code class="language-csharp">public interface IWorkable
{
    void Work();
}

public interface IFeedable
{
    void Eat();
}

public interface ISleepable
{
    void Sleep();
}

public interface IMeetingAttendee
{
    void AttendMeeting();
}

public interface IReportWriter
{
    void WriteReport();
}

public class HumanWorker : IWorkable, IFeedable, ISleepable, IMeetingAttendee, IReportWriter
{
    public void Work() =&gt; Console.WriteLine(&quot;Working...&quot;);
    public void Eat() =&gt; Console.WriteLine(&quot;Eating lunch...&quot;);
    public void Sleep() =&gt; Console.WriteLine(&quot;Sleeping...&quot;);
    public void AttendMeeting() =&gt; Console.WriteLine(&quot;In a meeting...&quot;);
    public void WriteReport() =&gt; Console.WriteLine(&quot;Writing report...&quot;);
}

public class RobotWorker : IWorkable
{
    public void Work() =&gt; Console.WriteLine(&quot;Robot working efficiently...&quot;);
}
</code></pre>
<p>Now <code>RobotWorker</code> only implements what it actually supports. Code that only needs a worker can accept <code>IWorkable</code>. Code that needs meeting attendance can accept <code>IMeetingAttendee</code>. No client is forced to depend on capabilities it does not use.</p>
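<p>A consumer then declares only the capability it actually uses, for example:</p>
<pre><code class="language-csharp">public class Shift
{
    // Depends only on IWorkable: humans, robots, and any future worker type qualify.
    public void Run(IEnumerable&lt;IWorkable&gt; crew)
    {
        foreach (var worker in crew)
            worker.Work();
    }
}
</code></pre>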
<h3 id="a-realistic.net-example-repository-interfaces">A Realistic .NET Example: Repository Interfaces</h3>
<p>A common ISP violation in .NET projects is the &quot;god repository&quot; interface:</p>
<pre><code class="language-csharp">// Violation: every consumer depends on all methods, even if they only need one
public interface IRepository&lt;T&gt;
{
    Task&lt;T?&gt; GetByIdAsync(int id);
    Task&lt;IReadOnlyList&lt;T&gt;&gt; GetAllAsync();
    Task&lt;IReadOnlyList&lt;T&gt;&gt; FindAsync(Expression&lt;Func&lt;T, bool&gt;&gt; predicate);
    Task AddAsync(T entity);
    Task UpdateAsync(T entity);
    Task DeleteAsync(int id);
    Task&lt;int&gt; CountAsync();
    Task&lt;bool&gt; ExistsAsync(int id);
    Task BulkInsertAsync(IEnumerable&lt;T&gt; entities);
    Task ExecuteRawSqlAsync(string sql);
}
</code></pre>
<p>A read-only reporting service should not need to depend on <code>AddAsync</code>, <code>DeleteAsync</code>, or <code>ExecuteRawSqlAsync</code>. Split it:</p>
<pre><code class="language-csharp">public interface IReadRepository&lt;T&gt;
{
    Task&lt;T?&gt; GetByIdAsync(int id);
    Task&lt;IReadOnlyList&lt;T&gt;&gt; GetAllAsync();
    Task&lt;IReadOnlyList&lt;T&gt;&gt; FindAsync(Expression&lt;Func&lt;T, bool&gt;&gt; predicate);
    Task&lt;int&gt; CountAsync();
    Task&lt;bool&gt; ExistsAsync(int id);
}

public interface IWriteRepository&lt;T&gt;
{
    Task AddAsync(T entity);
    Task UpdateAsync(T entity);
    Task DeleteAsync(int id);
}

public interface IBulkRepository&lt;T&gt;
{
    Task BulkInsertAsync(IEnumerable&lt;T&gt; entities);
}

public interface IRawSqlRepository
{
    Task ExecuteRawSqlAsync(string sql);
}

// The full repository composes all the interfaces
public class ProductRepository : IReadRepository&lt;Product&gt;, IWriteRepository&lt;Product&gt;, IBulkRepository&lt;Product&gt;
{
    // Implementation using EF Core, Dapper, or raw ADO.NET
    public Task&lt;Product?&gt; GetByIdAsync(int id) =&gt; throw new NotImplementedException();
    public Task&lt;IReadOnlyList&lt;Product&gt;&gt; GetAllAsync() =&gt; throw new NotImplementedException();
    public Task&lt;IReadOnlyList&lt;Product&gt;&gt; FindAsync(Expression&lt;Func&lt;Product, bool&gt;&gt; predicate) =&gt; throw new NotImplementedException();
    public Task&lt;int&gt; CountAsync() =&gt; throw new NotImplementedException();
    public Task&lt;bool&gt; ExistsAsync(int id) =&gt; throw new NotImplementedException();
    public Task AddAsync(Product entity) =&gt; throw new NotImplementedException();
    public Task UpdateAsync(Product entity) =&gt; throw new NotImplementedException();
    public Task DeleteAsync(int id) =&gt; throw new NotImplementedException();
    public Task BulkInsertAsync(IEnumerable&lt;Product&gt; entities) =&gt; throw new NotImplementedException();
}

// A reporting service only depends on what it needs
public class ProductReportService
{
    private readonly IReadRepository&lt;Product&gt; _repository;

    public ProductReportService(IReadRepository&lt;Product&gt; repository)
    {
        _repository = repository;
    }

    public async Task&lt;int&gt; GetProductCountAsync()
    {
        return await _repository.CountAsync();
    }
}
</code></pre>
<h3 id="isp-in-blazor-components">ISP in Blazor Components</h3>
<p>ISP also applies to the parameters and services that Blazor components depend on. A component that accepts a massive parameter object when it only needs a few fields is violating ISP at the component level:</p>
<pre><code class="language-csharp">// Violation: the component depends on the entire Order object
// but only displays the customer name and total
@code {
    [Parameter] public Order FullOrder { get; set; } = default!;
}

&lt;p&gt;Customer: @FullOrder.Customer.FullName&lt;/p&gt;
&lt;p&gt;Total: @FullOrder.Total.ToString(&quot;C&quot;)&lt;/p&gt;
</code></pre>
<p>Better: pass only what the component needs, or define a focused view model:</p>
<pre><code class="language-csharp">@code {
    [Parameter] public string CustomerName { get; set; } = &quot;&quot;;
    [Parameter] public decimal Total { get; set; }
}

&lt;p&gt;Customer: @CustomerName&lt;/p&gt;
&lt;p&gt;Total: @Total.ToString(&quot;C&quot;)&lt;/p&gt;
</code></pre>
<h3 id="common-isp-mistakes">Common ISP Mistakes</h3>
<p><strong>Mistake 1: Going too granular.</strong> An interface with a single method is sometimes appropriate (think <code>IDisposable</code>, <code>IComparable&lt;T&gt;</code>), but splitting every interface down to one method per interface can make the system harder to understand. Group methods that are almost always used together.</p>
<p><strong>Mistake 2: Marker interfaces with no methods.</strong> An empty interface used only for type identification (<code>public interface IEntity { }</code>) is not necessarily an ISP violation — it is a different pattern entirely — but be cautious about using them for anything beyond tagging.</p>
<p><strong>Mistake 3: Ignoring ISP in DI registration.</strong> Even if you split your interfaces correctly, registering them all as the same concrete type in DI means that any consumer can resolve the full implementation. Use specific interface registrations.</p>
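<p>One common forwarding pattern (sketched here; pick lifetimes appropriate for your app) registers the concrete type once and maps each segregated interface to that same registration:</p>
<pre><code class="language-csharp">builder.Services.AddScoped&lt;ProductRepository&gt;();
builder.Services.AddScoped&lt;IReadRepository&lt;Product&gt;&gt;(sp =&gt; sp.GetRequiredService&lt;ProductRepository&gt;());
builder.Services.AddScoped&lt;IWriteRepository&lt;Product&gt;&gt;(sp =&gt; sp.GetRequiredService&lt;ProductRepository&gt;());
builder.Services.AddScoped&lt;IBulkRepository&lt;Product&gt;&gt;(sp =&gt; sp.GetRequiredService&lt;ProductRepository&gt;());
</code></pre>
<p>Resolving any of the interfaces within one scope now yields the same instance, while each consumer still sees only the slice it asked for.</p>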
<h2 id="part-6-the-dependency-inversion-principle-dip">Part 6: The Dependency Inversion Principle (DIP)</h2>
<h3 id="the-definition-4">The Definition</h3>
<p>Robert C. Martin stated the Dependency Inversion Principle as two rules:</p>
<blockquote>
<ol>
<li>High-level modules should not depend on low-level modules. Both should depend on abstractions.</li>
<li>Abstractions should not depend on details. Details should depend on abstractions.</li>
</ol>
</blockquote>
<p>&quot;High-level modules&quot; are the parts of your system that embody business rules and policy. &quot;Low-level modules&quot; are the implementation details — file I/O, database access, HTTP clients, third-party APIs. The principle says that the direction of dependency should be inverted: instead of high-level code depending on low-level code, both should depend on an abstraction that lives alongside the high-level code.</p>
<h3 id="why-inversion">Why &quot;Inversion&quot;?</h3>
<p>In traditional procedural programming, the dependency structure follows the call graph: high-level code calls low-level code, and therefore depends on it. If the database layer changes, the business logic layer must change too.</p>
<p>Dependency Inversion flips this. The high-level module defines an interface that describes what it needs. The low-level module implements that interface. The dependency arrow now points from the low-level module toward the high-level module's abstraction, not the other way around.</p>
<h3 id="a-violation-1">A Violation</h3>
<pre><code class="language-csharp">// High-level module directly depends on low-level module
public class OrderProcessor
{
    private readonly SqlServerDatabase _database;
    private readonly SmtpEmailSender _emailSender;
    private readonly FileSystemLogger _logger;

    public OrderProcessor()
    {
        _database = new SqlServerDatabase(&quot;Server=localhost;Database=Orders;...&quot;);
        _emailSender = new SmtpEmailSender(&quot;smtp.company.com&quot;, 587);
        _logger = new FileSystemLogger(&quot;/var/log/orders.log&quot;);
    }

    public void Process(Order order)
    {
        _logger.Log($&quot;Processing order {order.Id}&quot;);
        _database.Save(order);
        _emailSender.Send(order.CustomerEmail, &quot;Order Confirmed&quot;, $&quot;Order {order.Id} is confirmed&quot;);
        _logger.Log($&quot;Order {order.Id} processed&quot;);
    }
}
</code></pre>
<p>This code has several problems:</p>
<ul>
<li><code>OrderProcessor</code> directly instantiates its dependencies, making it impossible to unit test without a real SQL Server, SMTP server, and file system.</li>
<li>Switching from SQL Server to PostgreSQL requires modifying <code>OrderProcessor</code>.</li>
<li>Switching from SMTP to a queue-based email service requires modifying <code>OrderProcessor</code>.</li>
<li>The high-level business logic is tightly coupled to low-level infrastructure.</li>
</ul>
<h3 id="applying-dip">Applying DIP</h3>
<p>Define abstractions for each dependency:</p>
<pre><code class="language-csharp">// Abstractions — these live alongside the high-level module
public interface IOrderRepository
{
    Task SaveAsync(Order order);
    Task&lt;Order?&gt; GetByIdAsync(int id);
}

public interface INotificationService
{
    Task SendAsync(string to, string subject, string body);
}

public interface IAppLogger
{
    void LogInformation(string message);
    void LogError(string message, Exception? ex = null);
}
</code></pre>
<p>The high-level module depends only on abstractions:</p>
<pre><code class="language-csharp">public class OrderProcessor
{
    private readonly IOrderRepository _repository;
    private readonly INotificationService _notifications;
    private readonly IAppLogger _logger;

    public OrderProcessor(
        IOrderRepository repository,
        INotificationService notifications,
        IAppLogger logger)
    {
        _repository = repository;
        _notifications = notifications;
        _logger = logger;
    }

    public async Task ProcessAsync(Order order)
    {
        _logger.LogInformation($&quot;Processing order {order.Id}&quot;);

        await _repository.SaveAsync(order);
        await _notifications.SendAsync(
            order.CustomerEmail,
            &quot;Order Confirmed&quot;,
            $&quot;Your order {order.Id} has been confirmed.&quot;);

        _logger.LogInformation($&quot;Order {order.Id} processed successfully&quot;);
    }
}
</code></pre>
<p>Low-level modules implement the abstractions:</p>
<pre><code class="language-csharp">// Low-level module: SQL Server implementation
public class SqlServerOrderRepository : IOrderRepository
{
    private readonly string _connectionString;

    public SqlServerOrderRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task SaveAsync(Order order)
    {
        // Use EF Core, Dapper, or ADO.NET to save
        await Task.CompletedTask;
    }

    public async Task&lt;Order?&gt; GetByIdAsync(int id)
    {
        await Task.CompletedTask;
        return null; // simplified
    }
}

// Low-level module: PostgreSQL implementation
public class PostgresOrderRepository : IOrderRepository
{
    private readonly string _connectionString;

    public PostgresOrderRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task SaveAsync(Order order)
    {
        // Npgsql-based implementation
        await Task.CompletedTask;
    }

    public async Task&lt;Order?&gt; GetByIdAsync(int id)
    {
        await Task.CompletedTask;
        return null;
    }
}

// Low-level module: SMTP email
public class SmtpNotificationService : INotificationService
{
    private readonly string _smtpHost;
    private readonly int _port;

    public SmtpNotificationService(string smtpHost, int port)
    {
        _smtpHost = smtpHost;
        _port = port;
    }

    public async Task SendAsync(string to, string subject, string body)
    {
        Console.WriteLine($&quot;Sending email via SMTP to {to}: {subject}&quot;);
        await Task.CompletedTask;
    }
}

// Low-level module: Queue-based notifications
public class QueueNotificationService : INotificationService
{
    public async Task SendAsync(string to, string subject, string body)
    {
        Console.WriteLine($&quot;Queuing notification for {to}: {subject}&quot;);
        await Task.CompletedTask;
    }
}
</code></pre>
<p>Wire it up in the DI container:</p>
<pre><code class="language-csharp">// In Program.cs or Startup.cs
builder.Services.AddScoped&lt;IOrderRepository, PostgresOrderRepository&gt;(
    sp =&gt; new PostgresOrderRepository(builder.Configuration.GetConnectionString(&quot;Orders&quot;)!));
builder.Services.AddScoped&lt;INotificationService, QueueNotificationService&gt;();
builder.Services.AddScoped&lt;IAppLogger, SerilogAppLogger&gt;();
builder.Services.AddScoped&lt;OrderProcessor&gt;();
</code></pre>
<p>Switching from SQL Server to PostgreSQL is now a one-line change in DI registration. No business logic code is modified.</p>
<h3 id="dip-and-testability">DIP and Testability</h3>
<p>The single greatest practical benefit of DIP is testability. With abstractions injected, you can substitute test doubles:</p>
<pre><code class="language-csharp">public class OrderProcessorTests
{
    [Fact]
    public async Task ProcessAsync_SavesOrderAndSendsNotification()
    {
        // Arrange
        var savedOrders = new List&lt;Order&gt;();
        var sentNotifications = new List&lt;(string To, string Subject, string Body)&gt;();

        var mockRepo = new InMemoryOrderRepository(savedOrders);
        var mockNotifier = new FakeNotificationService(sentNotifications);
        var mockLogger = new NullAppLogger();

        var processor = new OrderProcessor(mockRepo, mockNotifier, mockLogger);
        var order = new Order { Id = 1, CustomerEmail = &quot;test@example.com&quot; };

        // Act
        await processor.ProcessAsync(order);

        // Assert
        Assert.Single(savedOrders);
        Assert.Equal(1, savedOrders[0].Id);
        Assert.Single(sentNotifications);
        Assert.Equal(&quot;test@example.com&quot;, sentNotifications[0].To);
    }
}

// Simple test doubles — no mocking framework needed
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly List&lt;Order&gt; _orders;

    public InMemoryOrderRepository(List&lt;Order&gt; orders) =&gt; _orders = orders;

    public Task SaveAsync(Order order)
    {
        _orders.Add(order);
        return Task.CompletedTask;
    }

    public Task&lt;Order?&gt; GetByIdAsync(int id) =&gt;
        Task.FromResult(_orders.FirstOrDefault(o =&gt; o.Id == id));
}

public class FakeNotificationService : INotificationService
{
    private readonly List&lt;(string To, string Subject, string Body)&gt; _sent;

    public FakeNotificationService(List&lt;(string To, string Subject, string Body)&gt; sent) =&gt; _sent = sent;

    public Task SendAsync(string to, string subject, string body)
    {
        _sent.Add((to, subject, body));
        return Task.CompletedTask;
    }
}

public class NullAppLogger : IAppLogger
{
    public void LogInformation(string message) { }
    public void LogError(string message, Exception? ex = null) { }
}
</code></pre>
<p>These tests run in milliseconds, require no infrastructure, and will never fail because a database is down or an SMTP server is unreachable.</p>
<h3 id="dip-in-blazor-webassembly">DIP in Blazor WebAssembly</h3>
<p>In Blazor WebAssembly, DIP is essential for components that consume services:</p>
<pre><code class="language-csharp">// The Blazor component depends on an abstraction
@inject IBlogService BlogService
@inject ILogger&lt;Blog&gt; Logger

@code {
    private BlogPostMetadata[]? posts;

    protected override async Task OnInitializedAsync()
    {
        posts = await BlogService.GetPostsAsync();
    }
}
</code></pre>
<p>The concrete <code>BlogService</code> (which uses <code>HttpClient</code> to fetch JSON) is registered in DI. During testing, you register a different implementation that returns canned data. The component never knows the difference.</p>
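<p>A minimal sketch of that substitution, assuming a hypothetical shape for <code>IBlogService</code> and <code>BlogPostMetadata</code> (the real types in the app may differ):</p>

```csharp
using System;
using System.Threading.Tasks;

// A canned-data fake stands in for the HttpClient-backed BlogService.
// In a test host you would register it instead of the real implementation:
//   builder.Services.AddScoped<IBlogService, FakeBlogService>();
IBlogService service = new FakeBlogService();
var posts = await service.GetPostsAsync();
Console.WriteLine($"Loaded {posts.Length} canned posts");

public interface IBlogService
{
    Task<BlogPostMetadata[]> GetPostsAsync();
}

// Assumed metadata shape — illustrative only.
public record BlogPostMetadata(string Slug, string Title);

public class FakeBlogService : IBlogService
{
    public Task<BlogPostMetadata[]> GetPostsAsync() =>
        Task.FromResult(new[]
        {
            new BlogPostMetadata("hello-world", "Hello, World"),
            new BlogPostMetadata("solid-deep-dive", "SOLID Deep Dive")
        });
}
```

<p>The component's <code>OnInitializedAsync</code> runs unchanged against either implementation, which is exactly the substitutability DIP buys you.</p>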
<h3 id="dip-vs.dependency-injection">DIP vs. Dependency Injection</h3>
<p>A common confusion: Dependency Inversion is a design principle about the direction of dependencies. Dependency Injection is a technique for providing dependencies to a class (typically through constructor parameters). DI frameworks (like ASP.NET Core's built-in container) are tools that automate dependency injection.</p>
<p>You can apply Dependency Inversion without a DI container — just pass interfaces through constructors manually. And you can use a DI container without actually inverting dependencies (by injecting concrete classes instead of abstractions). They are related but distinct concepts:</p>
<ul>
<li><strong>Dependency Inversion</strong>: A principle about which direction dependencies should point.</li>
<li><strong>Dependency Injection</strong>: A pattern for supplying dependencies from outside a class.</li>
<li><strong>IoC Container</strong>: A framework that automates dependency injection.</li>
</ul>
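<p>Here is what Dependency Inversion without a container looks like — often called &quot;Pure DI&quot; or a manual composition root. The types are illustrative, not from the earlier examples:</p>

```csharp
using System;

// Pure DI: the composition root wires dependencies by hand — no container.
IGreetingStore store = new InMemoryGreetingStore();
var service = new GreetingService(store);   // dependency supplied from outside
Console.WriteLine(service.Greet("Ada"));

public interface IGreetingStore
{
    string GetTemplate();
}

public class InMemoryGreetingStore : IGreetingStore
{
    public string GetTemplate() => "Hello, {0}!";
}

// Depends only on the abstraction (Dependency Inversion) and receives it
// via the constructor (Dependency Injection) — no framework involved.
public class GreetingService(IGreetingStore store)
{
    public string Greet(string name) => string.Format(store.GetTemplate(), name);
}
```

<p>The principle is satisfied the moment the dependency arrow points at the interface; the container only automates the wiring.</p>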
<h3 id="common-dip-mistakes">Common DIP Mistakes</h3>
<p><strong>Mistake 1: Abstracting everything.</strong> Not every class needs an interface. If a class is a simple data container (<code>record Product(string Name, decimal Price)</code>), wrapping it in an interface adds complexity with no benefit. Apply DIP to the boundaries — the seams where high-level policy meets low-level infrastructure.</p>
<p><strong>Mistake 2: Leaky abstractions.</strong> An interface that mirrors the API of a specific implementation (like <code>ISqlServerDatabase</code> with methods named <code>ExecuteStoredProcedure</code> and <code>UseTempTable</code>) is not a real abstraction. It is just an indirection. True abstractions describe what the high-level module needs, not how the low-level module works.</p>
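<p>The contrast, in a small illustrative sketch — the leaky interface mirrors a vendor, while the honest one describes only what the caller needs:</p>

```csharp
using System;
using System.Collections.Generic;

// Usage: the high-level code is coupled only to the intent-revealing interface.
var reports = new ReportReader(new InMemoryReportSource());
Console.WriteLine(reports.CountFor("sales"));

// Leaky: mirrors one vendor's API, so every caller stays coupled to SQL Server
// even though it is "behind an interface".
public interface ISqlServerDatabase
{
    void ExecuteStoredProcedure(string name);
    void UseTempTable(string name);
}

// Intent-revealing: states what the high-level module needs, nothing more.
// Any store — SQL, document DB, in-memory — can satisfy it.
public interface IReportSource
{
    IReadOnlyList<string> GetReportNames(string category);
}

public class InMemoryReportSource : IReportSource
{
    public IReadOnlyList<string> GetReportNames(string category) =>
        category == "sales" ? new[] { "Q1", "Q2" } : Array.Empty<string>();
}

public class ReportReader(IReportSource source)
{
    public int CountFor(string category) => source.GetReportNames(category).Count;
}
```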
<p><strong>Mistake 3: Putting abstractions in the wrong project.</strong> The interface should live in the same project or layer as the high-level module that depends on it, not alongside the low-level implementation. If <code>IOrderRepository</code> lives in your data access project, the dependency arrow still points from business logic down to data access — even though you are coding against an interface.</p>
<h2 id="part-7-how-solid-principles-interact">Part 7: How SOLID Principles Interact</h2>
<p>The five principles are not independent — they reinforce each other. Understanding their interactions helps you apply them holistically rather than as isolated rules.</p>
<h3 id="srp-ocp">SRP + OCP</h3>
<p>If a class has a single responsibility, it is easier to keep it closed for modification. A class that does one thing has fewer reasons to change. When new behavior is needed, you add a new class rather than modifying the existing one.</p>
<h3 id="ocp-dip">OCP + DIP</h3>
<p>Dependency Inversion is often the mechanism by which you achieve OCP. By depending on abstractions (DIP), you can substitute different concrete implementations (OCP) without modifying the code that depends on the abstraction. The <code>PaymentProcessor</code> example from Part 3 works precisely because it depends on <code>IPaymentMethod</code> (DIP) rather than concrete payment classes.</p>
<h3 id="lsp-isp">LSP + ISP</h3>
<p>Interface Segregation helps prevent LSP violations. When interfaces are small and focused, implementations are less likely to throw <code>NotSupportedException</code> or exhibit degenerate behavior. The <code>RobotWorker</code> that threw exceptions was both an ISP violation (fat interface) and an LSP violation (could not be substituted for <code>IWorker</code> without breaking things).</p>
<h3 id="all-five-together-a-complete-example">All Five Together: A Complete Example</h3>
<p>Let us design a notification system that demonstrates all five principles working in concert:</p>
<pre><code class="language-csharp">// ISP: Small, focused interfaces for different capabilities
public interface INotificationSender
{
    string Channel { get; } // &quot;email&quot;, &quot;sms&quot;, &quot;push&quot;
    Task SendAsync(NotificationMessage message);
}

public interface INotificationTemplateEngine
{
    string Render(string templateName, Dictionary&lt;string, string&gt; variables);
}

public interface INotificationLogger
{
    Task LogAsync(NotificationMessage message, bool success, string? errorMessage = null);
}

// SRP: Each class has one reason to change
public record NotificationMessage(
    string Recipient,
    string Subject,
    string Body,
    string Channel);

public class EmailSender : INotificationSender
{
    public string Channel =&gt; &quot;email&quot;;

    public async Task SendAsync(NotificationMessage message)
    {
        Console.WriteLine($&quot;Sending email to {message.Recipient}: {message.Subject}&quot;);
        await Task.CompletedTask;
    }
}

public class SmsSender : INotificationSender
{
    public string Channel =&gt; &quot;sms&quot;;

    public async Task SendAsync(NotificationMessage message)
    {
        Console.WriteLine($&quot;Sending SMS to {message.Recipient}: {message.Body}&quot;);
        await Task.CompletedTask;
    }
}

public class PushNotificationSender : INotificationSender
{
    public string Channel =&gt; &quot;push&quot;;

    public async Task SendAsync(NotificationMessage message)
    {
        Console.WriteLine($&quot;Sending push notification to {message.Recipient}: {message.Subject}&quot;);
        await Task.CompletedTask;
    }
}

// OCP: Adding a new channel requires writing a new class, not modifying existing ones
// LSP: Every INotificationSender implementation is fully substitutable
// DIP: NotificationService depends on abstractions, not concrete senders

public class NotificationService
{
    private readonly IEnumerable&lt;INotificationSender&gt; _senders;
    private readonly INotificationTemplateEngine _templateEngine;
    private readonly INotificationLogger _logger;

    public NotificationService(
        IEnumerable&lt;INotificationSender&gt; senders,
        INotificationTemplateEngine templateEngine,
        INotificationLogger logger)
    {
        _senders = senders;
        _templateEngine = templateEngine;
        _logger = logger;
    }

    public async Task NotifyAsync(
        string recipient,
        string channel,
        string templateName,
        Dictionary&lt;string, string&gt; variables)
    {
        var body = _templateEngine.Render(templateName, variables);
        var message = new NotificationMessage(recipient, templateName, body, channel);

        var sender = _senders.FirstOrDefault(s =&gt;
            s.Channel.Equals(channel, StringComparison.OrdinalIgnoreCase));

        if (sender is null)
        {
            await _logger.LogAsync(message, false, $&quot;No sender found for channel: {channel}&quot;);
            return;
        }

        try
        {
            await sender.SendAsync(message);
            await _logger.LogAsync(message, true);
        }
        catch (Exception ex)
        {
            await _logger.LogAsync(message, false, ex.Message);
            throw;
        }
    }
}
</code></pre>
<p>Registration in DI:</p>
<pre><code class="language-csharp">builder.Services.AddTransient&lt;INotificationSender, EmailSender&gt;();
builder.Services.AddTransient&lt;INotificationSender, SmsSender&gt;();
builder.Services.AddTransient&lt;INotificationSender, PushNotificationSender&gt;();
builder.Services.AddTransient&lt;INotificationTemplateEngine, HandlebarsTemplateEngine&gt;();
builder.Services.AddTransient&lt;INotificationLogger, DatabaseNotificationLogger&gt;();
builder.Services.AddTransient&lt;NotificationService&gt;();
</code></pre>
<p>Adding a new channel (say, Slack):</p>
<pre><code class="language-csharp">public class SlackSender : INotificationSender
{
    public string Channel =&gt; &quot;slack&quot;;

    public async Task SendAsync(NotificationMessage message)
    {
        Console.WriteLine($&quot;Posting to Slack for {message.Recipient}: {message.Body}&quot;);
        await Task.CompletedTask;
    }
}

// One line added to DI — nothing else changes
builder.Services.AddTransient&lt;INotificationSender, SlackSender&gt;();
</code></pre>
<h2 id="part-8-common-pitfalls-and-anti-patterns">Part 8: Common Pitfalls and Anti-Patterns</h2>
<h3 id="over-engineering-solid-as-a-hammer">Over-Engineering: SOLID as a Hammer</h3>
<p>The most common pitfall is applying SOLID reflexively to every class, regardless of whether the complexity is warranted. If you have a utility class that formats dates and it will never need to be extended or substituted, wrapping it in an interface and injecting it through DI is unnecessary ceremony.</p>
<p><strong>Guideline</strong>: Apply SOLID at the boundaries — where your application logic meets external systems (databases, APIs, file systems, message queues). For internal utility code that is unlikely to change, prefer simplicity.</p>
<h3 id="the-interface-per-class-anti-pattern">The &quot;Interface Per Class&quot; Anti-Pattern</h3>
<p>Creating an interface for every class, even when only one implementation will ever exist, leads to what some developers call &quot;interface pollution.&quot; You end up with pairs of files — <code>IFooService.cs</code> and <code>FooService.cs</code> — where the interface is an exact copy of the class's public surface.</p>
<p><strong>Guideline</strong>: Create an interface when you need polymorphism — when you will have multiple implementations, or when you need to substitute a test double. If neither applies, a concrete class is fine.</p>
<h3 id="anemic-domain-models">Anemic Domain Models</h3>
<p>Overly zealous application of SRP can lead to anemic domain models — classes that are pure data containers with no behavior, while all the behavior lives in service classes. This is not inherently wrong, but it can result in procedural code dressed up in object-oriented clothing.</p>
<p><strong>Guideline</strong>: Some behavior naturally belongs on the domain entity itself. A <code>Money</code> class that knows how to add and subtract currencies is not violating SRP — arithmetic on money is that class's single responsibility.</p>
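<p>A sketch of such a <code>Money</code> value object — the arithmetic lives on the type itself, and mismatched currencies are rejected at the boundary:</p>

```csharp
using System;

var a = new Money(10.50m, "EUR");
var b = new Money(4.25m, "EUR");
Console.WriteLine(a.Add(b));

public record Money(decimal Amount, string Currency)
{
    public Money Add(Money other)
    {
        EnsureSameCurrency(other);
        return this with { Amount = Amount + other.Amount };
    }

    public Money Subtract(Money other)
    {
        EnsureSameCurrency(other);
        return this with { Amount = Amount - other.Amount };
    }

    // Guard: adding EUR to USD is a bug, not a computation.
    private void EnsureSameCurrency(Money other)
    {
        if (Currency != other.Currency)
            throw new InvalidOperationException(
                $"Cannot combine {Currency} with {other.Currency}");
    }
}
```

<p>This is behavior-rich without violating SRP: the one reason for <code>Money</code> to change is a change in how money arithmetic works.</p>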
<h3 id="circular-dependencies">Circular Dependencies</h3>
<p>Applying DIP incorrectly can create circular dependencies. If module A defines an interface that module B implements, but module B also defines an interface that module A implements, you have a cycle.</p>
<p><strong>Guideline</strong>: Identify which module is the higher-level one (the one with the policy) and let that module own the abstractions. The lower-level module depends on the higher-level module's abstractions, never the reverse.</p>
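<p>One way to picture the ownership rule, using namespaces as stand-ins for projects (names are illustrative):</p>

```csharp
using System;
using Business;
using Infrastructure;

// Composition: Infrastructure satisfies an abstraction that Business owns.
IInvoiceStore store = new FileInvoiceStore();
Console.WriteLine(new InvoicePolicy(store).NextNumber());

namespace Business
{
    // The high-level policy and its abstraction live together.
    public interface IInvoiceStore
    {
        int LastNumber();
    }

    public class InvoicePolicy(IInvoiceStore store)
    {
        public int NextNumber() => store.LastNumber() + 1;
    }
}

namespace Infrastructure
{
    using Business; // the only arrow: low-level depends on high-level's abstraction

    public class FileInvoiceStore : IInvoiceStore
    {
        public int LastNumber() => 41; // stand-in for reading persisted state
    }
}
```

<p>Because <code>Business</code> never references <code>Infrastructure</code>, a cycle cannot form — the dependency graph stays a one-way street.</p>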
<h3 id="analysis-paralysis">Analysis Paralysis</h3>
<p>SOLID can lead to analysis paralysis — spending more time designing abstractions than writing code that solves the actual problem. Remember that these are principles, not laws. They exist to serve your codebase, not the other way around.</p>
<p><strong>Guideline</strong>: Start simple. Write the straightforward solution. When you feel the pain of a SOLID violation — a class that keeps growing, a change that breaks unrelated tests, a type that cannot be substituted — refactor then. This approach is sometimes called &quot;refactoring toward SOLID.&quot;</p>
<h2 id="part-9-solid-in-the-context-of-modern.net">Part 9: SOLID in the Context of Modern .NET</h2>
<h3 id="records-and-value-objects">Records and Value Objects</h3>
<p>C# <code>record</code> types naturally support SRP by encouraging small, focused data structures:</p>
<pre><code class="language-csharp">// Each record has one responsibility: representing a specific concept
public record Money(decimal Amount, string Currency);
public record Address(string Street, string City, string PostalCode, string Country);
public record CustomerName(string First, string Last)
{
    public string FullName =&gt; $&quot;{First} {Last}&quot;;
}
</code></pre>
<h3 id="pattern-matching-and-ocp">Pattern Matching and OCP</h3>
<p>C# pattern matching can sometimes replace polymorphism for simple cases, but be cautious — a <code>switch</code> expression over a discriminated union is fine for a closed set of types, but if the set of types grows over time, polymorphism is more maintainable:</p>
<pre><code class="language-csharp">// This is fine for a small, stable set of shapes
public decimal CalculateArea(Shape shape) =&gt; shape switch
{
    Circle c =&gt; (decimal)Math.PI * c.Radius * c.Radius,
    Rectangle r =&gt; r.Width * r.Height,
    Triangle t =&gt; 0.5m * t.Base * t.Height,
    _ =&gt; throw new ArgumentException($&quot;Unknown shape: {shape.GetType().Name}&quot;)
};

// But if new shapes are added frequently, prefer an interface with a method:
public interface IShape
{
    decimal CalculateArea();
}
</code></pre>
<h3 id="minimal-apis-and-dip">Minimal APIs and DIP</h3>
<p>.NET minimal APIs work naturally with DIP:</p>
<pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);

// Register abstractions
builder.Services.AddScoped&lt;IOrderRepository, PostgresOrderRepository&gt;();
builder.Services.AddScoped&lt;IOrderService, OrderService&gt;();

var app = builder.Build();

// Endpoints depend on abstractions injected by the framework
app.MapPost(&quot;/orders&quot;, async (CreateOrderRequest request, IOrderService orderService) =&gt;
{
    var result = await orderService.CreateAsync(request);
    return result.IsSuccess ? Results.Created($&quot;/orders/{result.Order!.Id}&quot;, result.Order) : Results.BadRequest(result.Error);
});

app.Run();
</code></pre>
<h3 id="source-generators-and-isp">Source Generators and ISP</h3>
<p>Source generators in modern .NET can auto-implement interfaces, reducing the boilerplate of ISP. Libraries like Refit generate HTTP client implementations from interfaces, and EF Core generates much of the repository plumbing. These tools make ISP cheaper to apply in practice.</p>
<h3 id="primary-constructors">Primary Constructors</h3>
<p>C# 12 primary constructors reduce the boilerplate of DIP by eliminating explicit field declarations:</p>
<pre><code class="language-csharp">// Before C# 12
public class OrderService : IOrderService
{
    private readonly IOrderRepository _repository;
    private readonly INotificationService _notifications;
    private readonly ILogger&lt;OrderService&gt; _logger;

    public OrderService(
        IOrderRepository repository,
        INotificationService notifications,
        ILogger&lt;OrderService&gt; logger)
    {
        _repository = repository;
        _notifications = notifications;
        _logger = logger;
    }

    public async Task ProcessAsync(Order order)
    {
        _logger.LogInformation(&quot;Processing order {OrderId}&quot;, order.Id);
        await _repository.SaveAsync(order);
        await _notifications.SendAsync(order.CustomerEmail, &quot;Confirmed&quot;, &quot;...&quot;);
    }
}

// C# 12+ with primary constructors
public class OrderService(
    IOrderRepository repository,
    INotificationService notifications,
    ILogger&lt;OrderService&gt; logger) : IOrderService
{
    public async Task ProcessAsync(Order order)
    {
        logger.LogInformation(&quot;Processing order {OrderId}&quot;, order.Id);
        await repository.SaveAsync(order);
        await notifications.SendAsync(order.CustomerEmail, &quot;Confirmed&quot;, &quot;...&quot;);
    }
}
</code></pre>
<p>Primary constructors make DIP feel almost effortless. The dependency injection boilerplate shrinks dramatically while preserving all the benefits of abstraction and testability.</p>
<h2 id="part-10-practical-recommendations">Part 10: Practical Recommendations</h2>
<p>Here is a distilled set of actionable advice for applying SOLID in your day-to-day .NET development:</p>
<h3 id="when-to-apply-each-principle">When to Apply Each Principle</h3>
<p><strong>SRP</strong>: Apply always. Every class, module, and function should have a clear, singular purpose. This is the easiest principle to apply and the one with the most immediate benefit.</p>
<p><strong>OCP</strong>: Apply when you see a pattern of repeated modification to a class to support new variants. If a class has been opened and modified three times in the last three months to add a new case to a switch statement, it is time to apply OCP.</p>
<p><strong>LSP</strong>: Apply whenever you use inheritance. Before creating a subclass, ask: &quot;Can every function that works with the base type work correctly with this subclass?&quot; If the answer is &quot;not without special handling,&quot; reconsider the hierarchy.</p>
<p><strong>ISP</strong>: Apply when you see classes implementing interfaces where some methods throw <code>NotSupportedException</code>, return dummy values, or are simply empty. Also apply when changing one method on an interface forces recompilation of clients that do not use that method.</p>
<p><strong>DIP</strong>: Apply at architectural boundaries — where business logic meets infrastructure. Your domain logic should never directly reference <code>SqlConnection</code>, <code>HttpClient</code>, <code>SmtpClient</code>, or any other infrastructure class.</p>
<h3 id="the-refactoring-approach">The Refactoring Approach</h3>
<p>Rather than trying to design a perfectly SOLID system from scratch, follow this iterative approach:</p>
<ol>
<li><strong>Write the simple, obvious solution.</strong> Do not pre-abstract.</li>
<li><strong>Watch for pain points.</strong> Classes growing too large (SRP). Frequent modifications to add new cases (OCP). Unexpected behavior from subclasses (LSP). Interfaces with methods nobody uses (ISP). Untestable code (DIP).</li>
<li><strong>Refactor to address the specific pain.</strong> Extract a class. Extract an interface. Replace inheritance with composition.</li>
<li><strong>Repeat.</strong> Good design is a living process, not a one-time activity.</li>
</ol>
<h3 id="testing-as-a-solid-litmus-test">Testing as a SOLID Litmus Test</h3>
<p>If your code is hard to test, it almost certainly violates at least one SOLID principle:</p>
<ul>
<li><strong>Hard to instantiate a class?</strong> It probably creates its own dependencies (DIP violation).</li>
<li><strong>Need to set up too much state?</strong> The class probably has too many responsibilities (SRP violation).</li>
<li><strong>Tests break when unrelated code changes?</strong> Coupling is too high, likely from fat interfaces (ISP violation) or missing abstractions (OCP violation).</li>
<li><strong>Mock behaves differently from real implementation?</strong> The inheritance hierarchy might have LSP issues.</li>
</ul>
<p>Unit testing is both a beneficiary of SOLID design and a diagnostic tool for finding violations.</p>
<h2 id="part-11-solid-beyond-object-oriented-programming">Part 11: SOLID Beyond Object-Oriented Programming</h2>
<p>While SOLID was articulated for OOP, the underlying ideas transcend paradigm boundaries.</p>
<h3 id="srp-in-functional-programming">SRP in Functional Programming</h3>
<p>Functions should do one thing. A function that both validates input and transforms data is harder to compose and test than two separate functions. Functional programmers achieve SRP through small, composable functions rather than small classes.</p>
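<p>In C# terms, the split looks like this — two single-purpose functions instead of one that validates and transforms in the same breath:</p>

```csharp
using System;

// Each function does exactly one thing, so each can be tested and reused alone.
Func<string, bool> isValidSku = s => !string.IsNullOrWhiteSpace(s) && s.Trim().Length <= 12;
Func<string, string> normalizeSku = s => s.Trim().ToUpperInvariant();

var raw = "  ab-123  ";
if (isValidSku(raw))
    Console.WriteLine(normalizeSku(raw));
```

<p>Composing them at the call site keeps the policy (validate, then normalize) visible, rather than buried inside one do-everything function.</p>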
<h3 id="ocp-via-higher-order-functions">OCP via Higher-Order Functions</h3>
<p>In functional programming, you achieve OCP by passing behavior as arguments (higher-order functions) rather than by subclassing:</p>
<pre><code class="language-csharp">// OCP via function parameters — the processing logic is open for extension
public static IEnumerable&lt;T&gt; Filter&lt;T&gt;(IEnumerable&lt;T&gt; items, Func&lt;T, bool&gt; predicate)
    =&gt; items.Where(predicate);

// Add new filtering behavior without modifying Filter
var expensiveItems = Filter(products, p =&gt; p.Price &gt; 100);
var inStockItems = Filter(products, p =&gt; p.Stock &gt; 0);
var featuredItems = Filter(products, p =&gt; p.IsFeatured);
</code></pre>
<h3 id="dip-in-microservices">DIP in Microservices</h3>
<p>At the service level, DIP manifests as services depending on contracts (API schemas, message formats, event definitions) rather than on each other's implementations. If Service A publishes an event and Service B consumes it, both depend on the event schema (the abstraction), not on each other's internal code.</p>
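<p>A toy illustration of that shared contract, with <code>System.Text.Json</code> standing in for the message bus. The <code>OrderPlaced</code> record is a hypothetical schema; in practice it would live in a shared contracts package that both services reference:</p>

```csharp
using System;
using System.Text.Json;

// Service A (producer) serializes the event; Service B (consumer) deserializes it.
// Neither side references the other's code — only the shared contract below.
var wireFormat = JsonSerializer.Serialize(
    new OrderPlaced(1001, "test@example.com",
        new DateTime(2026, 4, 5, 0, 0, 0, DateTimeKind.Utc)));

var received = JsonSerializer.Deserialize<OrderPlaced>(wireFormat)!;
Console.WriteLine($"Consumer saw order {received.OrderId} for {received.CustomerEmail}");

// The abstraction both services depend on.
public record OrderPlaced(int OrderId, string CustomerEmail, DateTime PlacedAtUtc);
```

<p>Swapping the consumer's implementation — or rewriting it in another language — changes nothing for the producer, exactly as DIP intends.</p>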
<h2 id="part-12-resources-and-further-reading">Part 12: Resources and Further Reading</h2>
<p>If you want to go deeper into SOLID and related design topics, here are the most authoritative resources:</p>
<ul>
<li><strong>Robert C. Martin, <em>Agile Software Development: Principles, Patterns, and Practices</em> (2003)</strong> — The definitive book on SOLID with C++ and Java examples. The 2006 C# edition (with Micah Martin) covers the same material with .NET examples.</li>
<li><strong>Robert C. Martin, <em>Clean Architecture: A Craftsman's Guide to Software Structure and Design</em> (2018)</strong> — Extends SOLID principles to architectural concerns, with updated thinking on SRP.</li>
<li><strong>Bertrand Meyer, <em>Object-Oriented Software Construction, 2nd Edition</em> (1997)</strong> — The origin of the Open/Closed Principle (first stated in the 1988 first edition) and Design by Contract. Dense but foundational.</li>
<li><strong>Barbara Liskov and Jeannette Wing, <em>A Behavioral Notion of Subtyping</em> (1994)</strong> — The formal paper on the Liskov Substitution Principle. Available from Carnegie Mellon's technical reports.</li>
<li><strong>Robert C. Martin's original papers</strong> — Available at <a href="http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod">butunclebob.com</a>. The original articles on OCP, LSP, DIP, and ISP are short, readable, and illuminating.</li>
<li><strong>Microsoft's .NET Architecture Guides</strong> — <a href="https://docs.microsoft.com/en-us/dotnet/architecture/">docs.microsoft.com/en-us/dotnet/architecture</a> covers clean architecture patterns using SOLID principles with ASP.NET Core.</li>
<li><strong>Mark Seemann and Steven van Deursen, <em>Dependency Injection: Principles, Practices, and Patterns</em> (2019)</strong> — The second edition of Seemann's <em>Dependency Injection in .NET</em>; a deep dive into DIP and DI patterns specifically in the .NET ecosystem.</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>The SOLID principles are not a checklist to be applied mechanically to every class in every project. They are a set of heuristics — mental tools — for recognizing and addressing design problems before they metastasize into unmaintainable code.</p>
<p>Single Responsibility keeps your classes small and focused. Open/Closed lets you add behavior without risking what already works. Liskov Substitution ensures that your inheritance hierarchies are sound and your polymorphism is trustworthy. Interface Segregation prevents your clients from depending on capabilities they do not need. Dependency Inversion decouples your business logic from infrastructure, making your code testable and adaptable.</p>
<p>None of these principles are free. Abstraction has a cost — in indirection, in the number of files to navigate, in the time spent designing interfaces. The art is in knowing when the cost is worth paying. For a throwaway script, it usually is not. For a production system that will be maintained for years, by multiple developers, through changing requirements, it almost always is.</p>
<p>Start simple. Write code that works. Feel the pain when it resists change. Then apply the principle that addresses that specific pain. Over time, this builds an instinct for design that no checklist can replace.</p>
]]></content:encoded>
      <category>csharp</category>
      <category>dotnet</category>
      <category>solid</category>
      <category>design-principles</category>
      <category>object-oriented-programming</category>
      <category>clean-code</category>
      <category>software-architecture</category>
      <category>best-practices</category>
      <category>deep-dive</category>
    </item>
    <item>
      <title>The Cloud Toilet Problem: Why Your AI Tools Need an On-Premises Fallback</title>
      <link>https://observermagazine.github.io/blog/the-cloud-toilet-problem</link>
      <description>What happens when every toilet in your availability zone goes down? A practical guide for ASP.NET developers on building resilient applications that survive cloud AI outages.</description>
      <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/the-cloud-toilet-problem</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="a-modest-proposal">A Modest Proposal</h2>
<p>Imagine, for a moment, that nobody had a toilet at home.</p>
<p>Instead, every household subscribed to a managed restroom service. A gleaming porcelain throne, maintained by professionals, cleaned on a schedule, always stocked with the finest two-ply. You would never have to scrub a bowl again. You would never have to unclog a drain. You would never have to argue with your family about who left the seat up. The Toilet-as-a-Service provider would handle everything.</p>
<p>Sounds convenient, right? Almost too convenient. The marketing writes itself: &quot;Focus on what matters. Let us handle the rest.&quot;</p>
<p>Now imagine it is 2 AM, you ate something questionable at dinner, and every single managed restroom in your availability zone is returning <code>503 Service Unavailable</code>. The status page reads: &quot;We are currently investigating elevated error rates in the Porcelain Pipeline. A fix is being implemented.&quot; You are standing in your hallway, crossing your legs, refreshing a dashboard on your phone, waiting for an incident to resolve.</p>
<p>You are, quite literally, out of luck.</p>
<p>This scenario sounds absurd because — for plumbing, at least — we collectively decided centuries ago that certain infrastructure is too critical to outsource entirely. You have a toilet at home. You have running water at home. You have electricity at home (and if you have been through enough storms, maybe a generator too). The cloud exists, but there is always a local fallback for the things that truly matter.</p>
<p>And yet, for AI-powered software tools — tools that developers, lawyers, designers, and medical professionals increasingly depend on for their daily work — we have somehow accepted a world with no toilet at home.</p>
<h2 id="this-is-not-a-hypothetical">This Is Not a Hypothetical</h2>
<p>If you are reading this article on March 30, 2026, you may have fresh memories of what happened this week. In fact, if you are an AI-assisted developer, you almost certainly do.</p>
<p>On March 25, Anthropic's Claude service experienced a sharp disruption that generated roughly 4,000 user reports on Downdetector at its peak. The chat interface, the mobile app, and Claude Code — the command-line developer tool — were all affected. Two days later, on March 27, elevated error rates returned on Claude Opus 4.6, with Sonnet 4.6 also showing issues before partially recovering. These were not isolated events. Earlier in March, Claude went down on March 2 and again on March 3. On March 17, free users were locked out. On March 18, Claude Code authentication broke for over eight hours. On March 21, both Opus and Sonnet models experienced elevated errors simultaneously.</p>
<p>Anthropic is not alone. A massive Cloudflare outage in November 2025 knocked out thousands of websites and services — including ChatGPT and OpenAI's Sora — affecting billions of users globally. ChatGPT itself suffered an extended outage exceeding 15 hours on June 10, 2025. And just three days before this article's publication, on March 27, 2026, Adobe experienced outages across Express, Photoshop, Acrobat, and other Creative Cloud services.</p>
<p>The pattern is clear. Cloud AI services go down. They go down often. They go down at the worst possible times. And when they go down, you cannot do your work.</p>
<h2 id="the-real-cost-of-cloud-dependency">The Real Cost of Cloud Dependency</h2>
<p>Here is where the abstract becomes concrete. You are an ASP.NET developer working on a deadline. Your team uses Claude Code to refactor a legacy .NET Framework application to .NET 10. You use GitHub Copilot to scaffold tests. Your designer uses Adobe Firefly to generate assets. Your project manager uses ChatGPT to draft the release notes and client communications.</p>
<p>It is Thursday afternoon. The client demo is Friday morning. You try to ask Claude for help with a tricky middleware registration issue and see this:</p>
<blockquote>
<p>Claude's response was interrupted. This can be caused by network problems or exceeding the maximum conversation length. Please contact support if the issue persists.</p>
</blockquote>
<p>You switch to ChatGPT. It is sluggish and timing out. You try Copilot; it is returning garbage completions because the backing model is overloaded. Your designer messages you: &quot;Firefly is broken, can't generate the hero image.&quot; Your PM says: &quot;ChatGPT won't load, I'll just write the release notes myself.&quot;</p>
<p>Your entire team's productivity has been outsourced to infrastructure you do not control, cannot inspect, and cannot fix. You are waiting for someone else's incident to resolve so you can do your job.</p>
<p>Now scale that scenario up. You are not building a demo for a client. You are a hospital deploying AI-assisted diagnostic tools. You are a law firm using AI to review discovery documents for a case with a filing deadline. You are a financial institution using AI for real-time fraud detection. The service goes down, and real harm follows.</p>
<p>This is not a technology problem. It is an architecture problem. And architecture problems have architecture solutions.</p>
<h2 id="the-resilience-pattern-cloud-first-local-fallback">The Resilience Pattern: Cloud-First, Local-Fallback</h2>
<p>The solution is not to abandon cloud AI. Cloud-hosted models like Claude Opus 4.6, GPT-4o, and Gemini offer capabilities that are genuinely difficult to replicate locally. The solution is to stop treating cloud AI as a single point of failure.</p>
<p>As ASP.NET developers, we already understand this pattern. We do not build web applications with a single database server and no failover. We do not deploy to a single region with no disaster recovery plan. We use circuit breakers, retry policies, and graceful degradation. The same principles apply to AI integration.</p>
<p>Here is what the architecture looks like in practice.</p>
<h3 id="the-interface">The Interface</h3>
<p>Start with an abstraction. Your application code should never call a specific AI provider directly. Instead, define a contract:</p>
<pre><code class="language-csharp">public interface IAiCompletionService
{
    Task&lt;CompletionResult&gt; CompleteAsync(
        CompletionRequest request,
        CancellationToken cancellationToken = default);
}

public sealed record CompletionRequest
{
    public required string Prompt { get; init; }
    public string? SystemMessage { get; init; }
    public int MaxTokens { get; init; } = 1024;
    public double Temperature { get; init; } = 0.7;
}

public sealed record CompletionResult
{
    public required string Text { get; init; }
    public required string Provider { get; init; }
    public TimeSpan Latency { get; init; }
    public bool IsFallback { get; init; }
}
</code></pre>
<p>This is not revolutionary software engineering. It is the same Dependency Inversion Principle you learned on day one of SOLID. But an astonishing number of codebases call the OpenAI SDK directly from their controllers. When that SDK cannot reach its server, the entire feature breaks with no alternative.</p>
<h3 id="the-cloud-implementation">The Cloud Implementation</h3>
<p>Your primary implementation calls your preferred cloud provider. Here is a simplified example using the Anthropic API:</p>
<pre><code class="language-csharp">public sealed class CloudAiService(
    HttpClient httpClient,
    ILogger&lt;CloudAiService&gt; logger) : IAiCompletionService
{
    public async Task&lt;CompletionResult&gt; CompleteAsync(
        CompletionRequest request,
        CancellationToken cancellationToken = default)
    {
        var stopwatch = Stopwatch.StartNew();

        var payload = new
        {
            model = &quot;claude-sonnet-4-20250514&quot;,
            max_tokens = request.MaxTokens,
            messages = new[]
            {
                new { role = &quot;user&quot;, content = request.Prompt }
            }
        };

        var response = await httpClient.PostAsJsonAsync(
            &quot;https://api.anthropic.com/v1/messages&quot;,
            payload,
            cancellationToken);

        response.EnsureSuccessStatusCode();

        var result = await response.Content
            .ReadFromJsonAsync&lt;AnthropicResponse&gt;(cancellationToken);

        stopwatch.Stop();

        logger.LogInformation(
            &quot;Cloud completion succeeded in {Latency}ms via {Provider}&quot;,
            stopwatch.ElapsedMilliseconds,
            &quot;Anthropic&quot;);

        return new CompletionResult
        {
            Text = result?.Content?.FirstOrDefault()?.Text ?? &quot;&quot;,
            Provider = &quot;Anthropic Claude&quot;,
            Latency = stopwatch.Elapsed,
            IsFallback = false
        };
    }
}
</code></pre>
<h3 id="the-local-fallback">The Local Fallback</h3>
<p>Your fallback implementation runs entirely on-premises. In 2026, the local AI ecosystem is mature enough for this to be practical. Ollama — think of it as Docker for language models — lets you pull and run open-weight models with a single command. It exposes an OpenAI-compatible API on <code>localhost:11434</code>, which means your fallback implementation looks almost identical to your cloud implementation:</p>
<pre><code class="language-csharp">public sealed class LocalAiService(
    HttpClient httpClient,
    ILogger&lt;LocalAiService&gt; logger) : IAiCompletionService
{
    public async Task&lt;CompletionResult&gt; CompleteAsync(
        CompletionRequest request,
        CancellationToken cancellationToken = default)
    {
        var stopwatch = Stopwatch.StartNew();

        var payload = new
        {
            model = &quot;llama4:8b&quot;,
            messages = new[]
            {
                new { role = &quot;user&quot;, content = request.Prompt }
            }
        };

        var response = await httpClient.PostAsJsonAsync(
            &quot;http://localhost:11434/v1/chat/completions&quot;,
            payload,
            cancellationToken);

        response.EnsureSuccessStatusCode();

        var result = await response.Content
            .ReadFromJsonAsync&lt;OllamaResponse&gt;(cancellationToken);

        stopwatch.Stop();

        logger.LogInformation(
            &quot;Local completion succeeded in {Latency}ms via {Provider}&quot;,
            stopwatch.ElapsedMilliseconds,
            &quot;Ollama/Llama4&quot;);

        return new CompletionResult
        {
            Text = result?.Choices?.FirstOrDefault()?.Message?.Content ?? &quot;&quot;,
            Provider = &quot;Local Ollama (Llama 4 8B)&quot;,
            Latency = stopwatch.Elapsed,
            IsFallback = true
        };
    }
}
</code></pre>
<p>The local model will not be as capable as Claude Opus or GPT-4o for complex reasoning tasks. That is fine. A less capable model that is available beats a more capable model that is not. When the cloud comes back, traffic automatically shifts to the primary provider. Your users never see an error page.</p>
<h3 id="the-circuit-breaker">The Circuit Breaker</h3>
<p>Now wire them together with a resilience layer. In ASP.NET, you can use the Microsoft.Extensions.Resilience packages, which are built on Polly v8, to create a circuit breaker that detects when the cloud provider is failing and automatically routes to the local fallback:</p>
<pre><code class="language-csharp">public sealed class ResilientAiService(
    CloudAiService cloudService,
    LocalAiService localService,
    ILogger&lt;ResilientAiService&gt; logger) : IAiCompletionService
{
    // Static so the circuit breaker's state is shared even if the service
    // is registered with a non-singleton lifetime.
    private static readonly ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
        .AddCircuitBreaker(new CircuitBreakerStrategyOptions
        {
            FailureRatio = 0.5,
            SamplingDuration = TimeSpan.FromSeconds(30),
            MinimumThroughput = 3,
            BreakDuration = TimeSpan.FromMinutes(1)
        })
        .AddTimeout(TimeSpan.FromSeconds(30))
        .Build();

    public async Task&lt;CompletionResult&gt; CompleteAsync(
        CompletionRequest request,
        CancellationToken cancellationToken = default)
    {
        try
        {
            return await pipeline.ExecuteAsync(
                async ct =&gt; await cloudService.CompleteAsync(request, ct),
                cancellationToken);
        }
        catch (Exception ex) when (
            ex is BrokenCircuitException or
            TimeoutRejectedException or
            HttpRequestException)
        {
            logger.LogWarning(
                ex,
                &quot;Cloud AI unavailable, falling back to local model&quot;);

            return await localService.CompleteAsync(request, cancellationToken);
        }
    }
}
</code></pre>
<p>This is the same pattern you would use for a database failover or a CDN fallback. The cloud provider is the primary. When it fails — whether due to network issues, rate limiting, or an outage — the circuit breaker opens and traffic routes to the local model. After the break duration expires, the circuit breaker lets a test request through to see if the cloud has recovered. If it has, traffic shifts back automatically.</p>
<h3 id="registration-in-program.cs">Registration in Program.cs</h3>
<p>Wire it all up in your ASP.NET application's dependency injection container:</p>
<pre><code class="language-csharp">// Cloud AI client
builder.Services.AddHttpClient&lt;CloudAiService&gt;(client =&gt;
{
    client.DefaultRequestHeaders.Add(&quot;x-api-key&quot;, builder.Configuration[&quot;Anthropic:ApiKey&quot;]!);
    client.DefaultRequestHeaders.Add(&quot;anthropic-version&quot;, &quot;2023-06-01&quot;);
});

// Local AI client (Ollama on localhost)
builder.Services.AddHttpClient&lt;LocalAiService&gt;(client =&gt;
{
    client.BaseAddress = new Uri(&quot;http://localhost:11434&quot;);
});

// Register the resilient wrapper as the interface implementation.
// Note: AddHttpClient&lt;T&gt; above already registers the typed clients
// themselves, so adding separate AddSingleton registrations for them
// would bypass the HttpClient factory and fail to resolve.
builder.Services.AddTransient&lt;IAiCompletionService, ResilientAiService&gt;();
</code></pre>
<p>Any controller, service, or Razor component that injects <code>IAiCompletionService</code> now automatically gets the resilient version. They do not know or care whether the response came from Claude or from a local Llama model. They just get an answer.</p>
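<p>To make that concrete, a consumer might look like the following minimal API endpoint. This is an illustrative sketch; the route, request shape, and prompt are assumptions, not part of the code above:</p>

```csharp
// Hypothetical endpoint: it depends only on the IAiCompletionService
// abstraction and never learns which provider produced the answer.
app.MapPost("/api/summarize", async (
    SummarizeRequest body,
    IAiCompletionService ai,
    CancellationToken ct) =>
{
    var result = await ai.CompleteAsync(new CompletionRequest
    {
        Prompt = $"Summarize the following text:\n\n{body.Text}",
        MaxTokens = 256
    }, ct);

    // Provider and IsFallback are handy for telemetry and debugging.
    return Results.Ok(new { result.Text, result.Provider, result.IsFallback });
});

public sealed record SummarizeRequest(string Text);
```

<p>Whether the answer came from Claude or from a local Llama model, the handler's code path is identical.</p>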
<h2 id="setting-up-your-local-fallback">Setting Up Your Local Fallback</h2>
<p>If you have never run a local language model before, the barrier to entry is remarkably low in 2026.</p>
<h3 id="install-ollama">Install Ollama</h3>
<p>On Linux or macOS, it is a single command:</p>
<pre><code class="language-bash">curl -fsSL https://ollama.com/install.sh | sh
</code></pre>
<p>On Windows, download the installer from ollama.com. Ollama runs as a background service and exposes its API on port 11434.</p>
<h3 id="pull-a-model">Pull a Model</h3>
<p>Choose a model based on your hardware. For a developer workstation with 16 GB of RAM:</p>
<pre><code class="language-bash"># General purpose — great balance of capability and speed
ollama pull llama4:8b

# Smaller and faster, good for code tasks
ollama pull qwen3:8b

# If you have 32+ GB RAM, the 70B models are impressively capable
ollama pull llama3.3:70b
</code></pre>
<p>The models download once and are cached locally. After the initial download, they load in seconds.</p>
<h3 id="verify-it-works">Verify It Works</h3>
<pre><code class="language-bash">curl http://localhost:11434/v1/chat/completions \
  -H &quot;Content-Type: application/json&quot; \
  -d '{
    &quot;model&quot;: &quot;llama4:8b&quot;,
    &quot;messages&quot;: [
      {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Write a C# record for a blog post with title, date, and tags.&quot;}
    ]
  }'
</code></pre>
<p>That is it. You now have a local AI endpoint that will never go down because of someone else's infrastructure problem. It will go down if your machine loses power, of course — but at that point you have bigger problems than AI availability.</p>
<h2 id="beyond-ai-the-broader-cloud-dependency-problem">Beyond AI: The Broader Cloud Dependency Problem</h2>
<p>The toilet analogy extends beyond AI. Adobe Creative Cloud has experienced 258 incidents in the last 90 days — 89 of them classified as major outages, with a median resolution time of over two hours. On March 27, 2026 — the same day Claude was struggling with Opus 4.6 errors — Adobe Express, Photoshop, Acrobat, and several other services were simultaneously experiencing outages.</p>
<p>GitHub itself has had notable outages. When GitHub goes down, millions of developers cannot push code, review pull requests, or trigger CI/CD pipelines.</p>
<p>The pattern repeats across the industry. We have collectively moved critical workflows to cloud services — source control, CI/CD, design tools, communication, project management, AI assistance — and each one represents a potential single point of failure.</p>
<p>This does not mean cloud services are bad. They are extraordinarily useful. But the question every engineering team should ask is: &quot;If this service goes down for four hours on a Friday afternoon before a Monday deadline, what is our plan?&quot;</p>
<p>For many teams, the honest answer is: &quot;We don't have one.&quot;</p>
<h2 id="what-asp.net-developers-can-do-today">What ASP.NET Developers Can Do Today</h2>
<p>Here are concrete steps you can take right now to reduce your exposure to cloud AI outages.</p>
<p><strong>First, define your AI integration contract as an interface.</strong> If you are already calling the OpenAI or Anthropic SDK directly from your controllers, refactor it behind an abstraction. This takes an hour and pays dividends forever. Even if you never implement a local fallback, the interface makes it trivial to swap providers when pricing changes or a new model launches.</p>
<p><strong>Second, install Ollama on your development machine.</strong> Pull a model. Run a few prompts. Get comfortable with the local inference API. The quality of open-weight models in 2026 is genuinely impressive — Llama 4, Qwen 3, DeepSeek V3, and Mistral Large 3 are all capable enough for many production tasks.</p>
<p><strong>Third, add a health check for your AI dependencies.</strong> ASP.NET's health check middleware, combined with the AspNetCore.HealthChecks.Uris package for URI probes, makes this straightforward:</p>
<pre><code class="language-csharp">builder.Services.AddHealthChecks()
    .AddUrlGroup(
        new Uri(&quot;https://api.anthropic.com/v1/models&quot;),
        name: &quot;anthropic-api&quot;,
        failureStatus: HealthStatus.Degraded)
    .AddUrlGroup(
        new Uri(&quot;http://localhost:11434/api/tags&quot;),
        name: &quot;ollama-local&quot;,
        failureStatus: HealthStatus.Degraded);

// After building the app, expose the results at an endpoint:
app.MapHealthChecks(&quot;/health&quot;);
</code></pre>
<p>Now your monitoring dashboard shows you at a glance whether your primary and fallback AI providers are reachable. When the cloud provider turns red, you know your circuit breaker is routing traffic locally — and you can tell your team before they notice.</p>
<p><strong>Fourth, implement the circuit breaker pattern.</strong> The code above is a starting point. In production, you will want to add metrics (how many requests are going to the fallback versus the primary?), alerts (notify the team when the circuit opens), and possibly a manual override (force-use the local model when you know the cloud is having issues but the circuit breaker has not tripped yet).</p>
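<p>As a starting point for those metrics, here is a small sketch using <code>System.Diagnostics.Metrics</code>. The meter and counter names are illustrative assumptions, not an established convention:</p>

```csharp
using System.Diagnostics.Metrics;

// Counts primary vs. fallback completions so a dashboard can show how much
// traffic the local model is absorbing. Names here are assumptions.
public static class AiMetrics
{
    private static readonly Meter Meter = new("MyApp.Ai");

    public static readonly Counter<long> PrimaryCompletions =
        Meter.CreateCounter<long>("ai.completions.primary");

    public static readonly Counter<long> FallbackCompletions =
        Meter.CreateCounter<long>("ai.completions.fallback");
}

// In ResilientAiService.CompleteAsync you would then record the outcome:
//   AiMetrics.PrimaryCompletions.Add(1);   after a successful cloud call
//   AiMetrics.FallbackCompletions.Add(1);  in the catch block, before falling back
```

<p>Any OpenTelemetry-compatible collector can subscribe to the meter, so a sudden spike in the fallback counter becomes your alert that the circuit has opened.</p>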
<p><strong>Fifth, consider what &quot;good enough&quot; means for your use case.</strong> Not every AI-powered feature needs the most capable model available. A local 8B parameter model is more than sufficient for code autocompletion, text summarization, data extraction, and many classification tasks. Reserve the cloud-hosted frontier models for tasks that genuinely require them: complex multi-step reasoning, long-context analysis, and creative generation. This is not just a resilience strategy — it also reduces your API costs.</p>
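<p>One lightweight way to encode that split is a routing policy in front of your providers. A hedged sketch, with the task categories and the local/cloud split chosen purely for illustration rather than from any benchmark:</p>

```csharp
// Which tasks a local 8B-class model can plausibly serve. The categories
// and the split below are illustrative assumptions.
public enum AiTaskKind
{
    Autocomplete,
    Summarization,
    Extraction,
    Classification,
    ComplexReasoning,
    LongContextAnalysis,
    CreativeGeneration
}

public static class ModelRouter
{
    // True when the task is routinely "good enough" for the local model.
    public static bool CanServeLocally(AiTaskKind kind) => kind switch
    {
        AiTaskKind.Autocomplete => true,
        AiTaskKind.Summarization => true,
        AiTaskKind.Extraction => true,
        AiTaskKind.Classification => true,
        _ => false // reserve frontier models for reasoning-heavy work
    };
}
```

<p>The resilient service could consult this policy up front and skip the cloud call entirely for simple tasks, reducing API spend as well as outage exposure.</p>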
<h2 id="the-bigger-picture">The Bigger Picture</h2>
<p>There is a philosophical dimension to this problem that goes beyond architecture patterns and circuit breakers.</p>
<p>When we moved from desktop software to web applications, we gained collaboration, automatic updates, and device independence. We lost the ability to work offline. When we moved from on-premises servers to the cloud, we gained elasticity, managed services, and global distribution. We lost direct control over our infrastructure.</p>
<p>Each transition involved a trade-off, and each time, the industry collectively decided the trade-off was worth it. But the trade-offs compound. A developer in 2026 who uses GitHub for source control, GitHub Actions for CI/CD, Vercel for hosting, Claude for coding assistance, Figma for design, Linear for project management, and Slack for communication has outsourced virtually every aspect of their workflow to services they do not control. If any one of them goes down, work slows. If two or three go down simultaneously — as happened this week — work stops.</p>
<p>The cloud toilet problem is not about any single service. It is about the aggregate risk of depending on many cloud services simultaneously, each with its own failure modes, each with its own incident response team, none of which you can influence.</p>
<p>The solution, as with plumbing, is not to reject the cloud entirely. Municipal water systems are wonderful. But you keep a few bottles of water in the pantry. You know where your shutoff valve is. You have a plunger next to the toilet.</p>
<p>The software equivalent is: keep your critical tools running locally. Have a fallback. Know where your shutoff valve is.</p>
<h2 id="a-note-on-legal-and-contractual-risk">A Note on Legal and Contractual Risk</h2>
<p>This article has focused on developer productivity, but the stakes can be much higher.</p>
<p>If you are building software under contract — and most of us are, whether we are consultants, agency developers, or in-house teams with SLAs — a cloud AI outage is not an excuse for a missed deadline. Your client does not care that Claude was down. Your client cares that the deliverable was due on Friday and it is not done.</p>
<p>Courts have not yet established clear precedent on whether a cloud service outage constitutes force majeure for downstream obligations. If your contract says you will deliver a working system by March 31 and your AI toolchain goes down on March 28, the legal question of who bears the risk is unsettled at best.</p>
<p>The prudent approach is to treat cloud AI the same way you treat any other external dependency: plan for it to fail. If your delivery timeline depends on a service with 99.5% uptime — which is roughly what most cloud AI providers achieve — that means you will experience roughly 44 hours of downtime per year. Almost two full days. Can your project schedule absorb that?</p>
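<p>The arithmetic is worth running for your own providers' SLAs. A quick sketch:</p>

```csharp
using System;

// Expected annual downtime for a given availability target.
// 99.5% availability leaves 0.5% of the year unavailable.
double availability = 0.995;
double hoursPerYear = 365 * 24;                       // 8,760 hours
double downtimeHours = hoursPerYear * (1 - availability);

Console.WriteLine($"{downtimeHours:F1} hours of downtime per year");
// Prints: 43.8 hours of downtime per year
```

<p>Swap in your provider's actual SLA figure; even a "four nines" promise still leaves nearly an hour per year, and most AI APIs promise far less.</p>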
<h2 id="open-weight-models-your-insurance-policy">Open-Weight Models: Your Insurance Policy</h2>
<p>The state of open-weight models in 2026 deserves its own discussion because it directly affects the viability of local fallbacks.</p>
<p>Meta's Llama 4 family includes an 8B parameter model that runs comfortably on a laptop with 16 GB of RAM. For code generation, instruction following, and general-purpose chat, it is shockingly good. It will not match Claude Opus on complex reasoning tasks, but for 90% of the prompts a working developer sends on an average day — &quot;refactor this method,&quot; &quot;write a unit test for this class,&quot; &quot;explain this error message&quot; — it is entirely adequate.</p>
<p>Qwen 3 from Alibaba includes specialized coding variants that rival much larger models on programming benchmarks. DeepSeek V3 excels at mathematical reasoning. Mistral Large 3 handles multilingual tasks well. OpenAI itself released gpt-oss, its first open-weight models since GPT-2, with a 120B parameter version that runs on a single 80 GB GPU.</p>
<p>The point is that &quot;local AI&quot; no longer means &quot;toy AI.&quot; The gap between cloud-hosted frontier models and locally-runnable open-weight models has narrowed dramatically. For many practical tasks, the local model is good enough — and &quot;good enough and available&quot; always beats &quot;excellent and unavailable.&quot;</p>
<h2 id="conclusion-keep-a-toilet-at-home">Conclusion: Keep a Toilet at Home</h2>
<p>The cloud is not going away, and it should not. Managed services are one of the great productivity multipliers of modern software development. But we have overcorrected. We have outsourced so much to the cloud that many of us literally cannot do our jobs when the cloud has a bad day.</p>
<p>The fix is not complicated. It is the same engineering discipline we apply to every other part of our systems: assume failure, build fallbacks, degrade gracefully.</p>
<p>Define your AI contracts as interfaces. Implement a cloud-primary, local-fallback architecture. Use circuit breakers to route traffic automatically. Install Ollama and pull a model. Test your fallback regularly.</p>
<p>And for everything that truly matters — keep a toilet at home.</p>
]]></content:encoded>
      <category>cloud</category>
      <category>ai</category>
      <category>architecture</category>
      <category>resilience</category>
      <category>aspnet</category>
      <category>opinion</category>
    </item>
    <item>
      <title>Why QA Matters More Than Ever: The Case for Slowing Down in a World of AI-Generated Code</title>
      <link>https://observermagazine.github.io/blog/why-qa-matters-more-than-ever</link>
      <description>As AI tools accelerate code output by 76 percent and change failure rates climb by 30 percent, the argument for dedicated QA has never been stronger. This deep dive explores why quality assurance is not a luxury — it is the last line of defense between your users and an avalanche of untested code.</description>
      <pubDate>Sun, 29 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/why-qa-matters-more-than-ever</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="introduction-the-four-clicks-that-brought-down-staging">Introduction: The Four Clicks That Brought Down Staging</h2>
<p>Picture this. It is a Thursday afternoon. Your team has been shipping features at a pace that would have been unimaginable two years ago. The sprint review is tomorrow. CI is green. Code coverage is at 82 percent. Static analysis is clean. The tech lead has signed off on every pull request. Life is good.</p>
<p>Then the QA engineer sits down with the staging build, clicks four buttons in a specific sequence with roughly the right timing, and the application throws an unhandled exception. Every single time. Not a flaky test. Not a cosmic ray. A reproducible, deterministic crash that has been lurking in the codebase since Tuesday's merge.</p>
<p>Should this have been caught before a single line of code was written? Absolutely. Should the requirements document have specified the interaction between those four UI elements? Without question. Should a unit test have caught it? An integration test? An end-to-end test? A code review? Maybe — but none of them did. The only thing that caught it was a human being who thought like a user, explored the application like a user, and broke it like a user. That human being was a QA engineer.</p>
<p>This is not a hypothetical. Scenarios like this happen every week in teams across the industry, including ours. And as we barrel headlong into a world where AI generates an ever-growing share of our code, these scenarios are not becoming less common. They are becoming more common. The question is no longer whether your team needs QA. The question is whether your team can survive without it.</p>
<h2 id="part-1-the-utopian-vision-and-why-it-falls-apart">Part 1: The Utopian Vision (and Why It Falls Apart)</h2>
<p>There is a beautiful vision of software development that has circulated through conference talks and management consulting decks for the better part of two decades. It goes something like this: if wishes were fishes, QA engineers would not need to exist as a separate discipline. Every team would be truly cross-functional. Every developer would write perfect tests. Every product manager would produce requirements so precise that ambiguity would be impossible. Every team member could do any work that might be needed, and anyone could take time off at any moment because the team has full coverage. The world would be a beautiful place.</p>
<p>This vision is not entirely wrong. Cross-functional teams are genuinely better than siloed ones. Developers who write tests produce better code than developers who do not. Shift-left testing — catching bugs earlier in the development lifecycle — is a real and valuable practice. These ideas have merit, and the best teams in the world incorporate all of them.</p>
<p>But the vision falls apart when it collides with reality. Here is why.</p>
<h3 id="human-cognition-has-limits">Human Cognition Has Limits</h3>
<p>When a developer writes a feature and then writes the tests for that feature, they are testing their own mental model of how the feature works. This is valuable, but it is inherently limited. The developer knows what the code is supposed to do, and they write tests that verify the code does what they intended. What they rarely test is the space between their intention and the user's expectation.</p>
<p>This is not a character flaw. It is a well-documented cognitive bias called the &quot;curse of knowledge.&quot; Once you know how something works internally, it becomes genuinely difficult to imagine how someone who does not know would interact with it. A QA engineer who did not write the code approaches the feature with fresh eyes, different assumptions, and — critically — a different mental model. They think about what happens when the user double-clicks instead of single-clicks. They think about what happens when the user navigates backward. They think about what happens when the user leaves the page open for 45 minutes and then tries to submit a form.</p>
<h3 id="cross-functional-does-not-mean-interchangeable">Cross-Functional Does Not Mean Interchangeable</h3>
<p>The Agile manifesto encourages cross-functional teams, but cross-functional does not mean every person does every job. A cross-functional team has all the skills needed to deliver a feature. That includes development, design, testing, operations, and domain expertise. The idea that a developer can simply &quot;also do QA&quot; is as reductive as saying a QA engineer can &quot;also write the backend.&quot; People have specializations for a reason. A senior QA engineer has spent years developing an intuition for where bugs hide, what edge cases matter, and how users actually behave. That intuition is not something you acquire by adding a few test cases to your pull request.</p>
<h3 id="coverage-numbers-lie">Coverage Numbers Lie</h3>
<p>Here is a dirty secret about test coverage: 100 percent code coverage does not mean your application works. It means every line of code was executed during a test. It says nothing about whether the right assertions were made, whether the test inputs were meaningful, or whether the interactions between components were exercised. You can have 100 percent line coverage and still have a race condition that only manifests when two specific API calls arrive within three milliseconds of each other.</p>
<p>Consider this seemingly innocent ASP.NET controller action:</p>
<pre><code class="language-csharp">[HttpPost(&quot;transfer&quot;)]
public async Task&lt;IActionResult&gt; TransferFunds(TransferRequest request)
{
    var sourceAccount = await _db.Accounts
        .FirstOrDefaultAsync(a =&gt; a.Id == request.SourceAccountId);

    if (sourceAccount is null)
        return NotFound(&quot;Source account not found&quot;);

    if (sourceAccount.Balance &lt; request.Amount)
        return BadRequest(&quot;Insufficient funds&quot;);

    var destinationAccount = await _db.Accounts
        .FirstOrDefaultAsync(a =&gt; a.Id == request.DestinationAccountId);

    if (destinationAccount is null)
        return NotFound(&quot;Destination account not found&quot;);

    sourceAccount.Balance -= request.Amount;
    destinationAccount.Balance += request.Amount;

    await _db.SaveChangesAsync();
    return Ok();
}
</code></pre>
<p>This code will pass every unit test you throw at it. It reads cleanly. It handles nulls. It validates the balance. A code reviewer would likely approve it without comment. But it has a race condition hiding in plain sight. If two concurrent requests arrive to transfer funds from the same account, both requests can read the balance before either has decremented it, and the account ends up in an inconsistent state. The balance check passes for both requests, but the account is debited twice, potentially going negative.</p>
<p>A unit test will never catch this because unit tests run sequentially. An integration test might not catch it because reproducing the timing is difficult in an automated test. But a QA engineer who has seen this pattern before, who knows to open two browser tabs and click &quot;Submit&quot; in rapid succession? They will find it in minutes.</p>
<h2 id="part-2-the-ai-amplification-effect">Part 2: The AI Amplification Effect</h2>
<p>If the case for QA was strong before the AI revolution, it has become overwhelming since. The numbers are staggering.</p>
<h3 id="the-output-explosion">The Output Explosion</h3>
<p>AI coding tools have fundamentally changed the volume, velocity, and risk profile of code entering the pipeline. The average developer now submits approximately 7,800 lines of code per month, up from roughly 4,450, representing a 76 percent increase in output per person. For mid-size teams, the increase is even more dramatic. Pull requests per author have risen significantly, while review capacity has not scaled to match.</p>
<p>This is not a criticism of AI tools. They are genuinely useful. They help developers write boilerplate faster, explore unfamiliar APIs, and prototype ideas quickly. But every line of AI-generated code is a line that needs to be tested, reviewed, and understood. And the evidence suggests that the testing capacity of most organizations has not kept pace with the output increase.</p>
<h3 id="failure-rates-are-climbing">Failure Rates Are Climbing</h3>
<p>Incidents per pull request have increased by 23.5 percent, and change failure rates have risen roughly 30 percent. This is the predictable consequence of producing more code without proportionally increasing the investment in verification. The bottleneck has shifted. It is no longer creation — it is verification.</p>
<h3 id="ai-code-has-a-specific-bug-profile">AI Code Has a Specific Bug Profile</h3>
<p>AI-generated code tends to produce a particular category of bugs that are difficult for automated tests to catch. These bugs arise because large language models optimize for plausibility, not correctness. The code looks right. It follows patterns the model has seen in training data. It compiles. It passes lint. But it may contain subtle logical errors, incorrect assumptions about API behavior, or security vulnerabilities that only surface under specific conditions.</p>
<p>AI-produced code can hide subtle performance bugs, security gaps, or odd logic patterns that only surface under real pressure. Some QA teams have responded by creating specialized checklists for reviewing AI-generated code — things to look for when the code was written by a model rather than a person.</p>
<p>Consider a real-world scenario. A developer asks an AI tool to generate a caching layer for an ASP.NET application. The AI produces something like this:</p>
<pre><code class="language-csharp">public class UserCacheService
{
    private static readonly Dictionary&lt;int, UserDto&gt; _cache = new();
    private readonly IUserRepository _repository;

    public UserCacheService(IUserRepository repository)
    {
        _repository = repository;
    }

    public async Task&lt;UserDto&gt; GetUserAsync(int userId)
    {
        if (_cache.TryGetValue(userId, out var cached))
            return cached;

        var user = await _repository.GetByIdAsync(userId);
        if (user is not null)
            _cache[userId] = user;

        return user;
    }
}
</code></pre>
<p>This code looks perfectly reasonable. It compiles. It has clear intent. A quick code review might approve it. But it has at least three problems that a QA engineer would eventually surface:</p>
<ol>
<li><p>The <code>Dictionary&lt;int, UserDto&gt;</code> is not thread-safe. In an ASP.NET application where multiple requests hit this service concurrently, you will get corrupted state, lost updates, or <code>InvalidOperationException</code> from concurrent enumeration. The fix is <code>ConcurrentDictionary&lt;int, UserDto&gt;</code>.</p>
</li>
<li><p>The cache never expires. Once a user is loaded, the cached version is served forever, even if the underlying data changes. In a long-running application, this leads to stale data bugs that are maddening to diagnose.</p>
</li>
<li><p>When the cache misses, there is no protection against the thundering herd problem. If a hundred requests arrive simultaneously for the same uncached user, all hundred will hit the database. The fix is to use <code>SemaphoreSlim</code> or a library like <code>LazyCache</code> that provides lock-per-key semantics.</p>
</li>
</ol>
<p>None of these bugs will appear in a unit test that exercises the method once with a single thread. They appear when a QA engineer puts the application under realistic load, navigates aggressively, and watches for inconsistencies over time.</p>
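<p>For contrast, here is one way all three issues could be addressed at once. This is a hedged sketch, not the only fix: it keeps the <code>IUserRepository</code> interface from the example above and stores <code>Lazy&lt;Task&lt;...&gt;&gt;</code> entries in a <code>ConcurrentDictionary</code> so that concurrent callers for the same key share a single load. A production service would more likely build on <code>IMemoryCache</code> or a library such as <code>LazyCache</code>.</p>
<pre><code class="language-csharp">public class UserCacheService
{
    private sealed record CacheEntry(UserDto? User, DateTime CachedAt);

    // Thread-safe map; Lazy guarantees a single load per key (fixes 1 and 3)
    private readonly ConcurrentDictionary&lt;int, Lazy&lt;Task&lt;CacheEntry&gt;&gt;&gt; _cache = new();
    private readonly IUserRepository _repository;
    private static readonly TimeSpan Ttl = TimeSpan.FromMinutes(5); // fixes 2

    public UserCacheService(IUserRepository repository) =&gt; _repository = repository;

    public async Task&lt;UserDto?&gt; GetUserAsync(int userId)
    {
        var entry = await GetEntryAsync(userId);

        if (DateTime.UtcNow - entry.CachedAt &gt; Ttl)
        {
            // Expired: evict, then load a fresh entry
            _cache.TryRemove(userId, out _);
            entry = await GetEntryAsync(userId);
        }

        return entry.User;
    }

    private Task&lt;CacheEntry&gt; GetEntryAsync(int userId) =&gt;
        _cache.GetOrAdd(userId, id =&gt; new Lazy&lt;Task&lt;CacheEntry&gt;&gt;(async () =&gt;
            new CacheEntry(await _repository.GetByIdAsync(id), DateTime.UtcNow))).Value;
}
</code></pre>
<p>One caveat with this pattern: a failed load stays cached until the entry expires, so real code would also evict entries whose task faulted.</p>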
<h2 id="part-3-the-testing-pyramid-is-necessary-but-not-sufficient">Part 3: The Testing Pyramid Is Necessary but Not Sufficient</h2>
<p>Every developer is taught the testing pyramid early in their career. Unit tests at the base. Integration tests in the middle. End-to-end tests at the top. More of the cheap, fast tests. Fewer of the expensive, slow ones. It is a useful mental model, and teams that follow it are better off than teams that do not.</p>
<p>But the pyramid has a blind spot: it assumes that the thing being tested is well-specified to begin with. If the requirements are ambiguous, the unit tests will faithfully verify the wrong behavior. If the interaction between two components was never documented, no integration test will cover it. If the user experience depends on timing, animation state, or the order of asynchronous operations, end-to-end tests may not be deterministic enough to catch the problem.</p>
<h3 id="unit-tests-the-foundation">Unit Tests: The Foundation</h3>
<p>Unit tests are the bedrock of any quality strategy. In a .NET project, they are fast, isolated, and give you immediate feedback when a method's contract changes. Here is a typical example from our own codebase:</p>
<pre><code class="language-csharp">[Fact]
public void FrontMatter_ParsesAllFields()
{
    var markdown = &quot;&quot;&quot;
        ---
        title: Test Post
        date: 2026-03-01
        author: observer-team
        summary: A test summary
        tags:
          - test
          - integration
        featured: true
        series: Test Series
        image: /images/test.jpg
        ---
        ## Hello

        This is the body.
        &quot;&quot;&quot;;

    var (frontMatter, body) = ParseFrontMatter(markdown);

    Assert.Equal(&quot;Test Post&quot;, frontMatter.Title);
    Assert.Equal(new DateTime(2026, 3, 1), frontMatter.Date);
    Assert.Equal(&quot;observer-team&quot;, frontMatter.Author);
    Assert.Equal(&quot;A test summary&quot;, frontMatter.Summary);
    Assert.Equal([&quot;test&quot;, &quot;integration&quot;], frontMatter.Tags);
    Assert.True(frontMatter.Featured);
    Assert.Contains(&quot;## Hello&quot;, body);
}
</code></pre>
<p>This test is valuable. It verifies that the YAML front matter parser correctly extracts all fields from a well-formed markdown file. It runs in milliseconds and catches regressions instantly. But it tests the happy path with valid input. What happens when the front matter is malformed? When the date is in an unexpected format? When a field contains Unicode characters? When the YAML indentation is inconsistent? Each of these is a separate test case that someone needs to think of. The developer who wrote the parser thought of some of them. The QA engineer who tests the blog pipeline will think of others.</p>
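<p>Those unhappy paths can be pinned down the moment someone thinks of them. Here is a hedged sketch reusing the same <code>ParseFrontMatter</code> helper; the exact tolerance policy is the parser's to define, so this version only asserts that malformed input degrades gracefully instead of throwing:</p>
<pre><code class="language-csharp">[Theory]
[InlineData(&quot;No front matter at all&quot;)]                 // missing delimiters
[InlineData(&quot;---\ntitle: Only a Title\n---\nBody&quot;)]    // most fields absent
[InlineData(&quot;---\ndate: not-a-date\n---\nBody&quot;)]       // unparseable date
[InlineData(&quot;---\ntitle: Café ☕\n---\nBody&quot;)]          // Unicode content
public void FrontMatter_ToleratesMalformedInput(string markdown)
{
    var ex = Record.Exception(() =&gt; ParseFrontMatter(markdown));
    Assert.Null(ex);
}
</code></pre>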
<h3 id="integration-tests-verifying-the-seams">Integration Tests: Verifying the Seams</h3>
<p>Integration tests verify that components work together correctly. They are more expensive to write and maintain, but they catch a different category of bugs — the ones that live in the seams between components.</p>
<pre><code class="language-csharp">[Fact]
public void Rss_ContainsCategoriesFromTags()
{
    var posts = new[]
    {
        new RssPostEntry
        {
            Slug = &quot;test&quot;,
            Title = &quot;Test&quot;,
            Date = DateTime.UtcNow,
            Summary = &quot;Summary&quot;,
            Tags = [&quot;alpha&quot;, &quot;beta&quot;]
        }
    };

    var rssXml = GenerateRss(&quot;Test Blog&quot;, &quot;Desc&quot;, &quot;https://example.com&quot;, posts);

    var doc = XDocument.Parse(rssXml);
    var categories = doc.Descendants(&quot;item&quot;)
        .First()
        .Elements(&quot;category&quot;)
        .Select(c =&gt; c.Value)
        .ToArray();

    Assert.Equal([&quot;alpha&quot;, &quot;beta&quot;], categories);
}
</code></pre>
<p>This test verifies that the RSS generator correctly maps post tags to RSS category elements. It exercises the full RSS generation pipeline, including XML serialization. But it still operates on controlled data. It does not test what happens when the RSS feed is consumed by an actual RSS reader, or when the feed contains a post with a title that includes an ampersand, or when the feed is fetched over HTTP with gzip compression.</p>
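<p>The ampersand case in particular is cheap to cover. A hedged sketch against the same <code>GenerateRss</code> helper: if the generator fails to escape the character, <code>XDocument.Parse</code> will throw, and if it double-escapes, the round-tripped title will not match.</p>
<pre><code class="language-csharp">[Fact]
public void Rss_EscapesAmpersandInTitle()
{
    var posts = new[]
    {
        new RssPostEntry
        {
            Slug = &quot;amp&quot;,
            Title = &quot;Salt &amp; Pepper&quot;,
            Date = DateTime.UtcNow,
            Summary = &quot;Summary&quot;,
            Tags = []
        }
    };

    var rssXml = GenerateRss(&quot;Test Blog&quot;, &quot;Desc&quot;, &quot;https://example.com&quot;, posts);

    var doc = XDocument.Parse(rssXml); // throws on an unescaped ampersand
    var title = doc.Descendants(&quot;item&quot;).First().Element(&quot;title&quot;)?.Value;

    Assert.Equal(&quot;Salt &amp; Pepper&quot;, title);
}
</code></pre>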
<h3 id="end-to-end-tests-simulating-the-user">End-to-End Tests: Simulating the User</h3>
<p>End-to-end tests simulate real user interactions. In the Blazor WebAssembly world, tools like bUnit let you render components and assert on the resulting HTML:</p>
<pre><code class="language-csharp">[Fact]
public void BlogPage_RendersPostList()
{
    // Arrange - register services, configure HttpClient mock
    // Act - render the Blog component
    // Assert - verify the correct post titles appear in the DOM
}
</code></pre>
<p>These tests are valuable for verifying that components render correctly and respond to user interaction. But they still operate within the test harness. They do not exercise the full download-parse-render cycle of a Blazor WebAssembly application in a real browser. They do not account for network latency, browser differences, viewport sizes, or the fact that users sometimes click faster than the framework can handle.</p>
<h3 id="the-missing-layer-exploratory-testing">The Missing Layer: Exploratory Testing</h3>
<p>This is where dedicated QA shines. Exploratory testing is not random clicking. It is a disciplined practice where a tester simultaneously learns about the application, designs tests, and executes them. It is guided by experience, intuition, and a mental model of where bugs tend to hide.</p>
<p>An experienced QA engineer testing a new blog feature might:</p>
<ul>
<li>Try to publish a post with a future date and verify it does not appear</li>
<li>Create a post with a title that is 500 characters long</li>
<li>Paste formatted text from Microsoft Word into the markdown editor</li>
<li>Navigate to a blog post, hit the back button, and verify the blog index state is preserved</li>
<li>Open the same blog post in two tabs and check for inconsistencies</li>
<li>Test on a slow network connection to see how the loading state behaves</li>
<li>Rapidly switch between themes while a blog post is loading</li>
<li>Try to access a blog post URL that does not exist</li>
<li>Submit a form with JavaScript disabled</li>
<li>Test keyboard navigation for accessibility compliance</li>
</ul>
<p>No automated test suite would cover all of these scenarios unless someone first thought to write them. And the person most likely to think of them is the person whose entire job is thinking about how software can break.</p>
<h2 id="part-4-concurrency-bugs-the-qa-engineers-specialty">Part 4: Concurrency Bugs — The QA Engineer's Specialty</h2>
<p>Concurrency bugs deserve their own section because they represent the quintessential category of defect that automated tests miss and QA engineers find. They are the most insidious bugs in web development, and modern ASP.NET applications are especially vulnerable to them because of the inherent concurrency of HTTP request processing.</p>
<h3 id="why-concurrency-bugs-are-hard">Why Concurrency Bugs Are Hard</h3>
<p>Concurrency bugs are non-deterministic. They depend on the timing of thread execution, which is controlled by the operating system scheduler — not by your code. A race condition might manifest once in a thousand requests, or only under specific load conditions, or only when the garbage collector happens to pause a thread at exactly the wrong moment.</p>
<p>This non-determinism makes them nearly impossible to reproduce in a development environment where you are the only user. They pass all unit tests because unit tests run sequentially. They often pass integration tests because the test environment has less contention than production. They surface in staging or production when real users generate real concurrent load.</p>
<h3 id="a-catalog-of-common-asp.net-concurrency-bugs">A Catalog of Common ASP.NET Concurrency Bugs</h3>
<p>Here are patterns that QA engineers should know about and actively test for.</p>
<p><strong>The Double-Submit Problem.</strong> A user clicks the &quot;Submit&quot; button twice in quick succession. If the server does not implement idempotency, two records are created. This is especially dangerous for financial transactions, order placements, and any operation with real-world side effects. The fix involves a combination of client-side button disabling, server-side idempotency keys, and database-level unique constraints.</p>
<pre><code class="language-csharp">// Vulnerable: no idempotency protection
[HttpPost(&quot;orders&quot;)]
public async Task&lt;IActionResult&gt; CreateOrder(CreateOrderRequest request)
{
    var order = new Order
    {
        CustomerId = request.CustomerId,
        Items = request.Items,
        CreatedAt = DateTime.UtcNow
    };
    _db.Orders.Add(order);
    await _db.SaveChangesAsync();
    return Created($&quot;/orders/{order.Id}&quot;, order);
}

// Fixed: idempotency key prevents duplicate creation
// (back the lookup below with a unique index on IdempotencyKey so that
// two concurrent first-time requests cannot both pass the check)
[HttpPost(&quot;orders&quot;)]
public async Task&lt;IActionResult&gt; CreateOrder(
    [FromHeader(Name = &quot;Idempotency-Key&quot;)] string idempotencyKey,
    CreateOrderRequest request)
{
    var existing = await _db.Orders
        .FirstOrDefaultAsync(o =&gt; o.IdempotencyKey == idempotencyKey);

    if (existing is not null)
        return Ok(existing); // Return the existing order, not a duplicate

    var order = new Order
    {
        IdempotencyKey = idempotencyKey,
        CustomerId = request.CustomerId,
        Items = request.Items,
        CreatedAt = DateTime.UtcNow
    };

    _db.Orders.Add(order);
    await _db.SaveChangesAsync();
    return Created($&quot;/orders/{order.Id}&quot;, order);
}
</code></pre>
<p><strong>The Read-Modify-Write Race.</strong> This is the fund transfer example from earlier. Whenever your code reads a value, makes a decision based on that value, and then writes an updated value back, there is a window between the read and the write where another thread can change the data. In Entity Framework, the fix is optimistic concurrency control using a row version column:</p>
<pre><code class="language-csharp">public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }

    [Timestamp]
    public byte[] RowVersion { get; set; } = [];
}
</code></pre>
<p>With this in place, if two concurrent requests try to update the same account, one of them will get a <code>DbUpdateConcurrencyException</code>, which you can catch and retry or report to the user. The important thing is that the data stays consistent.</p>
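<p>What does &quot;catch and retry&quot; look like in practice? Here is a hedged sketch of the transfer action from Part 1 reworked around the <code>RowVersion</code> column; the retry limit and the re-validation step are illustrative choices, not a prescription:</p>
<pre><code class="language-csharp">for (var attempt = 0; attempt &lt; 3; attempt++)
{
    try
    {
        sourceAccount.Balance -= request.Amount;
        destinationAccount.Balance += request.Amount;
        await _db.SaveChangesAsync(); // throws if a RowVersion no longer matches
        return Ok();
    }
    catch (DbUpdateConcurrencyException ex)
    {
        // Another request changed an account between our read and our write.
        // Reload the current values and re-run the business checks.
        foreach (var entry in ex.Entries)
            await entry.ReloadAsync();

        if (sourceAccount.Balance &lt; request.Amount)
            return BadRequest(&quot;Insufficient funds&quot;);
    }
}

return Conflict(&quot;The account is busy; please retry the transfer.&quot;);
</code></pre>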
<p><strong>The Stale Cache Thundering Herd.</strong> When a cache entry expires and many concurrent requests arrive for the same data simultaneously, all of them miss the cache and hit the underlying data source at once. This can bring down a database or overwhelm an external API. The fix is to use a cache implementation that supports lock-per-key, so only one thread refreshes the cache while others wait for the result.</p>
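<p>A minimal lock-per-key sketch, using hypothetical <code>IMemoryCache</code> and repository names, might look like this: the first caller to miss takes the key's semaphore and refreshes the entry, while everyone else waits and then reads the refreshed value.</p>
<pre><code class="language-csharp">private readonly ConcurrentDictionary&lt;string, SemaphoreSlim&gt; _locks = new();
private readonly IMemoryCache _cache;
private readonly IProductRepository _repository;

public async Task&lt;Product?&gt; GetProductAsync(string key)
{
    if (_cache.TryGetValue(key, out Product? cached))
        return cached;

    var gate = _locks.GetOrAdd(key, _ =&gt; new SemaphoreSlim(1, 1));
    await gate.WaitAsync();
    try
    {
        // Double-check: another waiter may have refreshed while we queued
        if (_cache.TryGetValue(key, out cached))
            return cached;

        var product = await _repository.GetByKeyAsync(key); // single DB hit
        if (product is not null)
            _cache.Set(key, product, TimeSpan.FromMinutes(5));
        return product;
    }
    finally
    {
        gate.Release();
    }
}
</code></pre>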
<p><strong>The Shared Mutable State.</strong> Any <code>static</code> field or singleton-scoped service that holds mutable state is a concurrency bug waiting to happen. In ASP.NET's dependency injection system, services registered as <code>Singleton</code> persist for the lifetime of the application and are shared across all requests. If those services hold mutable state without synchronization, you have a race condition.</p>
<pre><code class="language-csharp">// Dangerous: static mutable state with no synchronization
public class RequestCounter
{
    private static int _count = 0;

    public int Increment() =&gt; _count++; // Not thread-safe!
}

// Fixed: use Interlocked for atomic operations
public class RequestCounter
{
    private static int _count = 0;

    public int Increment() =&gt; Interlocked.Increment(ref _count);
}
</code></pre>
<h3 id="how-qa-engineers-find-concurrency-bugs">How QA Engineers Find Concurrency Bugs</h3>
<p>QA engineers find concurrency bugs through a combination of techniques:</p>
<ol>
<li><p><strong>Rapid interaction testing.</strong> Double-clicking buttons, rapidly navigating between pages, submitting forms multiple times, and using the browser's back and forward buttons aggressively.</p>
</li>
<li><p><strong>Multi-tab and multi-browser testing.</strong> Opening the same application in multiple tabs or browsers and performing conflicting operations simultaneously. This is the simplest way to simulate concurrent users.</p>
</li>
<li><p><strong>Slow network simulation.</strong> Using browser developer tools to throttle the network connection, which widens the timing windows where race conditions can occur.</p>
</li>
<li><p><strong>Load testing.</strong> Using tools like k6, JMeter, or NBomber to simulate realistic concurrent load. This is where race conditions that only appear under contention become visible.</p>
</li>
<li><p><strong>State inspection.</strong> Checking database records, cache entries, and log files after performing concurrent operations to verify that the data is consistent.</p>
</li>
<li><p><strong>Session testing.</strong> Logging in as two different users and performing operations that interact with the same data, verifying that one user's actions do not corrupt another user's experience.</p>
</li>
</ol>
<h2 id="part-5-the-economics-of-quality">Part 5: The Economics of Quality</h2>
<p>There is a widely cited claim, often attributed to IBM's Systems Sciences Institute, that a bug found in production is 100 times more expensive to fix than one found during the design phase. The original source of this specific figure has been questioned — researchers have noted that the underlying data may trace back to internal IBM training materials from the early 1980s, and the exact multiplier has never been independently verified.</p>
<p>But even if the precise number is debatable, the directional truth is not. Bugs found later in the development lifecycle are more expensive to fix. This is true for straightforward reasons that do not require an academic study to understand:</p>
<ul>
<li>A bug found during code review requires the developer to fix the code. Cost: minutes to hours.</li>
<li>A bug found during QA testing requires a bug report, a context switch for the developer, a fix, a re-test, and possibly a new build. Cost: hours to a day.</li>
<li>A bug found in production requires all of the above plus incident response, customer communication, possible data remediation, hotfix deployment, and post-incident review. Cost: days to weeks, plus reputational damage that is difficult to quantify.</li>
</ul>
<p>The Consortium for Information and Software Quality (CISQ) estimated in their 2022 report that the cost of poor software quality in the United States has reached approximately $2.41 trillion. That figure includes operational failures, software vulnerabilities, technical debt, and the direct cost of defects. Even if you discount the number heavily, the scale is sobering.</p>
<h3 id="the-qa-return-on-investment">The QA Return on Investment</h3>
<p>A dedicated QA engineer's salary is a known, fixed cost. The cost of the bugs they prevent is variable but potentially enormous. Consider:</p>
<ul>
<li>A single production outage at a mid-size company can cost tens of thousands of dollars per hour in lost revenue and customer goodwill.</li>
<li>A security vulnerability that leads to a data breach can cost millions in fines, remediation, and legal fees.</li>
<li>A series of small, annoying bugs that erode user trust can lead to churn that compounds over months, resulting in losses that dwarf the cost of a QA team.</li>
</ul>
<p>The math is not complicated. If a QA engineer prevents even one significant production incident per quarter, they have almost certainly paid for themselves. If they catch a security vulnerability before it ships, they have paid for themselves many times over.</p>
<h3 id="ai-testing-tools-are-helpful-but-not-sufficient">AI Testing Tools Are Helpful but Not Sufficient</h3>
<p>There is a growing ecosystem of AI-powered testing tools that can generate test cases, detect flaky tests, self-heal broken selectors, and prioritize test execution based on risk. These tools are genuinely useful, and teams should evaluate and adopt them where they add value.</p>
<p>But AI testing tools have the same fundamental limitation as AI coding tools: they optimize for patterns they have seen before. They are excellent at generating variations of known test scenarios. They are poor at imagining entirely new categories of failure. They cannot think about whether the user experience &quot;feels right.&quot; They cannot notice that the loading spinner disappears 200 milliseconds before the content appears, creating a disconcerting flash. They cannot tell you that the error message is technically accurate but emotionally tone-deaf.</p>
<p>In a survey of experienced testing professionals, 67 percent said they would trust AI-generated tests, but only with human review. That finding captures the state of the industry perfectly: AI is a powerful tool for QA, but it is not a replacement for QA.</p>
<h2 id="part-6-practical-recommendations-for-asp.net-teams">Part 6: Practical Recommendations for ASP.NET Teams</h2>
<p>If you are convinced that QA matters — and if the preceding five thousand words have not convinced you, the next production outage probably will — here are concrete steps you can take to strengthen quality assurance in your ASP.NET projects.</p>
<h3 id="embed-qa-in-the-development-process-not-after-it">1. Embed QA in the Development Process, Not After It</h3>
<p>The worst QA setup is the one where developers write code for two weeks, throw it over the wall to QA, and QA files a hundred bugs. This leads to a combative relationship where developers resent QA for slowing them down and QA resents developers for producing sloppy work.</p>
<p>Instead, involve QA from the beginning. Have QA engineers participate in sprint planning and review the requirements before any code is written. They will spot ambiguities, missing edge cases, and contradictory requirements that developers will not catch because developers are thinking about implementation, not usage.</p>
<h3 id="automate-the-boring-parts">2. Automate the Boring Parts</h3>
<p>There are categories of testing that machines do better than humans: regression testing, performance testing, accessibility scanning, security scanning, and API contract verification. Automate these aggressively. Use tools like:</p>
<ul>
<li><strong>xUnit and bUnit</strong> for unit and component tests in your .NET projects</li>
<li><strong>NBomber</strong> or <strong>k6</strong> for load testing</li>
<li><strong>Playwright</strong> or <strong>Selenium</strong> for browser-based end-to-end tests</li>
<li><strong>OWASP ZAP</strong> for security scanning</li>
<li><strong>axe-core</strong> or <strong>Lighthouse</strong> for accessibility auditing</li>
<li><strong>Pact</strong> or <strong>contract testing libraries</strong> for verifying API compatibility</li>
</ul>
<p>Automation frees your QA engineers to do what humans do best: think creatively about how the software can break.</p>
<h3 id="write-tests-at-every-level">3. Write Tests at Every Level</h3>
<p>In the .NET ecosystem, a healthy test suite includes:</p>
<p><strong>Unit tests</strong> that verify individual methods and classes in isolation. Register services with mock dependencies and assert on return values and state changes.</p>
<p><strong>Component tests with bUnit</strong> that render Blazor components and verify the DOM output, event handling, and component lifecycle.</p>
<pre><code class="language-csharp">[Fact]
public void Counter_IncrementButton_UpdatesCount()
{
    using var ctx = new BunitContext();
    var cut = ctx.Render&lt;Counter&gt;();

    cut.Find(&quot;button&quot;).Click();

    cut.Find(&quot;p&quot;).TextContent.MarkupMatches(&quot;Current count: 1&quot;);
}
</code></pre>
<p><strong>Integration tests</strong> that verify the content processing pipeline, RSS generation, database queries, and API endpoints.</p>
<p><strong>End-to-end tests</strong> that exercise the deployed application in a real browser, verifying navigation, routing, and full-page rendering.</p>
<h3 id="make-tests-fast-and-reliable">4. Make Tests Fast and Reliable</h3>
<p>Tests that take minutes to run get run less often. Tests that are flaky get ignored. Both outcomes are worse than having no tests at all, because they give you false confidence.</p>
<p>In our Observer Magazine project, the entire test suite runs in under ten seconds:</p>
<pre><code>dotnet test
</code></pre>
<p>This is fast enough to run after every change. If your test suite takes longer than 30 seconds, invest in making it faster. Parallelize test execution. Replace slow database tests with in-memory alternatives. Split tests into &quot;fast&quot; and &quot;slow&quot; categories and run the fast ones on every commit, the slow ones on every merge to main.</p>
<h3 id="implement-concurrency-testing-as-a-first-class-practice">5. Implement Concurrency Testing as a First-Class Practice</h3>
<p>Do not wait for concurrency bugs to find you. Actively hunt them.</p>
<p>Write tests that exercise concurrent scenarios:</p>
<pre><code class="language-csharp">[Fact]
public async Task ConcurrentTransfers_DoNotCorruptBalance()
{
    // Arrange: create an account with $1000
    var account = new Account { Balance = 1000m };
    await _db.Accounts.AddAsync(account);
    await _db.SaveChangesAsync();

    // Act: attempt 100 concurrent $10 transfers
    // (TransferAsync must open its own DbContext per call; a shared
    // DbContext instance is not thread-safe)
    var tasks = Enumerable.Range(0, 100)
        .Select(_ =&gt; TransferAsync(account.Id, 10m))
        .ToList(); // materialize so every transfer starts before we await

    await Task.WhenAll(tasks);

    // Assert: balance should never go negative
    await _db.Entry(account).ReloadAsync();
    Assert.True(account.Balance &gt;= 0);
}
</code></pre>
<p>This kind of test will not catch every race condition — the timing is still somewhat controlled — but it catches many of them and serves as a regression guard once a concurrency bug is fixed.</p>
<h3 id="use-opentelemetry-to-make-bugs-visible">6. Use OpenTelemetry to Make Bugs Visible</h3>
<p>Structured logging and distributed tracing make bugs easier to find and faster to diagnose. In a .NET application, OpenTelemetry integration gives you visibility into request timing, exception rates, and dependency failures.</p>
<p>When a QA engineer reports a bug, having detailed traces and structured logs means the developer can reproduce the conditions precisely rather than guessing. This reduces the back-and-forth between QA and development and shortens the fix cycle.</p>
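<p>As a rough sketch of what that wiring can look like in <code>Program.cs</code> (package names and the exporter choice vary by project; this assumes the <code>OpenTelemetry.Extensions.Hosting</code> and ASP.NET Core instrumentation packages):</p>
<pre><code class="language-csharp">builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource =&gt; resource.AddService(&quot;observer-api&quot;))
    .WithTracing(tracing =&gt; tracing
        .AddAspNetCoreInstrumentation()   // spans for incoming requests
        .AddHttpClientInstrumentation()   // spans for outgoing calls
        .AddOtlpExporter())               // ship to a collector
    .WithMetrics(metrics =&gt; metrics
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter());
</code></pre>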
<h3 id="test-the-unhappy-paths">7. Test the Unhappy Paths</h3>
<p>It is human nature to test that the software works when used correctly. The most valuable testing verifies what happens when it is used incorrectly. Every API endpoint should be tested with:</p>
<ul>
<li>Missing required fields</li>
<li>Fields with the wrong data type</li>
<li>Fields with boundary values (zero, negative, maximum integer, empty string, very long strings)</li>
<li>Malformed JSON</li>
<li>Missing or expired authentication tokens</li>
<li>Requests that exceed rate limits</li>
<li>Concurrent requests that create conflicting state</li>
</ul>
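<p>Many of these checks automate well. Here is a hedged sketch using <code>WebApplicationFactory</code> from <code>Microsoft.AspNetCore.Mvc.Testing</code>, aimed at the transfer endpoint from Part 1; the request bodies are illustrative:</p>
<pre><code class="language-csharp">[Theory]
[InlineData(&quot;{}&quot;)]                         // missing required fields
[InlineData(&quot;{\&quot;amount\&quot;: \&quot;ten\&quot;}&quot;)]      // wrong data type
[InlineData(&quot;{\&quot;amount\&quot;: -1}&quot;)]           // boundary value
[InlineData(&quot;not json at all&quot;)]            // malformed JSON
public async Task Transfer_RejectsBadInput(string body)
{
    using var factory = new WebApplicationFactory&lt;Program&gt;();
    using var client = factory.CreateClient();

    var response = await client.PostAsync(&quot;/transfer&quot;,
        new StringContent(body, Encoding.UTF8, &quot;application/json&quot;));

    // Bad input must yield a 4xx, never a 500 or a silent 200
    Assert.True((int)response.StatusCode is &gt;= 400 and &lt; 500);
}
</code></pre>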
<h3 id="create-a-bug-taxonomy">8. Create a Bug Taxonomy</h3>
<p>Track not just the bugs you find, but the categories they fall into. Over time, you will discover patterns. Maybe your team consistently introduces concurrency bugs in services that use caching. Maybe your API validation is always missing edge cases for date fields. Maybe your Blazor components break when the user navigates away during an async operation.</p>
<p>Once you know the patterns, you can create targeted checklists, automated checks, and training materials that prevent the same categories of bugs from recurring. This is how QA transforms from a reactive function (finding bugs) to a proactive one (preventing bugs).</p>
<h2 id="part-7-the-human-element">Part 7: The Human Element</h2>
<p>There is one more dimension to QA that is rarely discussed in technical articles, and it may be the most important one: QA engineers represent the user's voice inside the development team.</p>
<p>Developers are incentivized to ship features. Product managers are incentivized to hit deadlines. Designers are incentivized to create beautiful interfaces. QA engineers are the only team members whose primary incentive is to make sure the software actually works for the person using it. They are the user's advocate, the skeptic in the room, the person who asks &quot;what happens if...&quot; when everyone else is celebrating a green build.</p>
<p>This advocacy role extends beyond bug finding. A good QA engineer will:</p>
<ul>
<li>Push back on unrealistic timelines that leave no room for testing</li>
<li>Flag when requirements are ambiguous and likely to produce bugs</li>
<li>Advocate for accessibility and internationalization</li>
<li>Insist on testing with realistic data, not just the three sample records in the dev database</li>
<li>Remind the team that &quot;works on my machine&quot; is not the same as &quot;works&quot;</li>
</ul>
<p>In an era where AI can generate code faster than humans can review it, where pull request volume is skyrocketing, and where the pressure to ship quickly has never been more intense, this advocacy role is not just nice to have. It is essential.</p>
<h2 id="part-8-qa-in-the-age-of-ai-a-practical-framework">Part 8: QA in the Age of AI — A Practical Framework</h2>
<p>The relationship between AI and QA is not adversarial. The teams that will thrive are those that use AI tools to augment their QA process, not replace it. Here is a practical framework.</p>
<h3 id="let-ai-generate-let-humans-verify">Let AI Generate, Let Humans Verify</h3>
<p>Use AI tools to generate initial test cases from requirements. Have QA engineers review, refine, and augment those test cases with edge cases and scenarios that the AI missed. This is faster than writing every test from scratch and more reliable than trusting AI-generated tests blindly.</p>
<h3 id="use-ai-for-regression-humans-for-exploration">Use AI for Regression, Humans for Exploration</h3>
<p>Automated regression suites — whether AI-generated or hand-written — are excellent at verifying that existing functionality still works. They are poor at discovering new categories of bugs. Reserve human QA effort for exploratory testing, usability testing, and testing new features where the bug landscape is unknown.</p>
<h3 id="monitor-ai-generated-code-more-closely">Monitor AI-Generated Code More Closely</h3>
<p>Some QA teams are creating specialized checklists for reviewing code written by AI models rather than people, since AI-produced code can contain subtle patterns that differ from human-written code. This is a good practice. AI-generated code tends to have specific failure modes: incorrect error handling, missing edge cases, naive concurrency assumptions, and over-reliance on patterns that were common in training data but are not appropriate for the current context.</p>
<h3 id="invest-in-qa-tooling-not-just-developer-tooling">Invest in QA Tooling, Not Just Developer Tooling</h3>
<p>Fifty percent of organizations struggle to fund the automation tools they already need for QA, even as budgets flow overwhelmingly toward developer productivity tools and AI infrastructure. This imbalance is dangerous. If you are investing in tools that help developers produce code faster, you must also invest in tools that help QA verify that code faster. Otherwise, you are building a pipeline that generates bugs more efficiently.</p>
<h2 id="conclusion-slow-down-to-speed-up">Conclusion: Slow Down to Speed Up</h2>
<p>There is a paradox at the heart of software quality: slowing down to test thoroughly actually speeds up delivery over time. Teams that skip QA ship faster in the short term but spend more time on bug fixes, hotfixes, incident response, and customer support in the long term. Teams that invest in QA ship slightly slower in the short term but spend less time on rework, enjoy higher customer satisfaction, and build a codebase that is easier to extend and maintain.</p>
<p>This paradox becomes even more pronounced in the age of AI-generated code. When code is being produced at 76 percent higher volume, when change failure rates are climbing by 30 percent, and when the code itself is generated by models that optimize for plausibility rather than correctness, the need for human verification has never been greater.</p>
<p>The four clicks that brought down our staging environment were not a failure of our test suite. They were not a failure of our code review process. They were not a failure of our CI pipeline. They were a reminder that software is used by human beings who do unpredictable things, and the best way to catch unpredictable bugs is to have a human being whose job is to think unpredictably.</p>
<p>QA is not a luxury. It is not a line item to cut when budgets are tight. It is not a phase you can skip when the deadline is approaching. In a world where AI can write code faster than humans can read it, QA is the last line of defense between your users and an avalanche of untested code.</p>
<p>Invest in it. Respect it. And whatever you do, do not ship without it.</p>
]]></content:encoded>
      <category>qa</category>
      <category>testing</category>
      <category>dotnet</category>
      <category>aspnet</category>
      <category>software-engineering</category>
      <category>ai</category>
      <category>best-practices</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>PostgreSQL, Npgsql, and Open-Source IDEs: The Definitive Guide for .NET Developers on Linux</title>
      <link>https://observermagazine.github.io/blog/postgresql-npgsql-comprehensive-guide</link>
      <description>A comprehensive, leave-no-stone-unturned guide to PostgreSQL 17 and 18, Npgsql with Dapper and EF Core, terminal workflows, configuration, transactions, networking, sessions, debugging, Docker/Podman setup, and every free open-source IDE available — all from the perspective of a .NET C# ASP.NET web developer working on Linux.</description>
      <pubDate>Sat, 28 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/postgresql-npgsql-comprehensive-guide</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>If you are a .NET developer who has spent most of your career working with SQL Server on Windows, PostgreSQL can feel like a different world. The terminology is different, the tooling is different, the configuration is different, and even the philosophical approach to certain problems diverges significantly from what you are used to. This guide is written to bridge that gap completely.</p>
<p>We are going to cover everything. Not some things. Everything. From installing PostgreSQL on bare metal Linux, a VPS, or a Docker/Podman container, to configuring it for development and production, to writing queries in the terminal, to connecting from .NET using Npgsql with both Dapper and Entity Framework Core, to understanding transactions, isolation levels, locking, connection pooling, session management, networking, debugging, and monitoring. We will also survey every free and open-source IDE and GUI tool available on Linux for working with PostgreSQL.</p>
<p>This article assumes you are running Linux (Fedora, Ubuntu, Debian, Arch, or a similar distribution). It assumes you know C# and have worked with ASP.NET. It does not assume any prior PostgreSQL experience.</p>
<p>Let us begin.</p>
<h2 id="part-1-what-is-postgresql-and-why-should-you-care">Part 1: What Is PostgreSQL and Why Should You Care?</h2>
<p>PostgreSQL is a free, open-source, object-relational database management system. It has been under active development since 1986, originating from the POSTGRES project at the University of California, Berkeley. The &quot;SQL&quot; was appended to the name in 1996 when SQL language support was added, and the project has been community-driven ever since.</p>
<p>PostgreSQL is not owned by any corporation. There is no &quot;PostgreSQL Inc.&quot; that controls the project. It is developed by a global community of contributors under the PostgreSQL Global Development Group. The license is the PostgreSQL License, which is a permissive open-source license similar to BSD and MIT. You can use PostgreSQL for any purpose, including commercial, without paying anyone anything, ever. There are no &quot;community editions&quot; versus &quot;enterprise editions.&quot; There is one PostgreSQL, and it is free.</p>
<p>As of early 2026, PostgreSQL has surpassed MySQL as the most widely used database among developers, with roughly 55% usage in developer surveys. Every major cloud provider offers managed PostgreSQL services: Amazon RDS and Aurora PostgreSQL, Azure Database for PostgreSQL, Google Cloud SQL for PostgreSQL, and many others. But you do not need to use any cloud service. PostgreSQL runs perfectly well on a single Linux machine, a Raspberry Pi, or a $5/month VPS.</p>
<p>For .NET developers specifically, PostgreSQL is compelling because the .NET ecosystem has first-class support for it through Npgsql, the open-source ADO.NET data provider. Npgsql consistently ranks among the top performers on the TechEmpower Web Framework Benchmarks. Entity Framework Core has an official PostgreSQL provider maintained by the Npgsql team. Dapper works flawlessly with Npgsql. There is no technical reason to avoid PostgreSQL in a .NET application.</p>
<h3 id="postgresql-vs.sql-server-key-philosophical-differences">PostgreSQL vs. SQL Server: Key Philosophical Differences</h3>
<p>Before we dive into specifics, you need to understand a few philosophical differences between PostgreSQL and SQL Server:</p>
<p>PostgreSQL uses Multi-Version Concurrency Control (MVCC) as its fundamental concurrency mechanism. Every transaction sees a snapshot of the data as it existed at the start of the transaction. Readers never block writers, and writers never block readers. This is fundamentally different from SQL Server's default behavior, where readers acquire shared locks that can block writers. SQL Server added MVCC-like behavior later through Read Committed Snapshot Isolation (RCSI) and Snapshot Isolation, but these are opt-in features. In PostgreSQL, MVCC is the default and only model.</p>
<p>PostgreSQL does not have a concept equivalent to SQL Server's <code>NOLOCK</code> hint, and you should not miss it. The entire <code>NOLOCK</code> pattern exists in SQL Server because its default isolation level (Read Committed with locking) causes readers to block writers. Since PostgreSQL uses MVCC by default, readers never block writers, so the problem <code>NOLOCK</code> solves simply does not exist. We will discuss this in much more detail in the transactions section.</p>
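<p>You can watch MVCC do its job with two concurrent psql sessions. A minimal sketch, assuming a hypothetical <code>accounts</code> table:</p>
<pre><code class="language-sql">-- Session 1: update a row inside an open transaction (no commit yet)
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Session 2: read the same row at the same time
SELECT balance FROM accounts WHERE id = 1;
-- Returns immediately with the pre-update value: no blocking,
-- no NOLOCK hint, and no dirty read.

-- Session 1:
COMMIT;
-- Session 2 sees the new value on its next statement.
</code></pre>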
<p>PostgreSQL folds unquoted identifiers to lowercase; only double-quoted identifiers are case-sensitive. If you write <code>CREATE TABLE MyTable</code>, PostgreSQL stores it as <code>mytable</code>. If you want mixed-case identifiers, you must double-quote them: <code>CREATE TABLE &quot;MyTable&quot;</code>, and from then on every reference must be quoted the same way. The strong convention in the PostgreSQL world is to use <code>snake_case</code> for everything: table names, column names, function names. Embrace this convention.</p>
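<p>The folding rules are easiest to see in a quick example:</p>
<pre><code class="language-sql">CREATE TABLE MyTable (Id INT);     -- stored as mytable
SELECT * FROM mytable;             -- works
SELECT * FROM MYTABLE;             -- also works: folded to mytable
SELECT * FROM &quot;MyTable&quot;;           -- ERROR: relation &quot;MyTable&quot; does not exist

CREATE TABLE &quot;MyTable&quot; (&quot;Id&quot; INT); -- mixed case preserved, but now every
SELECT &quot;Id&quot; FROM &quot;MyTable&quot;;        -- reference must be quoted, forever
</code></pre>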
<p>PostgreSQL uses schemas differently than SQL Server. In SQL Server, <code>dbo</code> is the default schema and many teams barely think about schemas. In PostgreSQL, <code>public</code> is the default schema, but the schema system is powerful and you should use it to organize your database objects.</p>
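<p>As a small illustration, grouping objects into a schema and putting it on the search path looks like this (the <code>billing</code> schema is hypothetical):</p>
<pre><code class="language-sql">-- Group related objects instead of piling everything into public
CREATE SCHEMA billing;
CREATE TABLE billing.invoices (id SERIAL PRIMARY KEY, total NUMERIC(10,2));

-- Either qualify explicitly...
SELECT * FROM billing.invoices;

-- ...or put the schema on the search path for this session
SET search_path TO billing, public;
SELECT * FROM invoices;   -- now resolves to billing.invoices
</code></pre>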
<h2 id="part-2-installing-postgresql-on-linux">Part 2: Installing PostgreSQL on Linux</h2>
<h3 id="bare-metal-vps-installation">Bare Metal / VPS Installation</h3>
<p>On Fedora or RHEL-based systems:</p>
<pre><code class="language-bash"># Install PostgreSQL 18 (latest stable as of March 2026)
sudo dnf install postgresql18-server postgresql18

# Initialize the database cluster (the setup script is not on the default PATH)
sudo /usr/pgsql-18/bin/postgresql-18-setup initdb

# Start and enable the service
sudo systemctl start postgresql-18
sudo systemctl enable postgresql-18
</code></pre>
<p>On Ubuntu or Debian-based systems:</p>
<pre><code class="language-bash"># Add the official PostgreSQL APT repository
# (apt-key is deprecated; store the signing key in a keyring and reference it with signed-by)
sudo install -d /usr/share/postgresql-common/pgdg
sudo wget -qO /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc https://www.postgresql.org/media/keys/ACCC4CF8.asc
sudo sh -c 'echo &quot;deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main&quot; &gt; /etc/apt/sources.list.d/pgdg.list'
sudo apt-get update

# Install PostgreSQL 18
sudo apt-get install postgresql-18

# The service starts automatically on Debian/Ubuntu
sudo systemctl status postgresql
</code></pre>
<p>On Arch Linux:</p>
<pre><code class="language-bash">sudo pacman -S postgresql

# Initialize the data directory
sudo -u postgres initdb -D /var/lib/postgres/data

# Start and enable
sudo systemctl start postgresql
sudo systemctl enable postgresql
</code></pre>
<p>After installation, PostgreSQL creates a system user called <code>postgres</code>. This user is the default superuser. To connect for the first time:</p>
<pre><code class="language-bash"># Switch to the postgres user
sudo -u postgres psql

# You are now in the psql shell as the superuser
# Create a database and user for your application
CREATE USER myapp WITH PASSWORD 'my-secure-password';
CREATE DATABASE myappdb OWNER myapp;

# Grant connect privilege
GRANT CONNECT ON DATABASE myappdb TO myapp;

# Exit
\q
</code></pre>
<h3 id="docker-installation">Docker Installation</h3>
<p>Docker is the quickest way to get PostgreSQL running for development:</p>
<pre><code class="language-bash"># Pull the official PostgreSQL 18 image
docker pull postgres:18

# Run a container
docker run -d \
  --name pg-dev \
  -e POSTGRES_USER=myapp \
  -e POSTGRES_PASSWORD=my-secure-password \
  -e POSTGRES_DB=myappdb \
  -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  postgres:18

# Connect using psql from the host
psql -h localhost -U myapp -d myappdb

# Or connect from inside the container
docker exec -it pg-dev psql -U myapp -d myappdb
</code></pre>
<p>The <code>-v pgdata:/var/lib/postgresql/data</code> flag creates a named Docker volume so your data persists across container restarts and removals. Without it, you lose all data when the container is removed.</p>
<h3 id="podman-installation">Podman Installation</h3>
<p>Podman is a daemonless container engine that is often preferred on Fedora and RHEL systems. It is a drop-in replacement for Docker:</p>
<pre><code class="language-bash"># Pull and run (identical syntax to Docker)
podman run -d \
  --name pg-dev \
  -e POSTGRES_USER=myapp \
  -e POSTGRES_PASSWORD=my-secure-password \
  -e POSTGRES_DB=myappdb \
  -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  docker.io/library/postgres:18

# Connect
podman exec -it pg-dev psql -U myapp -d myappdb
</code></pre>
<p>If you want to run PostgreSQL as a rootless Podman container that starts on boot:</p>
<pre><code class="language-bash"># Generate a systemd user service
podman generate systemd --name pg-dev --files --new
mkdir -p ~/.config/systemd/user/
mv container-pg-dev.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable container-pg-dev.service
systemctl --user start container-pg-dev.service

# Enable lingering so it starts on boot even without login
loginctl enable-linger $USER
</code></pre>
<h3 id="docker-compose-for-development">Docker Compose for Development</h3>
<p>For a more complete development setup, use a <code>docker-compose.yml</code>:</p>
<pre><code class="language-yaml">services:
  db:
    image: postgres:18
    restart: unless-stopped
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: my-secure-password
      POSTGRES_DB: myappdb
    ports:
      - &quot;5432:5432&quot;
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: [&quot;CMD-SHELL&quot;, &quot;pg_isready -U myapp -d myappdb&quot;]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  pgdata:
</code></pre>
<p>Any <code>.sql</code> or <code>.sh</code> files placed in <code>/docker-entrypoint-initdb.d/</code> inside the container are executed when the database is initialized for the first time.</p>
<h2 id="part-3-configuring-postgresql">Part 3: Configuring PostgreSQL</h2>
<p>PostgreSQL's configuration lives in two primary files: <code>postgresql.conf</code> and <code>pg_hba.conf</code>. Understanding both is essential.</p>
<h3 id="finding-the-configuration-files">Finding the Configuration Files</h3>
<pre><code class="language-sql">-- Inside psql, find the config file locations
SHOW config_file;
-- Example: /var/lib/postgresql/data/postgresql.conf

SHOW hba_file;
-- Example: /var/lib/postgresql/data/pg_hba.conf

SHOW data_directory;
-- Example: /var/lib/postgresql/data
</code></pre>
<p>In a Docker container, these files live under <code>/var/lib/postgresql/data/</code>. On a bare-metal Fedora install, they are typically at <code>/var/lib/pgsql/18/data/</code>. On Ubuntu, they are at <code>/etc/postgresql/18/main/</code>.</p>
<h3 id="postgresql.conf-the-main-configuration-file">postgresql.conf: The Main Configuration File</h3>
<p>This file controls everything about how PostgreSQL runs. Here are the settings you need to understand:</p>
<p><strong>Connection Settings:</strong></p>
<pre><code class="language-ini"># Listen on all interfaces (default is localhost only)
listen_addresses = '*'          # For development; restrict in production

# Maximum concurrent connections
max_connections = 100           # Default is 100; tune based on workload

# Port (default 5432)
port = 5432
</code></pre>
<p><strong>Memory Settings:</strong></p>
<pre><code class="language-ini"># Shared memory for caching data pages
# Rule of thumb: 25% of total system RAM
shared_buffers = 2GB            # Default is 128MB — far too low

# Memory for sorting, hashing, and other operations per query
work_mem = 64MB                 # Default 4MB; increase for complex queries

# Memory for maintenance operations (VACUUM, CREATE INDEX)
maintenance_work_mem = 512MB    # Default 64MB

# OS page cache hint
effective_cache_size = 6GB      # 50-75% of total RAM; helps query planner
</code></pre>
<p><strong>Write-Ahead Log (WAL) Settings:</strong></p>
<pre><code class="language-ini"># WAL level (minimal, replica, or logical)
wal_level = replica             # Needed for replication and point-in-time recovery

# Checkpoint settings
checkpoint_completion_target = 0.9
max_wal_size = 2GB
min_wal_size = 80MB
</code></pre>
<p><strong>Query Planner Settings:</strong></p>
<pre><code class="language-ini"># Cost estimates for planner decisions
random_page_cost = 1.1          # Lower if using SSDs (default 4.0 assumes HDDs)
effective_io_concurrency = 200  # Higher for SSDs; default 1

# PostgreSQL 18: Asynchronous I/O method
io_method = worker              # 'worker' (all platforms), 'io_uring' (Linux), 'sync' (legacy)
</code></pre>
<p><strong>Logging:</strong></p>
<pre><code class="language-ini"># Log destination
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

# What to log
log_min_duration_statement = 500    # Log queries taking &gt; 500ms
log_statement = 'none'              # 'none', 'ddl', 'mod', or 'all'
log_line_prefix = '%t [%p] %u@%d '  # Timestamp, PID, user@database

# Log slow queries with their execution plans
auto_explain.log_min_duration = 1000  # Requires auto_explain in shared_preload_libraries
</code></pre>
<p><strong>Development vs. Production:</strong></p>
<p>For development, you might use more aggressive logging:</p>
<pre><code class="language-ini">log_statement = 'all'
log_min_duration_statement = 0
log_connections = on
log_disconnections = on
</code></pre>
<p>For production, you want to log only what matters:</p>
<pre><code class="language-ini">log_statement = 'ddl'
log_min_duration_statement = 1000
log_connections = off
log_disconnections = off
</code></pre>
<h3 id="pg_hba.conf-client-authentication-configuration">pg_hba.conf: Client Authentication Configuration</h3>
<p>This file controls who can connect to your database and how they authenticate. Each line specifies a connection type, database, user, address, and authentication method.</p>
<pre><code># TYPE  DATABASE    USER        ADDRESS         METHOD

# Local connections (Unix socket)
local   all         postgres                    peer
local   all         all                         scram-sha-256

# IPv4 local connections
host    all         all         127.0.0.1/32    scram-sha-256

# IPv4 remote connections (restrict in production)
host    all         all         0.0.0.0/0       scram-sha-256

# IPv6 local connections
host    all         all         ::1/128         scram-sha-256
</code></pre>
<p>Authentication methods you should know:</p>
<p><code>peer</code> uses the operating system username. If you are logged in as the Linux user <code>postgres</code>, you can connect as the <code>postgres</code> database role without a password. This only works for local (Unix socket) connections.</p>
<p><code>scram-sha-256</code> is the modern password authentication method. It is significantly more secure than the older <code>md5</code> method. PostgreSQL 18 has deprecated MD5 authentication, and it will be removed in a future release. Always use SCRAM.</p>
<p><code>reject</code> denies the connection. Useful for explicitly blocking certain combinations.</p>
<p><code>cert</code> requires a TLS client certificate. Used in high-security environments.</p>
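<p>If you are migrating an older cluster, it is worth checking which hash format existing roles actually use. A quick sketch (requires superuser; <code>myapp</code> is the role created earlier):</p>
<pre><code class="language-sql">-- SCRAM hashes start with 'SCRAM-SHA-256$'; legacy ones start with 'md5'
SELECT rolname, left(rolpassword, 14) AS hash_prefix
FROM pg_authid
WHERE rolname = 'myapp';

-- Re-hash a legacy password as SCRAM (password_encryption defaults to scram-sha-256)
ALTER ROLE myapp WITH PASSWORD 'my-secure-password';
</code></pre>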
<p>After editing <code>pg_hba.conf</code>, you must reload the configuration:</p>
<pre><code class="language-bash">sudo systemctl reload postgresql-18
# Or from inside psql:
SELECT pg_reload_conf();
</code></pre>
<h3 id="configuration-for-docker-containers">Configuration for Docker Containers</h3>
<p>When running PostgreSQL in Docker, you can pass configuration parameters at startup:</p>
<pre><code class="language-bash">docker run -d \
  --name pg-dev \
  -e POSTGRES_PASSWORD=secret \
  -p 5432:5432 \
  postgres:18 \
  -c shared_buffers=512MB \
  -c work_mem=32MB \
  -c max_connections=200
</code></pre>
<p>Or mount a custom configuration file:</p>
<pre><code class="language-bash">docker run -d \
  --name pg-dev \
  -e POSTGRES_PASSWORD=secret \
  -p 5432:5432 \
  -v ./my-postgresql.conf:/etc/postgresql/postgresql.conf \
  postgres:18 \
  -c config_file=/etc/postgresql/postgresql.conf
</code></pre>
<h2 id="part-4-the-terminal-psql-and-beyond">Part 4: The Terminal — psql and Beyond</h2>
<h3 id="psql-the-standard-client">psql: The Standard Client</h3>
<p><code>psql</code> is PostgreSQL's interactive terminal. It is the equivalent of <code>sqlcmd</code> for SQL Server, but far more capable. Every PostgreSQL developer should be fluent with psql.</p>
<p><strong>Connecting:</strong></p>
<pre><code class="language-bash"># Connect to a local database
psql -U myapp -d myappdb

# Connect to a remote server
psql -h 192.168.1.100 -p 5432 -U myapp -d myappdb

# Using a connection string
psql &quot;host=192.168.1.100 port=5432 dbname=myappdb user=myapp password=secret sslmode=require&quot;

# Using a URI
psql postgresql://myapp:secret@192.168.1.100:5432/myappdb?sslmode=require
</code></pre>
<p><strong>Essential Meta-Commands:</strong></p>
<pre><code>\l          List all databases
\c dbname   Connect to a different database
\dt         List tables in current schema
\dt+        List tables with sizes
\d table    Describe a table (columns, types, constraints)
\d+ table   Describe with additional detail (storage, description)
\di         List indexes
\df         List functions
\dv         List views
\dn         List schemas
\du         List roles/users
\dp         List table privileges
\x          Toggle expanded display (vertical output)
\timing     Toggle query timing display
\e          Open query in $EDITOR
\i file.sql Execute SQL from a file
\o file.txt Send output to a file
\q          Quit
</code></pre>
<p><strong>Running SQL from the Command Line:</strong></p>
<pre><code class="language-bash"># Execute a single command
psql -U myapp -d myappdb -c &quot;SELECT count(*) FROM users;&quot;

# Execute a SQL file
psql -U myapp -d myappdb -f migrations/001-create-tables.sql

# Execute and get CSV output
psql -U myapp -d myappdb -c &quot;COPY (SELECT * FROM users) TO STDOUT WITH CSV HEADER;&quot;

# Pipe SQL from stdin
echo &quot;SELECT now();&quot; | psql -U myapp -d myappdb
</code></pre>
<p><strong>Environment Variables:</strong></p>
<p>You can avoid typing credentials repeatedly by setting environment variables:</p>
<pre><code class="language-bash">export PGHOST=localhost
export PGPORT=5432
export PGUSER=myapp
export PGPASSWORD=my-secure-password
export PGDATABASE=myappdb

# Now just type:
psql
</code></pre>
<p>For a more secure approach, use a <code>.pgpass</code> file:</p>
<pre><code class="language-bash"># Create ~/.pgpass with format: hostname:port:database:username:password
echo &quot;localhost:5432:myappdb:myapp:my-secure-password&quot; &gt; ~/.pgpass
chmod 600 ~/.pgpass
</code></pre>
<h3 id="pgcli-a-better-terminal-experience">pgcli: A Better Terminal Experience</h3>
<p><code>pgcli</code> is a drop-in replacement for psql with intelligent autocompletion and syntax highlighting:</p>
<pre><code class="language-bash"># Install via pip
pip install pgcli

# Or on Fedora
sudo dnf install pgcli

# Or on Ubuntu
sudo apt install pgcli

# Use exactly like psql
pgcli -U myapp -d myappdb
</code></pre>
<p>pgcli provides real-time autocomplete for table names, column names, SQL keywords, and even suggests JOINs based on foreign key relationships. If you spend any time in the terminal, install pgcli immediately.</p>
<h2 id="part-5-postgresql-17-and-18-what-is-new">Part 5: PostgreSQL 17 and 18 — What Is New</h2>
<h3 id="postgresql-17-released-september-26-2024">PostgreSQL 17 (Released September 26, 2024)</h3>
<p>PostgreSQL 17 delivered major performance improvements. The vacuum subsystem received a complete memory management overhaul, reducing memory consumption by up to 20x. This means autovacuum runs more efficiently, keeping your tables healthy with less resource contention. Bulk loading and exporting via the <code>COPY</code> command saw up to 2x performance improvements for large rows.</p>
<p>The <code>JSON_TABLE</code> function arrived, letting you convert JSON data directly into a relational table representation within SQL:</p>
<pre><code class="language-sql">SELECT *
FROM JSON_TABLE(
    '[{&quot;name&quot;: &quot;Alice&quot;, &quot;age&quot;: 30}, {&quot;name&quot;: &quot;Bob&quot;, &quot;age&quot;: 25}]'::jsonb,
    '$[*]'
    COLUMNS (
        name TEXT PATH '$.name',
        age INT PATH '$.age'
    )
) AS jt;
</code></pre>
<p>The <code>MERGE</code> statement gained a <code>RETURNING</code> clause, and views became updatable via <code>MERGE</code>. The <code>COPY</code> command added an <code>ON_ERROR</code> option that allows imports to continue even when individual rows fail. Logical replication received failover slot synchronization, enabling high-availability setups to maintain replication through primary failovers. Incremental backups landed natively via <code>pg_basebackup --incremental</code>, with <code>pg_combinebackup</code> for restoration. Direct SSL connections became possible with the <code>sslnegotiation=direct</code> client option, saving a roundtrip during connection establishment.</p>
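<p>Two of those PostgreSQL 17 features are worth a quick sketch. The tables here (<code>products</code>, <code>incoming</code>) are hypothetical:</p>
<pre><code class="language-sql">-- MERGE with RETURNING; merge_action() reports what happened to each row
MERGE INTO products p
USING incoming i ON p.sku = i.sku
WHEN MATCHED THEN UPDATE SET price = i.price
WHEN NOT MATCHED THEN INSERT (sku, price) VALUES (i.sku, i.price)
RETURNING merge_action(), p.sku, p.price;

-- COPY that skips malformed rows instead of aborting the whole import
COPY products FROM '/tmp/products.csv'
WITH (FORMAT csv, HEADER, ON_ERROR ignore);
</code></pre>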
<h3 id="postgresql-18-released-september-25-2025">PostgreSQL 18 (Released September 25, 2025)</h3>
<p>PostgreSQL 18 is a landmark release. The headline feature is the Asynchronous I/O (AIO) subsystem, which fundamentally changes how PostgreSQL handles read operations. Instead of issuing synchronous I/O calls and waiting for each to complete, PostgreSQL 18 can issue multiple I/O requests concurrently. Benchmarks demonstrate up to 3x performance improvements for sequential scans, bitmap heap scans, and vacuum operations.</p>
<pre><code class="language-sql">-- Choose the AIO method (postmaster-level setting: requires a server restart)
ALTER SYSTEM SET io_method = 'worker';     -- Worker-based (all platforms, the default)
ALTER SYSTEM SET io_method = 'io_uring';   -- io_uring (Linux only, fastest)
ALTER SYSTEM SET io_method = 'sync';       -- Traditional synchronous I/O
</code></pre>
<p>Native UUIDv7 support arrived via the <code>uuidv7()</code> function. UUIDv7 combines global uniqueness with timestamp-based ordering, making it ideal for primary keys because the sequential nature provides excellent B-tree index performance:</p>
<pre><code class="language-sql">-- Generate a timestamp-ordered UUID
SELECT uuidv7();
-- Result: 01980de8-ad3d-715c-b739-faf2bb1a7aad

-- Extract the embedded timestamp
SELECT uuid_extract_timestamp(uuidv7());

-- Use as a primary key
CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT uuidv7(),
    customer_id INT NOT NULL,
    total DECIMAL(10,2) NOT NULL,
    created_at TIMESTAMPTZ DEFAULT now()
);
</code></pre>
<p>Virtual generated columns became the default. Unlike stored generated columns (which write computed values to disk), virtual columns compute their values on-the-fly during reads:</p>
<pre><code class="language-sql">CREATE TABLE invoices (
    id SERIAL PRIMARY KEY,
    subtotal DECIMAL(10,2),
    tax_rate DECIMAL(5,4) DEFAULT 0.0875,
    -- Virtual by default: computed at read time, no disk storage
    total DECIMAL(10,2) GENERATED ALWAYS AS (subtotal * (1 + tax_rate))
);
</code></pre>
<p>The <code>RETURNING</code> clause was enhanced with <code>OLD</code> and <code>NEW</code> references for <code>INSERT</code>, <code>UPDATE</code>, <code>DELETE</code>, and <code>MERGE</code>:</p>
<pre><code class="language-sql">-- See both old and new values in a single UPDATE
UPDATE products
SET price = price * 1.10
WHERE category = 'electronics'
RETURNING name, old.price AS was, new.price AS now;
</code></pre>
<p>Temporal constraints allow defining non-overlapping constraints on range types, ideal for scheduling and reservation systems:</p>
<pre><code class="language-sql">-- WITHOUT OVERLAPS builds a GiST index; btree_gist covers the scalar key part
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE room_bookings (
    room_id INT,
    booked_during TSTZRANGE,
    guest TEXT,
    PRIMARY KEY (room_id, booked_during WITHOUT OVERLAPS)
);
</code></pre>
<p>OAuth 2.0 authentication support was added, enabling integration with modern identity providers. MD5 password authentication was deprecated in favor of SCRAM-SHA-256. The <code>pg_upgrade</code> utility now preserves planner statistics during major version upgrades, eliminating the performance dip that previously occurred while <code>ANALYZE</code> rebuilt statistics. Skip scan lookups on multicolumn B-tree indexes allow queries that omit leading index columns to still benefit from the index.</p>
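<p>Skip scan is easiest to picture with a concrete (hypothetical) index. It helps most when the omitted leading column has few distinct values:</p>
<pre><code class="language-sql">CREATE INDEX idx_orders_region_date ON orders (region, created_at);

-- Before 18, this predicate on the second column alone could not use
-- the index efficiently; skip scan iterates the distinct region values
SELECT * FROM orders WHERE created_at &gt;= '2026-01-01';
</code></pre>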
<p><code>EXPLAIN ANALYZE</code> now automatically includes buffer usage statistics (previously required <code>BUFFERS</code> option), and verbose output includes WAL writes, CPU time, and average read times.</p>
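<p>In practice that means a plain <code>EXPLAIN ANALYZE</code> now tells you how much of a query was served from cache. The plan below is illustrative output, not a literal transcript:</p>
<pre><code class="language-sql">EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
--  Index Scan using orders_customer_id_idx on orders ...
--    Buffers: shared hit=5 read=2   &lt;-- included automatically in 18
</code></pre>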
<h2 id="part-6-npgsql-the.net-data-provider">Part 6: Npgsql — The .NET Data Provider</h2>
<p>Npgsql is the open-source ADO.NET data provider for PostgreSQL. It is licensed under the PostgreSQL License (permissive, like MIT). The latest major version is Npgsql 10.x, which targets .NET 10.</p>
<h3 id="installation">Installation</h3>
<pre><code class="language-bash">dotnet add package Npgsql
</code></pre>
<p>Or in your <code>Directory.Packages.props</code> for central package management:</p>
<pre><code class="language-xml">&lt;PackageVersion Include=&quot;Npgsql&quot; Version=&quot;10.0.2&quot; /&gt;
</code></pre>
<h3 id="basic-usage-with-npgsqldatasource">Basic Usage with NpgsqlDataSource</h3>
<p>Modern Npgsql (version 7+) uses <code>NpgsqlDataSource</code> as the preferred entry point. It manages connection pooling, configuration, and type mapping:</p>
<pre><code class="language-csharp">using Npgsql;

var connString = &quot;Host=localhost;Port=5432;Database=myappdb;Username=myapp;Password=secret&quot;;
var dataSourceBuilder = new NpgsqlDataSourceBuilder(connString);
await using var dataSource = dataSourceBuilder.Build();

// Get a connection from the pool
await using var conn = await dataSource.OpenConnectionAsync();

// Execute a query
await using var cmd = new NpgsqlCommand(&quot;SELECT id, name, email FROM users WHERE active = @active&quot;, conn);
cmd.Parameters.AddWithValue(&quot;active&quot;, true);

await using var reader = await cmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
    var id = reader.GetInt32(0);
    var name = reader.GetString(1);
    var email = reader.GetString(2);
    Console.WriteLine($&quot;{id}: {name} ({email})&quot;);
}
</code></pre>
<h3 id="connection-string-parameters-you-should-know">Connection String Parameters You Should Know</h3>
<pre><code>Host=localhost           Server hostname or IP
Port=5432                Server port
Database=myappdb         Database name
Username=myapp           Database user
Password=secret          Password
SSL Mode=Prefer          Disable, Allow, Prefer, Require, VerifyCA, VerifyFull
Pooling=true             Enable connection pooling (default: true)
Minimum Pool Size=0      Minimum idle connections
Maximum Pool Size=100    Maximum concurrent connections
Connection Idle Lifetime=300   Seconds before idle connection is closed
Timeout=15               Connection timeout in seconds
Command Timeout=30       Default command timeout in seconds
Include Error Detail=true  Include server error details (dev only)
</code></pre>
<p>For production, always use SSL:</p>
<pre><code>Host=db.example.com;Database=prod;Username=app;Password=secret;SSL Mode=VerifyFull;Trust Server Certificate=false
</code></pre>
<h3 id="npgsql-with-dependency-injection-in-asp.net">Npgsql with Dependency Injection in ASP.NET</h3>
<pre><code class="language-csharp">// In Program.cs
builder.Services.AddNpgsqlDataSource(
    builder.Configuration.GetConnectionString(&quot;DefaultConnection&quot;)!,
    dataSourceBuilder =&gt;
    {
        dataSourceBuilder.UseNodaTime();       // Optional: NodaTime date/time types
        dataSourceBuilder.MapEnum&lt;OrderStatus&gt;(&quot;order_status&quot;); // Map PostgreSQL enums
    }
);
</code></pre>
<p>This registers <code>NpgsqlDataSource</code> as a singleton in the DI container. Inject it anywhere:</p>
<pre><code class="language-csharp">public class UserRepository(NpgsqlDataSource dataSource)
{
    public async Task&lt;User?&gt; GetByIdAsync(int id)
    {
        await using var conn = await dataSource.OpenConnectionAsync();
        await using var cmd = new NpgsqlCommand(&quot;SELECT id, name, email FROM users WHERE id = @id&quot;, conn);
        cmd.Parameters.AddWithValue(&quot;id&quot;, id);

        await using var reader = await cmd.ExecuteReaderAsync();
        if (await reader.ReadAsync())
        {
            return new User(reader.GetInt32(0), reader.GetString(1), reader.GetString(2));
        }
        return null;
    }
}
</code></pre>
<h3 id="key-npgsql-9.0-and-10.0-features">Key Npgsql 9.0 and 10.0 Features</h3>
<p>Npgsql 9.0 dropped .NET Standard 2.0 support (and thus .NET Framework). If you need .NET Framework, stay on Npgsql 8.x.</p>
<p>Npgsql 9.0 introduced UUIDv7 generation for EF Core key values by default. When EF Core generates <code>Guid</code> keys client-side, Npgsql 9.0+ uses sequential UUIDv7 instead of random UUIDv4, improving index performance significantly.</p>
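<p>Outside of EF Core, .NET 9 and later expose the same capability directly through <code>Guid.CreateVersion7()</code>, so you can generate time-ordered keys yourself. A small illustrative sketch (not Npgsql-specific):</p>
<pre><code class="language-csharp">// UUIDv7 embeds a millisecond timestamp in its high-order bits,
// so values generated later sort after values generated earlier.
var first = Guid.CreateVersion7();
var second = Guid.CreateVersion7();

// Time-ordered keys keep B-tree index inserts append-mostly,
// avoiding the random page splits caused by purely random UUIDv4 keys.
Console.WriteLine(first);
Console.WriteLine(second);
</code></pre>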
<p>Direct SSL support was added for PostgreSQL 17+, saving a roundtrip when establishing secure connections. Enable it with <code>SslNegotiation=direct</code> in your connection string.</p>
<p>OpenTelemetry tracing was improved with a <code>ConfigureTracing</code> API that lets you filter which commands are traced, add custom tags to spans, and control span naming.</p>
<p>Npgsql 10.0 (latest as of March 2026) targets .NET 10 and is considering deprecating synchronous APIs (<code>NpgsqlConnection.Open</code>, <code>NpgsqlCommand.ExecuteNonQuery</code>, etc.) in a future release. The recommendation is to use async APIs everywhere: <code>OpenAsync</code>, <code>ExecuteNonQueryAsync</code>, <code>ExecuteReaderAsync</code>.</p>
<h2 id="part-7-npgsql-with-dapper">Part 7: Npgsql with Dapper</h2>
<p>Dapper is a lightweight micro-ORM that extends <code>IDbConnection</code> with extension methods for mapping query results to objects. It works beautifully with Npgsql.</p>
<h3 id="installation-1">Installation</h3>
<pre><code class="language-bash">dotnet add package Dapper
</code></pre>
<h3 id="basic-queries">Basic Queries</h3>
<pre><code class="language-csharp">using Dapper;
using Npgsql;

public class ProductRepository(NpgsqlDataSource dataSource)
{
    public async Task&lt;IEnumerable&lt;Product&gt;&gt; GetAllAsync()
    {
        await using var conn = await dataSource.OpenConnectionAsync();
        return await conn.QueryAsync&lt;Product&gt;(&quot;SELECT id, name, price, stock FROM products ORDER BY name&quot;);
    }

    public async Task&lt;Product?&gt; GetByIdAsync(int id)
    {
        await using var conn = await dataSource.OpenConnectionAsync();
        return await conn.QuerySingleOrDefaultAsync&lt;Product&gt;(
            &quot;SELECT id, name, price, stock FROM products WHERE id = @Id&quot;,
            new { Id = id }
        );
    }

    public async Task&lt;int&gt; CreateAsync(Product product)
    {
        await using var conn = await dataSource.OpenConnectionAsync();
        return await conn.ExecuteScalarAsync&lt;int&gt;(
            &quot;&quot;&quot;
            INSERT INTO products (name, price, stock)
            VALUES (@Name, @Price, @Stock)
            RETURNING id
            &quot;&quot;&quot;,
            product
        );
    }

    public async Task&lt;bool&gt; UpdateAsync(Product product)
    {
        await using var conn = await dataSource.OpenConnectionAsync();
        var affected = await conn.ExecuteAsync(
            &quot;&quot;&quot;
            UPDATE products
            SET name = @Name, price = @Price, stock = @Stock
            WHERE id = @Id
            &quot;&quot;&quot;,
            product
        );
        return affected &gt; 0;
    }

    public async Task&lt;bool&gt; DeleteAsync(int id)
    {
        await using var conn = await dataSource.OpenConnectionAsync();
        var affected = await conn.ExecuteAsync(&quot;DELETE FROM products WHERE id = @Id&quot;, new { Id = id });
        return affected &gt; 0;
    }
}
</code></pre>
<h3 id="multi-mapping-joins">Multi-Mapping (Joins)</h3>
<pre><code class="language-csharp">public async Task&lt;IEnumerable&lt;Order&gt;&gt; GetOrdersWithCustomerAsync()
{
    await using var conn = await dataSource.OpenConnectionAsync();
    var sql = &quot;&quot;&quot;
        SELECT o.id, o.order_date, o.total,
               c.id, c.name, c.email
        FROM orders o
        INNER JOIN customers c ON o.customer_id = c.id
        ORDER BY o.order_date DESC
        &quot;&quot;&quot;;

    return await conn.QueryAsync&lt;Order, Customer, Order&gt;(
        sql,
        (order, customer) =&gt;
        {
            order.Customer = customer;
            return order;
        },
        splitOn: &quot;id&quot;  // Column where the second object starts
    );
}
</code></pre>
<h3 id="transactions-with-dapper">Transactions with Dapper</h3>
<pre><code class="language-csharp">public async Task TransferFundsAsync(int fromId, int toId, decimal amount)
{
    await using var conn = await dataSource.OpenConnectionAsync();
    await using var tx = await conn.BeginTransactionAsync();

    try
    {
        await conn.ExecuteAsync(
            &quot;UPDATE accounts SET balance = balance - @Amount WHERE id = @Id&quot;,
            new { Amount = amount, Id = fromId },
            transaction: tx
        );

        await conn.ExecuteAsync(
            &quot;UPDATE accounts SET balance = balance + @Amount WHERE id = @Id&quot;,
            new { Amount = amount, Id = toId },
            transaction: tx
        );

        await tx.CommitAsync();
    }
    catch
    {
        await tx.RollbackAsync();
        throw;
    }
}
</code></pre>
<h3 id="dapper-tips-for-postgresql">Dapper Tips for PostgreSQL</h3>
<p>PostgreSQL uses <code>snake_case</code> column names, but C# uses <code>PascalCase</code> properties. Configure Dapper to handle this automatically:</p>
<pre><code class="language-csharp">// In Program.cs or startup
Dapper.DefaultTypeMap.MatchNamesWithUnderscores = true;
</code></pre>
<p>Now <code>order_date</code> in PostgreSQL maps to <code>OrderDate</code> in C#.</p>
<p>For PostgreSQL arrays, Npgsql handles them natively:</p>
<pre><code class="language-csharp">var tags = new[] { &quot;electronics&quot;, &quot;sale&quot; };
var products = await conn.QueryAsync&lt;Product&gt;(
    &quot;SELECT * FROM products WHERE tags &amp;&amp; @Tags&quot;,
    new { Tags = tags }
);
</code></pre>
<p>For JSONB columns:</p>
<pre><code class="language-csharp">var metadata = JsonSerializer.Serialize(new { source = &quot;web&quot;, campaign = &quot;spring&quot; });
await conn.ExecuteAsync(
    &quot;INSERT INTO events (type, metadata) VALUES (@Type, @Metadata::jsonb)&quot;,
    new { Type = &quot;page_view&quot;, Metadata = metadata }
);
</code></pre>
<h2 id="part-8-npgsql-with-entity-framework-core">Part 8: Npgsql with Entity Framework Core</h2>
<h3 id="installation-2">Installation</h3>
<pre><code class="language-bash">dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
</code></pre>
<h3 id="dbcontext-configuration">DbContext Configuration</h3>
<pre><code class="language-csharp">public class AppDbContext : DbContext
{
    public DbSet&lt;Product&gt; Products =&gt; Set&lt;Product&gt;();
    public DbSet&lt;Order&gt; Orders =&gt; Set&lt;Order&gt;();
    public DbSet&lt;Customer&gt; Customers =&gt; Set&lt;Customer&gt;();

    public AppDbContext(DbContextOptions&lt;AppDbContext&gt; options) : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Default schema; tables and columns are mapped to snake_case explicitly below
        modelBuilder.HasDefaultSchema(&quot;public&quot;);

        modelBuilder.Entity&lt;Product&gt;(entity =&gt;
        {
            entity.ToTable(&quot;products&quot;);
            entity.HasKey(e =&gt; e.Id);
            entity.Property(e =&gt; e.Id).HasColumnName(&quot;id&quot;);
            entity.Property(e =&gt; e.Name).HasColumnName(&quot;name&quot;).HasMaxLength(200);
            entity.Property(e =&gt; e.Price).HasColumnName(&quot;price&quot;).HasColumnType(&quot;decimal(10,2)&quot;);
            entity.Property(e =&gt; e.Stock).HasColumnName(&quot;stock&quot;);
            entity.Property(e =&gt; e.Tags).HasColumnName(&quot;tags&quot;).HasColumnType(&quot;text[]&quot;);
            entity.Property(e =&gt; e.Metadata).HasColumnName(&quot;metadata&quot;).HasColumnType(&quot;jsonb&quot;);
            entity.HasIndex(e =&gt; e.Name);
        });
    }
}
</code></pre>
<h3 id="registration-in-asp.net">Registration in ASP.NET</h3>
<pre><code class="language-csharp">// Program.cs
builder.Services.AddDbContext&lt;AppDbContext&gt;(options =&gt;
    options.UseNpgsql(
        builder.Configuration.GetConnectionString(&quot;DefaultConnection&quot;),
        npgsqlOptions =&gt;
        {
            npgsqlOptions.UseNodaTime();
            npgsqlOptions.MapEnum&lt;OrderStatus&gt;(&quot;order_status&quot;);
            npgsqlOptions.SetPostgresVersion(18, 0);  // Enable PG18-specific SQL generation
            npgsqlOptions.EnableRetryOnFailure(
                maxRetryCount: 3,
                maxRetryDelay: TimeSpan.FromSeconds(5),
                errorCodesToAdd: null
            );
        }
    )
);
</code></pre>
<h3 id="migrations">Migrations</h3>
<pre><code class="language-bash"># Add a migration
dotnet ef migrations add InitialCreate

# Apply migrations
dotnet ef database update

# Generate a SQL script (for production deployments)
dotnet ef migrations script -o migrations.sql
</code></pre>
<h3 id="postgresql-specific-ef-core-features">PostgreSQL-Specific EF Core Features</h3>
<p><strong>JSONB Columns:</strong></p>
<pre><code class="language-csharp">public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = &quot;&quot;;
    public Dictionary&lt;string, string&gt; Metadata { get; set; } = new();
}

// In OnModelCreating
entity.Property(e =&gt; e.Metadata).HasColumnType(&quot;jsonb&quot;);

// Query JSONB
var products = await context.Products
    .Where(p =&gt; EF.Functions.JsonContains(p.Metadata, &quot;&quot;&quot;{&quot;color&quot;: &quot;red&quot;}&quot;&quot;&quot;))
    .ToListAsync();
</code></pre>
<p><strong>Array Columns:</strong></p>
<pre><code class="language-csharp">public class Product
{
    public int Id { get; set; }
    public string[] Tags { get; set; } = [];
}

// Query arrays
var electronics = await context.Products
    .Where(p =&gt; p.Tags.Contains(&quot;electronics&quot;))
    .ToListAsync();
</code></pre>
<p><strong>Full-Text Search:</strong></p>
<pre><code class="language-csharp">var results = await context.Products
    .Where(p =&gt; EF.Functions.ToTsVector(&quot;english&quot;, p.Name + &quot; &quot; + p.Description)
        .Matches(EF.Functions.ToTsQuery(&quot;english&quot;, &quot;wireless &amp; keyboard&quot;)))
    .ToListAsync();
</code></pre>
<p><strong>PostgreSQL Enums:</strong></p>
<pre><code class="language-csharp">public enum OrderStatus { Pending, Processing, Shipped, Delivered, Cancelled }

// In OnModelCreating
modelBuilder.HasPostgresEnum&lt;OrderStatus&gt;();
modelBuilder.Entity&lt;Order&gt;().Property(e =&gt; e.Status).HasColumnType(&quot;order_status&quot;);

// In UseNpgsql configuration
npgsqlOptions.MapEnum&lt;OrderStatus&gt;(&quot;order_status&quot;);
</code></pre>
<h3 id="ef-core-performance-tips-for-postgresql">EF Core Performance Tips for PostgreSQL</h3>
<p>Use compiled queries for hot paths:</p>
<pre><code class="language-csharp">private static readonly Func&lt;AppDbContext, int, Task&lt;Product?&gt;&gt; GetProductById =
    EF.CompileAsyncQuery((AppDbContext ctx, int id) =&gt;
        ctx.Products.FirstOrDefault(p =&gt; p.Id == id));
</code></pre>
<p>Use <code>AsNoTracking()</code> for read-only queries:</p>
<pre><code class="language-csharp">var products = await context.Products.AsNoTracking().ToListAsync();
</code></pre>
<p>Use <code>ExecuteUpdateAsync</code> and <code>ExecuteDeleteAsync</code> for bulk operations (avoids loading entities):</p>
<pre><code class="language-csharp">await context.Products
    .Where(p =&gt; p.Stock == 0)
    .ExecuteUpdateAsync(s =&gt; s.SetProperty(p =&gt; p.Status, &quot;Discontinued&quot;));

await context.Products
    .Where(p =&gt; p.DeletedAt &lt; DateTime.UtcNow.AddYears(-1))
    .ExecuteDeleteAsync();
</code></pre>
<h2 id="part-9-transactions-and-isolation-levels">Part 9: Transactions and Isolation Levels</h2>
<h3 id="transaction-basics">Transaction Basics</h3>
<p>PostgreSQL supports full ACID transactions. Every statement in PostgreSQL runs inside a transaction. If you do not explicitly begin one, each statement is wrapped in an implicit transaction.</p>
<pre><code class="language-sql">-- Explicit transaction
BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

-- Rollback on error
BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    -- Oops, something went wrong
ROLLBACK;
</code></pre>
<h3 id="savepoints">Savepoints</h3>
<p>Savepoints allow partial rollback within a transaction:</p>
<pre><code class="language-sql">BEGIN;
    INSERT INTO orders (customer_id, total) VALUES (1, 99.99);
    SAVEPOINT before_items;

    INSERT INTO order_items (order_id, product_id, qty) VALUES (1, 100, 1);
    -- This fails due to a constraint violation
    ROLLBACK TO SAVEPOINT before_items;

    -- Try a different product
    INSERT INTO order_items (order_id, product_id, qty) VALUES (1, 200, 1);
COMMIT;
</code></pre>
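<p>The same pattern is available from C#: <code>NpgsqlTransaction</code> implements the named-savepoint methods (<code>SaveAsync</code>, <code>RollbackAsync(name)</code>) that <code>DbTransaction</code> gained in .NET 5. A minimal sketch, assuming the same <code>orders</code>/<code>order_items</code> tables as above:</p>
<pre><code class="language-csharp">await using var conn = await dataSource.OpenConnectionAsync();
await using var tx = await conn.BeginTransactionAsync();

await new NpgsqlCommand(&quot;INSERT INTO orders (customer_id, total) VALUES (1, 99.99)&quot;, conn, tx)
    .ExecuteNonQueryAsync();

await tx.SaveAsync(&quot;before_items&quot;);
try
{
    await new NpgsqlCommand(&quot;INSERT INTO order_items (order_id, product_id, qty) VALUES (1, 100, 1)&quot;, conn, tx)
        .ExecuteNonQueryAsync();
}
catch (PostgresException)
{
    // Constraint violation: roll back to the savepoint, not the whole transaction.
    // This also clears the aborted state so the transaction can continue.
    await tx.RollbackAsync(&quot;before_items&quot;);
}

await tx.CommitAsync();
</code></pre>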
<h3 id="isolation-levels">Isolation Levels</h3>
<p>PostgreSQL supports four isolation levels. Here is what each one actually does:</p>
<p><strong>Read Committed (Default):</strong> Each statement within a transaction sees a snapshot of the database as of the moment that statement began execution. If another transaction commits between two statements in your transaction, the second statement sees the committed changes. This is the default and is appropriate for most workloads.</p>
<p><strong>Repeatable Read:</strong> The transaction sees a snapshot of the database as of the moment the transaction began (not each statement). If another transaction commits changes to rows your transaction has read, and you try to update those same rows, PostgreSQL raises a serialization error and you must retry the transaction. This prevents non-repeatable reads and phantom reads.</p>
<p><strong>Serializable:</strong> The strictest level. PostgreSQL guarantees that the result of concurrent serializable transactions is equivalent to some serial (one-at-a-time) ordering. If PostgreSQL detects that no such ordering is possible, it raises a serialization error. This is the safest but most restrictive level.</p>
<p><strong>Read Uncommitted:</strong> In PostgreSQL, this is identical to Read Committed. PostgreSQL does not support dirty reads, ever. Setting <code>READ UNCOMMITTED</code> is accepted for SQL standard compliance but behaves as Read Committed.</p>
<pre><code class="language-sql">-- Set isolation level for a transaction
BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT * FROM accounts WHERE id = 1;
    -- ... more operations ...
COMMIT;

-- Set default isolation level for a session
SET default_transaction_isolation = 'repeatable read';
</code></pre>
<p>In C# with Npgsql:</p>
<pre><code class="language-csharp">await using var conn = await dataSource.OpenConnectionAsync();
await using var tx = await conn.BeginTransactionAsync(IsolationLevel.RepeatableRead);

try
{
    // ... operations ...
    await tx.CommitAsync();
}
catch (PostgresException ex) when (ex.SqlState == &quot;40001&quot;) // serialization_failure
{
    await tx.RollbackAsync();
    // Retry the entire transaction
}
</code></pre>
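<p>Since any <code>REPEATABLE READ</code> or <code>SERIALIZABLE</code> transaction can fail with SQLSTATE <code>40001</code> under concurrent load, it pays to wrap the whole transaction in a retry helper rather than handling each failure inline. A minimal sketch (the helper name and backoff policy are illustrative):</p>
<pre><code class="language-csharp">public static async Task&lt;T&gt; ExecuteWithRetryAsync&lt;T&gt;(
    NpgsqlDataSource dataSource,
    Func&lt;NpgsqlConnection, NpgsqlTransaction, Task&lt;T&gt;&gt; work,
    int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        await using var conn = await dataSource.OpenConnectionAsync();
        await using var tx = await conn.BeginTransactionAsync(IsolationLevel.Serializable);
        try
        {
            var result = await work(conn, tx);
            await tx.CommitAsync();
            return result;
        }
        catch (PostgresException ex) when (ex.SqlState == &quot;40001&quot; &amp;&amp; attempt &lt; maxAttempts)
        {
            // Serialization failure: roll back and rerun the entire transaction
            await tx.RollbackAsync();
            await Task.Delay(TimeSpan.FromMilliseconds(50 * attempt)); // simple linear backoff
        }
    }
}
</code></pre>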
<h3 id="the-nolock-question">The NOLOCK Question</h3>
<p>This deserves its own section because it is the single most common question from SQL Server developers.</p>
<p>In SQL Server, <code>NOLOCK</code> (or <code>READ UNCOMMITTED</code> isolation level) tells the engine to read data without acquiring shared locks. This prevents readers from blocking writers and vice versa. It is commonly used in SQL Server because the default Read Committed isolation level uses locking, which can cause severe blocking under concurrent load.</p>
<p><strong>You do not need NOLOCK in PostgreSQL. It does not exist, and you should not miss it.</strong></p>
<p>PostgreSQL uses MVCC for all isolation levels. Readers never block writers. Writers never block readers. The problem that <code>NOLOCK</code> solves in SQL Server simply does not exist in PostgreSQL. When you execute a <code>SELECT</code> in PostgreSQL, you read from a consistent snapshot without acquiring any locks that would block concurrent <code>INSERT</code>, <code>UPDATE</code>, or <code>DELETE</code> operations.</p>
<p>The only time you can experience blocking in PostgreSQL is when two transactions try to modify the same row concurrently. In that case, the second transaction waits for the first to commit or roll back. This is correct behavior: you would not want two concurrent updates to silently overwrite each other.</p>
<p><strong>Should you use <code>READ UNCOMMITTED</code> in development?</strong> It makes no difference in PostgreSQL. It behaves identically to <code>READ COMMITTED</code>.</p>
<p><strong>Should you use <code>READ UNCOMMITTED</code> in production?</strong> It makes no difference in PostgreSQL. But do not bother setting it. Just use the default <code>READ COMMITTED</code>.</p>
<p><strong>Bottom line: forget about <code>NOLOCK</code>. PostgreSQL solved this problem at the architecture level.</strong></p>
<h3 id="advisory-locks">Advisory Locks</h3>
<p>PostgreSQL provides advisory locks for application-level locking that does not correspond to any particular table or row:</p>
<pre><code class="language-sql">-- Session-level advisory lock (held until session ends or explicitly released)
SELECT pg_advisory_lock(12345);
-- ... do exclusive work ...
SELECT pg_advisory_unlock(12345);

-- Transaction-level advisory lock (released at end of transaction)
BEGIN;
SELECT pg_advisory_xact_lock(12345);
-- ... do exclusive work ...
COMMIT;  -- Lock is automatically released

-- Try to acquire without blocking
SELECT pg_try_advisory_lock(12345);  -- Returns true/false
</code></pre>
<p>In C# with Npgsql:</p>
<pre><code class="language-csharp">await using var conn = await dataSource.OpenConnectionAsync();
await using var tx = await conn.BeginTransactionAsync();

await using (var cmd = new NpgsqlCommand(&quot;SELECT pg_advisory_xact_lock(@key)&quot;, conn))
{
    cmd.Parameters.AddWithValue(&quot;key&quot;, 12345L);
    cmd.Transaction = tx;
    await cmd.ExecuteNonQueryAsync();
}

// ... perform exclusive work ...

await tx.CommitAsync(); // Advisory lock released
</code></pre>
<h2 id="part-10-networking-sessions-and-connection-pooling">Part 10: Networking, Sessions, and Connection Pooling</h2>
<h3 id="ssltls-configuration">SSL/TLS Configuration</h3>
<p>For production, always encrypt connections. In <code>postgresql.conf</code>:</p>
<pre><code class="language-ini">ssl = on
ssl_cert_file = '/path/to/server.crt'
ssl_key_file = '/path/to/server.key'
ssl_ca_file = '/path/to/ca.crt'

# PostgreSQL 18: Control TLS 1.3 cipher suites
ssl_tls13_ciphers = 'TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256'
</code></pre>
<p>In your .NET connection string:</p>
<pre><code>Host=db.example.com;Database=prod;Username=app;Password=secret;SSL Mode=VerifyFull;Root Certificate=/path/to/ca.crt
</code></pre>
<h3 id="connection-pooling">Connection Pooling</h3>
<p>Npgsql has built-in connection pooling enabled by default. Each unique connection string gets its own pool. Key parameters:</p>
<pre><code>Minimum Pool Size=0       # Pre-create this many connections
Maximum Pool Size=100     # Hard limit on concurrent connections
Connection Idle Lifetime=300  # Close idle connections after 5 minutes
Connection Pruning Interval=10  # Check for idle connections every 10 seconds
</code></pre>
<p>For high-concurrency applications, consider PgBouncer as an external connection pooler:</p>
<pre><code class="language-ini"># pgbouncer.ini
[databases]
myappdb = host=127.0.0.1 port=5432 dbname=myappdb

[pgbouncer]
listen_port = 6432
listen_addr = 0.0.0.0
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction    # transaction pooling is best for web apps
default_pool_size = 20
max_client_conn = 1000
</code></pre>
<p>With transaction-mode pooling, PgBouncer assigns a server connection to a client for the duration of a transaction, then returns it to the pool. This allows hundreds of application connections to share a much smaller number of PostgreSQL connections.</p>
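<p>One caveat when pointing Npgsql at PgBouncer in transaction mode: server-side state such as prepared statements does not survive across transactions, so disable automatic statement preparation and connection resets in the Npgsql connection string. A sketch of the relevant parameters (verify against your Npgsql and PgBouncer versions, since recent PgBouncer releases add protocol-level prepared statement support):</p>
<pre><code>Host=pgbouncer-host;Port=6432;Database=myappdb;Username=app;Password=secret;Max Auto Prepare=0;No Reset On Close=true
</code></pre>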
<h3 id="monitoring-active-sessions">Monitoring Active Sessions</h3>
<pre><code class="language-sql">-- View all active connections
SELECT pid, usename, datname, client_addr, state, query, query_start
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start;

-- Kill a specific session
SELECT pg_terminate_backend(12345);

-- Cancel the current query in a session (gentler than terminate)
SELECT pg_cancel_backend(12345);

-- View connection counts by state
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;
</code></pre>
<h3 id="lock-monitoring">Lock Monitoring</h3>
<pre><code class="language-sql">-- View lock requests that are waiting (not yet granted)
SELECT l.pid, l.locktype, l.mode, l.granted,
       a.usename, a.query, a.state
FROM pg_locks l
JOIN pg_stat_activity a ON l.pid = a.pid
WHERE NOT l.granted
ORDER BY l.pid;

-- Find blocking queries
SELECT blocked.pid AS blocked_pid,
       blocked.query AS blocked_query,
       blocking.pid AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_locks bl ON blocked.pid = bl.pid AND NOT bl.granted
JOIN pg_locks gl ON bl.locktype = gl.locktype
    AND bl.relation = gl.relation
    AND bl.page = gl.page
    AND bl.tuple = gl.tuple
    AND gl.granted
JOIN pg_stat_activity blocking ON gl.pid = blocking.pid
WHERE blocked.pid != blocking.pid;
</code></pre>
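<p>On PostgreSQL 9.6 and later, <code>pg_blocking_pids()</code> gives a much simpler view of the same information:</p>
<pre><code class="language-sql">-- For each waiting session, list the PIDs that are blocking it
SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) &gt; 0;
</code></pre>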
<h2 id="part-11-debugging-and-performance-tuning">Part 11: Debugging and Performance Tuning</h2>
<h3 id="explain-and-explain-analyze">EXPLAIN and EXPLAIN ANALYZE</h3>
<p>This is the single most important debugging tool in PostgreSQL. <code>EXPLAIN</code> shows the query plan. <code>EXPLAIN ANALYZE</code> actually executes the query and shows real timing.</p>
<pre><code class="language-sql">-- Show the query plan (does not execute)
EXPLAIN SELECT * FROM products WHERE price &gt; 100;

-- Execute and show actual timing
EXPLAIN ANALYZE SELECT * FROM products WHERE price &gt; 100;

-- PostgreSQL 18: BUFFERS is included automatically in EXPLAIN ANALYZE
-- In older versions, add it explicitly:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM products WHERE price &gt; 100;

-- Format as JSON (useful for visualization tools)
EXPLAIN (ANALYZE, FORMAT JSON) SELECT * FROM products WHERE price &gt; 100;
</code></pre>
<p>Key things to look for in query plans:</p>
<p><strong>Seq Scan:</strong> A full table scan. Fine for small tables, concerning for large ones. If you see a Seq Scan on a large table with a <code>WHERE</code> clause, you probably need an index.</p>
<p><strong>Index Scan:</strong> Uses a B-tree (or other) index. This is what you want for selective queries.</p>
<p><strong>Index Only Scan:</strong> Even better — the query is answered entirely from the index without accessing the table heap.</p>
<p><strong>Bitmap Index Scan + Bitmap Heap Scan:</strong> Used when the query matches many rows. The bitmap index scan builds a bitmap of matching pages, then the bitmap heap scan fetches those pages. Efficient for medium-selectivity queries.</p>
<p><strong>Nested Loop / Hash Join / Merge Join:</strong> Join strategies. Nested Loop is best for small result sets, Hash Join for larger ones, Merge Join when both inputs are sorted.</p>
<p><strong>Rows:</strong> Compare &quot;estimated&quot; vs &quot;actual&quot; rows. Large discrepancies mean your statistics are stale (run <code>ANALYZE</code>).</p>
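<p>For orientation, here is roughly what a healthy plan for the price query above might look like (the index name, costs, and timings are invented for illustration, not real output):</p>
<pre><code>Index Scan using idx_products_price on products  (cost=0.42..8.44 rows=12 width=64) (actual time=0.031..0.058 rows=11 loops=1)
  Index Cond: (price &gt; '100'::numeric)
Planning Time: 0.110 ms
Execution Time: 0.085 ms
</code></pre>
<p>Here the estimated (<code>rows=12</code>) and actual (<code>rows=11</code>) row counts agree closely, which indicates the planner's statistics are healthy.</p>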
<h3 id="statistics-and-analyze">Statistics and ANALYZE</h3>
<p>PostgreSQL's query planner relies on statistics about your data to choose efficient plans. These statistics are updated by the autovacuum daemon, but you can trigger an update manually:</p>
<pre><code class="language-sql">-- Update statistics for a specific table
ANALYZE products;

-- Update statistics for the entire database
ANALYZE;

-- Check when statistics were last updated
SELECT schemaname, relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables;
</code></pre>
<h3 id="pg_stat_statements">pg_stat_statements</h3>
<p>This extension tracks execution statistics for all SQL statements:</p>
<pre><code class="language-sql">-- Enable the extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- View top queries by total time
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

-- Reset statistics
SELECT pg_stat_statements_reset();
</code></pre>
<p>Add to <code>postgresql.conf</code>:</p>
<pre><code class="language-ini">shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
</code></pre>
<h3 id="auto_explain">auto_explain</h3>
<p>Automatically logs execution plans for slow queries:</p>
<pre><code class="language-ini"># postgresql.conf
shared_preload_libraries = 'pg_stat_statements,auto_explain'
auto_explain.log_min_duration = 1000    # Log plans for queries &gt; 1 second
auto_explain.log_analyze = on           # Include actual timing
auto_explain.log_buffers = on           # Include buffer usage
auto_explain.log_format = json          # JSON format for tooling
</code></pre>
<h3 id="indexing-best-practices">Indexing Best Practices</h3>
<pre><code class="language-sql">-- Standard B-tree index (most common)
CREATE INDEX idx_products_name ON products (name);

-- Partial index (only indexes rows matching a condition)
CREATE INDEX idx_active_products ON products (name) WHERE active = true;

-- Multi-column index (order matters for leftmost prefix matching)
CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date DESC);

-- GIN index for full-text search
CREATE INDEX idx_products_fts ON products
    USING GIN (to_tsvector('english', name || ' ' || description));

-- GIN index for JSONB containment queries
CREATE INDEX idx_products_metadata ON products USING GIN (metadata);

-- GIN index for array containment
CREATE INDEX idx_products_tags ON products USING GIN (tags);

-- BRIN index for naturally ordered data (timestamps, sequences)
-- Much smaller than B-tree, good for append-only tables
CREATE INDEX idx_events_created ON events USING BRIN (created_at);

-- Covering index (includes extra columns to enable index-only scans)
CREATE INDEX idx_products_name_covering ON products (name) INCLUDE (price, stock);

-- Concurrent index creation (does not lock the table)
CREATE INDEX CONCURRENTLY idx_products_sku ON products (sku);
</code></pre>
<h2 id="part-12-free-and-open-source-ides-and-gui-tools-on-linux">Part 12: Free and Open-Source IDEs and GUI Tools on Linux</h2>
<h3 id="pgadmin-4">pgAdmin 4</h3>
<p>pgAdmin is the de facto official PostgreSQL administration tool, developed as a free and open-source project by the pgAdmin team. It is the closest equivalent to SQL Server Management Studio, though it operates as a web application.</p>
<p><strong>Installation on Fedora:</strong></p>
<pre><code class="language-bash">sudo rpm -i https://ftp.postgresql.org/pub/pgadmin/pgadmin4/yum/pgadmin4-fedora-repo-2-1.noarch.rpm
sudo dnf install pgadmin4-desktop  # Desktop mode
# Or
sudo dnf install pgadmin4-web      # Web server mode
</code></pre>
<p><strong>Installation on Ubuntu:</strong></p>
<pre><code class="language-bash">curl -fsS https://www.pgadmin.org/static/packages_pgadmin_org.pub | sudo gpg --dearmor -o /usr/share/keyrings/packages-pgadmin-org.gpg
echo &quot;deb [signed-by=/usr/share/keyrings/packages-pgadmin-org.gpg] https://ftp.postgresql.org/pub/pgadmin/pgadmin4/apt/$(lsb_release -cs) pgadmin4 main&quot; | sudo tee /etc/apt/sources.list.d/pgadmin4.list
sudo apt update &amp;&amp; sudo apt install pgadmin4-desktop
</code></pre>
<p><strong>Strengths:</strong> Comprehensive server administration, backup/restore wizards, role management, server monitoring dashboard, visual explain plan viewer, query history. It is free, official, and supports every PostgreSQL feature.</p>
<p><strong>Weaknesses:</strong> The interface is web-based (runs a local web server), which makes it noticeably slower than native applications. The UI is dense and complex. Query autocompletion is basic compared to other tools. Startup time is slow. It only supports PostgreSQL.</p>
<h3 id="dbeaver-community-edition">DBeaver Community Edition</h3>
<p>DBeaver is the most popular general-purpose open-source database GUI. The Community Edition is free and open-source under the Apache License 2.0. It supports over 100 database types through JDBC drivers.</p>
<p><strong>Installation:</strong></p>
<pre><code class="language-bash"># Flatpak (universal)
flatpak install flathub io.dbeaver.DBeaverCommunity

# Snap
sudo snap install dbeaver-ce

# Or download the .deb/.rpm from https://dbeaver.io/download/
</code></pre>
<p><strong>Strengths:</strong> Supports virtually every database you will ever encounter. SQL editor with intelligent autocompletion. ER diagram generation. Data export to CSV, JSON, XML, SQL, Excel, HTML. Visual query builder. Active community with frequent releases. It works with PostgreSQL, SQL Server, MySQL, SQLite, Oracle, MongoDB, and dozens more from a single application.</p>
<p><strong>Weaknesses:</strong> Java-based, so it can feel sluggish compared to native applications. The interface is feature-rich but busy. Initial schema loading can be slow on very large databases.</p>
<h3 id="beekeeper-studio">Beekeeper Studio</h3>
<p>Beekeeper Studio is a modern, cross-platform SQL editor focused on usability. The Community Edition is free and open-source under GPL v3.</p>
<p><strong>Installation:</strong></p>
<pre><code class="language-bash"># Flatpak
flatpak install flathub io.beekeeperstudio.Studio

# Snap
sudo snap install beekeeper-studio

# Or download from https://www.beekeeperstudio.io/
</code></pre>
<p><strong>Strengths:</strong> Clean, fast, modern interface. Excellent autocomplete. Tabbed query results. Native-feeling performance. Supports PostgreSQL, MySQL, SQLite, SQL Server, CockroachDB, and more. The simplest tool to pick up and use immediately.</p>
<p><strong>Weaknesses:</strong> Fewer advanced administration features compared to pgAdmin or DBeaver. The free Community Edition has some limitations compared to the paid Ultimate edition (though all PostgreSQL core features are free).</p>
<h3 id="dbgate">DbGate</h3>
<p>DbGate is a free, open-source database client that runs both as a desktop application and as a web application. It supports SQL and NoSQL databases.</p>
<p><strong>Installation:</strong></p>
<pre><code class="language-bash"># Snap
sudo snap install dbgate

# Or download from https://dbgate.org/
</code></pre>
<p><strong>Strengths:</strong> Works in the browser (no installation needed for the web version). Supports PostgreSQL, MySQL, SQL Server, MongoDB, SQLite, CockroachDB, and more. Data archiving and comparison features. Active development.</p>
<p><strong>Weaknesses:</strong> Smaller community than DBeaver or pgAdmin. Some rough edges in the UI.</p>
<h3 id="pgcli-terminal">pgcli (Terminal)</h3>
<p>Already mentioned above, but worth emphasizing: pgcli is the best terminal-based PostgreSQL client. It provides intelligent autocompletion, syntax highlighting, and multi-line editing.</p>
<pre><code class="language-bash">pip install pgcli
# or
sudo dnf install pgcli
</code></pre>
<h3 id="visual-studio-code-with-postgresql-extension">Visual Studio Code with PostgreSQL Extension</h3>
<p>Microsoft released an official PostgreSQL extension for VS Code. It provides an object explorer, query editor with IntelliSense, schema visualization, and query history. Since many .NET developers already live in VS Code, this is a natural choice.</p>
<p><strong>Installation:</strong>
Search for &quot;PostgreSQL&quot; in the VS Code extensions marketplace and install the one by Microsoft.</p>
<h3 id="azure-data-studio">Azure Data Studio</h3>
<p>Azure Data Studio (formerly SQL Operations Studio) is Microsoft's cross-platform database tool. While it originated as a SQL Server tool, it supports PostgreSQL through an extension. It is free and open-source.</p>
<pre><code class="language-bash"># Download from https://learn.microsoft.com/en-us/azure-data-studio/download
# Or install via Snap/Flatpak
</code></pre>
<h3 id="adminer">Adminer</h3>
<p>Adminer is a single PHP file that provides a complete database management interface. If you have PHP installed, you can deploy it in seconds. It supports PostgreSQL, MySQL, SQLite, SQL Server, and Oracle.</p>
<pre><code class="language-bash"># Download the single file
wget https://www.adminer.org/latest.php -O adminer.php
php -S localhost:8080 adminer.php
# Open http://localhost:8080 in your browser
</code></pre>
<h3 id="comparison-summary">Comparison Summary</h3>
<p>For pure PostgreSQL administration, use <strong>pgAdmin</strong>. It has every feature and is maintained by the PostgreSQL team. For a general-purpose GUI that handles multiple databases beautifully, use <strong>DBeaver Community</strong>. For a fast, clean, modern developer experience, use <strong>Beekeeper Studio</strong>. For terminal work, use <strong>pgcli</strong>. For integration with your editor, use the <strong>VS Code PostgreSQL extension</strong>.</p>
<p>All of these tools are completely free and open-source. None require payment for any feature relevant to PostgreSQL development work on Linux.</p>
<h2 id="part-13-backup-and-restore">Part 13: Backup and Restore</h2>
<h3 id="pg_dump-and-pg_restore">pg_dump and pg_restore</h3>
<pre><code class="language-bash"># Dump a single database to a custom-format file (recommended)
pg_dump -h localhost -U myapp -d myappdb -Fc -f backup.dump

# Dump to plain SQL
pg_dump -h localhost -U myapp -d myappdb -f backup.sql

# Dump only the schema (no data)
pg_dump -h localhost -U myapp -d myappdb --schema-only -f schema.sql

# Dump only the data (no schema)
pg_dump -h localhost -U myapp -d myappdb --data-only -f data.sql

# Restore from custom format
pg_restore -h localhost -U myapp -d myappdb -c backup.dump

# Restore from plain SQL
psql -h localhost -U myapp -d myappdb -f backup.sql

# Dump all databases
pg_dumpall -h localhost -U postgres -f all-databases.sql
</code></pre>
<h3 id="postgresql-17-incremental-backups">PostgreSQL 17: Incremental Backups</h3>
# Enable WAL summarization (these two statements run in psql, not the shell)
<pre><code class="language-bash">psql -U postgres -c &quot;ALTER SYSTEM SET summarize_wal = on;&quot;
psql -U postgres -c &quot;SELECT pg_reload_conf();&quot;

# Take a full base backup
pg_basebackup -D /backups/full -Ft -z -P

# Take an incremental backup (only changes since last backup)
pg_basebackup -D /backups/incr1 --incremental /backups/full/backup_manifest -Ft -z -P

# Combine full + incremental for restore
pg_combinebackup /backups/full /backups/incr1 -o /backups/combined
</code></pre>
<h3 id="automated-backups-with-cron">Automated Backups with Cron</h3>
<pre><code class="language-bash"># Daily backup at 2 AM, keep 7 days
# Add to crontab: crontab -e
0 2 * * * pg_dump -h localhost -U myapp -d myappdb -Fc -f /backups/myappdb-$(date +\%Y\%m\%d).dump &amp;&amp; find /backups -name &quot;myappdb-*.dump&quot; -mtime +7 -delete
</code></pre>
<h2 id="part-14-common-sql-patterns-for.net-developers">Part 14: Common SQL Patterns for .NET Developers</h2>
<h3 id="pagination">Pagination</h3>
<pre><code class="language-sql">-- Offset-based (simple but slow for large offsets)
SELECT * FROM products ORDER BY id LIMIT 20 OFFSET 40;

-- Cursor-based (efficient for large datasets)
SELECT * FROM products WHERE id &gt; @LastId ORDER BY id LIMIT 20;
</code></pre>
<h3 id="upsert-insert-on-conflict">Upsert (INSERT ON CONFLICT)</h3>
<pre><code class="language-sql">INSERT INTO products (sku, name, price, stock)
VALUES ('WIDGET-001', 'Widget', 9.99, 100)
ON CONFLICT (sku)
DO UPDATE SET
    name = EXCLUDED.name,
    price = EXCLUDED.price,
    stock = EXCLUDED.stock;
</code></pre>
<h3 id="common-table-expressions-ctes">Common Table Expressions (CTEs)</h3>
<pre><code class="language-sql">-- Recursive CTE for hierarchical data (e.g., categories)
WITH RECURSIVE category_tree AS (
    -- Base case: root categories
    SELECT id, name, parent_id, 0 AS depth
    FROM categories
    WHERE parent_id IS NULL

    UNION ALL

    -- Recursive case: children
    SELECT c.id, c.name, c.parent_id, ct.depth + 1
    FROM categories c
    INNER JOIN category_tree ct ON c.parent_id = ct.id
)
SELECT * FROM category_tree ORDER BY depth, name;
</code></pre>
<h3 id="window-functions">Window Functions</h3>
<pre><code class="language-sql">-- Rank products by price within each category
SELECT name, category, price,
       RANK() OVER (PARTITION BY category ORDER BY price DESC) AS price_rank,
       AVG(price) OVER (PARTITION BY category) AS avg_category_price
FROM products;

-- Running total
SELECT order_date, total,
       SUM(total) OVER (ORDER BY order_date) AS running_total
FROM orders;
</code></pre>
<h3 id="generate_series">GENERATE_SERIES</h3>
<pre><code class="language-sql">-- Generate a date series (useful for reports with no gaps)
SELECT d::date AS day,
       COALESCE(SUM(o.total), 0) AS daily_total
FROM generate_series('2026-01-01'::date, '2026-01-31'::date, '1 day') AS d
LEFT JOIN orders o ON o.order_date::date = d::date
GROUP BY d::date
ORDER BY d::date;
</code></pre>
<h3 id="full-text-search">Full-Text Search</h3>
<pre><code class="language-sql">-- Add a tsvector column (or use a generated column)
ALTER TABLE products ADD COLUMN search_vector tsvector
    GENERATED ALWAYS AS (to_tsvector('english', name || ' ' || coalesce(description, ''))) STORED;

-- Create a GIN index
CREATE INDEX idx_products_search ON products USING GIN (search_vector);

-- Search
SELECT name, ts_rank(search_vector, query) AS rank
FROM products, to_tsquery('english', 'wireless &amp; keyboard') AS query
WHERE search_vector @@ query
ORDER BY rank DESC;
</code></pre>
<h2 id="part-15-opentelemetry-and-observability">Part 15: OpenTelemetry and Observability</h2>
<p>Npgsql has built-in OpenTelemetry support:</p>
<pre><code class="language-bash">dotnet add package Npgsql.OpenTelemetry
</code></pre>
<pre><code class="language-csharp">// Program.cs
builder.Services.AddNpgsqlDataSource(
    connectionString,
    dataSourceBuilder =&gt;
    {
        dataSourceBuilder.ConfigureTracing(tracing =&gt;
        {
            tracing.ConfigureCommandFilter(cmd =&gt;
                !cmd.CommandText.StartsWith(&quot;SELECT 1&quot;)); // Filter out health checks
        });
    }
);

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing =&gt;
    {
        tracing.AddNpgsql();
        tracing.AddAspNetCoreInstrumentation();
        tracing.AddOtlpExporter();
    });
</code></pre>
<p>This emits OpenTelemetry spans for every database command, including the SQL text (sanitized by default), duration, and error information. You can view these in Jaeger, Zipkin, Grafana Tempo, or any OpenTelemetry-compatible backend.</p>
<p>For metrics, Npgsql also exposes connection pool statistics (active connections, idle connections, pending requests) through OpenTelemetry; register a metrics pipeline alongside the tracing configuration above to collect them.</p>
<h2 id="part-16-security-best-practices">Part 16: Security Best Practices</h2>
<p>Always use SCRAM-SHA-256 authentication, never MD5 (deprecated in PostgreSQL 18). Always use SSL in production. Never use the <code>postgres</code> superuser for application connections; create dedicated users with minimal privileges.</p>
<pre><code class="language-sql">-- Create a read-only user
CREATE ROLE readonly_user WITH LOGIN PASSWORD 'secure-password';
GRANT CONNECT ON DATABASE myappdb TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user;

-- Create an application user with read/write but no DDL
CREATE ROLE app_user WITH LOGIN PASSWORD 'secure-password';
GRANT CONNECT ON DATABASE myappdb TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO app_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE, SELECT ON SEQUENCES TO app_user;
</code></pre>
<p>Use row-level security for multi-tenant applications:</p>
<pre><code class="language-sql">ALTER TABLE tenant_data ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON tenant_data
    USING (tenant_id = current_setting('app.current_tenant')::int);

-- In your application, set the tenant context per request:
-- SET app.current_tenant = '42';
</code></pre>
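<p>With pooled connections, the safest way to set the tenant per request is <code>SET LOCAL</code> inside a transaction, so the setting cannot leak to the next request that reuses the same physical connection. A minimal sketch, reusing the <code>app.current_tenant</code> setting from the policy above:</p>
<pre><code class="language-sql">BEGIN;
SET LOCAL app.current_tenant = '42';
SELECT * FROM tenant_data;  -- the policy filters rows to tenant 42
COMMIT;
-- SET LOCAL reverts automatically at COMMIT or ROLLBACK
</code></pre>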
<h2 id="part-17-migrating-from-sql-server-mental-models">Part 17: Migrating from SQL Server Mental Models</h2>
<p>Here is a quick reference for translating SQL Server concepts to PostgreSQL:</p>
<ul>
<li>SQL Server's <code>IDENTITY</code> becomes PostgreSQL's <code>SERIAL</code> or <code>GENERATED ALWAYS AS IDENTITY</code>.</li>
<li><code>NVARCHAR(MAX)</code> becomes <code>TEXT</code> (there is no performance difference between <code>VARCHAR(n)</code> and <code>TEXT</code> in PostgreSQL; <code>TEXT</code> is preferred).</li>
<li><code>DATETIME2</code> becomes <code>TIMESTAMPTZ</code> (always use the timezone-aware variant).</li>
<li><code>BIT</code> becomes <code>BOOLEAN</code>.</li>
<li><code>UNIQUEIDENTIFIER</code> becomes <code>UUID</code>.</li>
<li><code>NVARCHAR(n)</code> becomes <code>VARCHAR(n)</code> or <code>TEXT</code> (PostgreSQL stores all text as UTF-8 by default; there is no separate <code>N</code> prefix).</li>
<li><code>TOP n</code> becomes <code>LIMIT n</code>.</li>
<li><code>ISNULL()</code> becomes <code>COALESCE()</code>.</li>
<li><code>GETDATE()</code> becomes <code>now()</code> or <code>CURRENT_TIMESTAMP</code>.</li>
<li>Square-bracket quoting <code>[column]</code> becomes double-quote quoting <code>&quot;column&quot;</code>, but you should use <code>snake_case</code> and avoid quoting entirely.</li>
<li><code>@@IDENTITY</code> / <code>SCOPE_IDENTITY()</code> becomes the <code>RETURNING id</code> clause.</li>
<li>Stored procedures written in T-SQL become PostgreSQL functions or procedures written in PL/pgSQL, though many .NET developers prefer to keep logic in the application layer.</li>
</ul>
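<p>The <code>RETURNING</code> clause deserves a concrete illustration, since it replaces a separate round trip for <code>SCOPE_IDENTITY()</code>. A sketch with illustrative table and column names:</p>
<pre><code class="language-sql">-- T-SQL: INSERT ...; SELECT SCOPE_IDENTITY();
-- PostgreSQL: one statement returns the generated key
INSERT INTO orders (customer_id, total)
VALUES (42, 99.50)
RETURNING id;
</code></pre>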
<h2 id="conclusion">Conclusion</h2>
<p>PostgreSQL is a world-class database that is completely free, fully featured, and exceptionally well-supported in the .NET ecosystem through Npgsql. Whether you are building a small side project or an enterprise application, PostgreSQL provides everything you need: MVCC concurrency that eliminates the locking headaches of SQL Server, a rich type system with native JSON, arrays, and full-text search support, excellent performance through the new AIO subsystem in PostgreSQL 18, and first-class .NET integration through Npgsql with both Dapper and Entity Framework Core.</p>
<p>The tooling on Linux is mature and diverse. pgAdmin gives you full administration capabilities, DBeaver gives you a universal GUI, Beekeeper Studio gives you a beautiful modern interface, pgcli gives you a superb terminal experience, and VS Code gives you database access without leaving your editor. All of it is free. All of it is open source.</p>
<p>The configuration is straightforward once you understand the two key files: <code>postgresql.conf</code> for server behavior and <code>pg_hba.conf</code> for authentication. Docker and Podman make it trivially easy to spin up PostgreSQL for development. And with the connection pooling built into Npgsql (or external via PgBouncer), your ASP.NET applications can handle massive concurrent loads efficiently.</p>
<p>If you are coming from SQL Server, the transition is smoother than you might expect. The SQL is standard. The concepts are familiar. The main adjustments are embracing MVCC (and forgetting about <code>NOLOCK</code>), adopting <code>snake_case</code> naming conventions, and learning the PostgreSQL-specific extensions like JSONB, arrays, and full-text search that do not have direct SQL Server equivalents.</p>
<p>Welcome to PostgreSQL. Your database just became free forever.</p>
]]></content:encoded>
      <category>postgresql</category>
      <category>npgsql</category>
      <category>dotnet</category>
      <category>dapper</category>
      <category>efcore</category>
      <category>linux</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>SQL Server: The Complete Guide for .NET Developers — From SSMS to T-SQL to Production Best Practices</title>
      <link>https://observermagazine.github.io/blog/sql-server-complete-guide</link>
      <description>Everything a .NET/C#/ASP.NET developer needs to know about SQL Server — covering versions 2016 through 2025, SSMS 21 and 22, SQL Profiler, sqlcmd, T-SQL, transactions, locking, networking, sessions, debugging, and production best practices.</description>
      <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/sql-server-complete-guide</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>SQL Server is the database engine that powers a massive share of the .NET ecosystem. Whether you are building an ASP.NET Core Web API backed by Entity Framework Core, a Blazor application hitting a data layer, or a legacy Web Forms app with hand-crafted stored procedures, SQL Server is likely somewhere in your stack. Despite its ubiquity, many .NET developers treat the database as a black box — they write LINQ queries, hope EF Core generates something reasonable, and call it a day.</p>
<p>This guide exists to change that. We will walk through everything a practicing .NET developer should know about SQL Server: the evolution of features across versions 2016 through 2025, how to use SQL Server Management Studio (SSMS) like a power user, how to work from the terminal with sqlcmd, the fundamentals and advanced corners of T-SQL, how transactions and locking actually work, networking and session management, debugging production issues, and the best practices that separate a smooth-running production system from a 3 AM pager alert.</p>
<p>This is a long article. Bookmark it and come back. Let us begin.</p>
<hr />
<h2 id="part-1-sql-server-versions-what-shipped-and-why-it-matters">Part 1: SQL Server Versions — What Shipped and Why It Matters</h2>
<p>Understanding which features landed in which version is critical. Your production server might be running SQL Server 2019 while your development machine has 2022. Knowing the boundaries prevents you from writing code that works locally and fails in staging.</p>
<h3 id="sql-server-2016-version-13.x">SQL Server 2016 (Version 13.x)</h3>
<p>SQL Server 2016 was a watershed release. It introduced temporal tables — system-versioned tables that automatically track the full history of data changes, letting you query data as it existed at any point in the past using the <code>FOR SYSTEM_TIME</code> clause. It brought row-level security, allowing you to define predicate functions that filter rows based on the identity of the executing user, directly within the database engine rather than in application code. Dynamic data masking arrived, enabling you to obscure sensitive columns (like email addresses or credit card numbers) so that unprivileged users see masked values while authorized users see the real data.</p>
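<p>A temporal query looks like ordinary T-SQL plus the <code>FOR SYSTEM_TIME</code> clause. A minimal sketch, assuming <code>dbo.Orders</code> is a system-versioned table:</p>
<pre><code class="language-sql">-- The table as it existed at a point in the past
SELECT * FROM dbo.Orders
FOR SYSTEM_TIME AS OF '2016-06-01T00:00:00';

-- Every version of a row over a window
SELECT * FROM dbo.Orders
FOR SYSTEM_TIME BETWEEN '2016-01-01' AND '2016-12-31'
WHERE Id = 42;
</code></pre>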
<p>The Always Encrypted feature debuted in 2016, providing client-side encryption of sensitive columns such that the database engine itself never sees the plaintext values — the encryption and decryption happen entirely in the client driver, which is critical for compliance scenarios.</p>
<p>On the performance front, 2016 introduced the Query Store — a built-in flight recorder for query plans and runtime statistics. The Query Store captures the execution plan history for every query, along with resource consumption metrics, making it straightforward to identify plan regressions and force a known-good plan without touching application code. This single feature changed how DBAs and developers troubleshoot performance problems.</p>
<p>JSON support also landed in 2016 with <code>FOR JSON</code>, <code>OPENJSON</code>, <code>JSON_VALUE</code>, and <code>JSON_QUERY</code> functions, though at this stage JSON was stored as plain <code>NVARCHAR</code> with no dedicated data type. R Services (later renamed Machine Learning Services) allowed you to execute R scripts directly inside the database engine.</p>
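<p>Those functions cover both directions of the JSON boundary. A short sketch:</p>
<pre><code class="language-sql">DECLARE @j NVARCHAR(MAX) = N'{&quot;id&quot;: 1, &quot;name&quot;: &quot;Widget&quot;}';

SELECT JSON_VALUE(@j, '$.name');   -- scalar extraction: Widget
SELECT * FROM OPENJSON(@j);        -- shred into key/value rows

-- Build JSON from a result set
SELECT 1 AS id, 'Widget' AS name
FOR JSON PATH;
</code></pre>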
<h3 id="sql-server-2017-version-14.x">SQL Server 2017 (Version 14.x)</h3>
<p>The headline of SQL Server 2017 was Linux support. For the first time, SQL Server ran natively on Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server, and was also available as a Docker container. This was a seismic shift — it meant you could run SQL Server in your CI pipeline on a Linux agent, deploy it on Kubernetes, or use it on a Mac for development via Docker.</p>
<p>Adaptive query processing appeared, where the query optimizer could adjust join strategies (for example, switching from a nested loop to a hash join) during execution based on actual row counts, and memory grant feedback allowed the engine to learn from previous executions and adjust memory allocations automatically. Graph database support was introduced with the <code>NODE</code> and <code>EDGE</code> table types, enabling you to model and query complex relationship-heavy data (think social networks, recommendation engines, or fraud detection graphs) using the <code>MATCH</code> pattern in T-SQL. Python support was added to Machine Learning Services alongside R, and automatic database tuning debuted — the engine could detect plan regressions and automatically force the last known good execution plan.</p>
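<p>A minimal sketch of the graph syntax, with illustrative table names:</p>
<pre><code class="language-sql">CREATE TABLE Person (Id INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
CREATE TABLE FriendOf AS EDGE;

-- Who are Alice's direct friends?
SELECT p2.Name
FROM Person p1, FriendOf f, Person p2
WHERE MATCH(p1-(f)-&gt;p2) AND p1.Name = N'Alice';
</code></pre>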
<h3 id="sql-server-2019-version-15.x">SQL Server 2019 (Version 15.x)</h3>
<p>SQL Server 2019 brought Intelligent Query Processing (IQP) to the forefront with a suite of features: table variable deferred compilation (so the optimizer no longer assumed one row for table variables), batch mode on rowstore (previously batch mode was only available for columnstore indexes), and scalar UDF inlining (the optimizer could inline simple scalar functions directly into the calling query's plan, eliminating the per-row function call overhead that made scalar UDFs so notoriously slow).</p>
<p>Big Data Clusters were introduced (and later deprecated in SQL Server 2025, so do not invest new work here). Accelerated database recovery (ADR) fundamentally changed the crash recovery model by using a persistent version store, making recovery time proportional to the longest uncommitted transaction rather than the amount of work in the log. This was a game-changer for databases with long-running transactions.</p>
<p>UTF-8 collation support arrived, allowing you to use <code>VARCHAR</code> columns with UTF-8 encoding instead of needing <code>NVARCHAR</code> for international text, which could significantly reduce storage for data that is mostly ASCII but needs occasional Unicode support. The <code>OPTIMIZE_FOR_SEQUENTIAL_KEY</code> index option addressed the last-page insert contention problem common in tables with identity columns under high-concurrency inserts.</p>
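<p>Both 2019 features appear in DDL. A sketch with illustrative names (the collation shown is one of the built-in <code>_UTF8</code> collations):</p>
<pre><code class="language-sql">CREATE TABLE dbo.Events (
    Id INT IDENTITY NOT NULL,
    Payload VARCHAR(400) COLLATE Latin1_General_100_CI_AS_SC_UTF8  -- UTF-8 in VARCHAR
);

-- Reduce last-page insert contention under concurrent identity inserts
CREATE UNIQUE CLUSTERED INDEX IX_Events_Id
    ON dbo.Events (Id)
    WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);
</code></pre>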
<h3 id="sql-server-2022-version-16.x">SQL Server 2022 (Version 16.x)</h3>
<p>SQL Server 2022 was a major step toward cloud integration and performance modernization. The Intelligent Query Processing suite expanded further with Parameter Sensitivity Plan Optimization (PSP optimization) — the optimizer could now create multiple cached plans for the same parameterized query if it detected that different parameter values led to fundamentally different optimal plans. This directly attacked the classic parameter sniffing problem that has plagued SQL Server developers for decades. You no longer had to pepper your stored procedures with <code>OPTION (RECOMPILE)</code> or use the <code>OPTIMIZE FOR</code> hint as a band-aid.</p>
<p>Degree of Parallelism (DOP) feedback allowed the engine to learn the ideal degree of parallelism for a query over repeated executions and adjust it automatically, rather than relying on a server-wide <code>MAXDOP</code> setting. Cardinality estimation (CE) feedback let the optimizer correct persistent misestimates over time.</p>
<p>Ledger tables were introduced for tamper-evident data — the database maintains a cryptographic hash chain of all changes, allowing you to prove that data has not been modified outside of normal transactions. This is valuable for auditing and regulatory compliance without the complexity of a full blockchain.</p>
<p>Contained Availability Groups made it possible to include instance-level objects (logins, SQL Agent jobs, linked servers) inside the AG, so failover truly moved everything you needed. The <code>LEAST</code> and <code>GREATEST</code> functions finally arrived (yes, it took until 2022 to get these built-in). The <code>DATETRUNC</code> function, <code>GENERATE_SERIES</code>, <code>STRING_SPLIT</code> with an ordinal column, and <code>WINDOW</code> clause for cleaner window function syntax all simplified common T-SQL patterns.</p>
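<p>The new 2022 functions in one place, as a quick sketch:</p>
<pre><code class="language-sql">SELECT LEAST(3, 7, 1) AS lo, GREATEST(3, 7, 1) AS hi;      -- 1, 7
SELECT DATETRUNC(month, SYSDATETIME());                     -- first instant of this month
SELECT value FROM GENERATE_SERIES(1, 5);                    -- 1 through 5
SELECT value, ordinal FROM STRING_SPLIT('a,b,c', ',', 1);   -- split with positions
</code></pre>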
<p>On the connectivity side, SQL Server 2022 introduced TDS 8.0 with support for TLS 1.3 and strict encryption mode, where the connection is encrypted before the login handshake even begins.</p>
<p>The Query Store was enabled by default on new databases in SQL Server 2022, and Query Store hints became generally available — you could apply query hints (like <code>MAXDOP</code>, <code>RECOMPILE</code>, or <code>USE HINT</code>) to specific queries identified by their Query Store query_id, without modifying application code.</p>
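<p>Query Store hints are applied with a system procedure, keyed by the query's Query Store id (the ids below are placeholders):</p>
<pre><code class="language-sql">-- Pin a hint to query_id 39 without touching application code
EXEC sys.sp_query_store_set_hints
    @query_id = 39,
    @query_hints = N'OPTION (MAXDOP 1)';

-- Remove it later
EXEC sys.sp_query_store_clear_hints @query_id = 39;
</code></pre>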
<h3 id="sql-server-2025-version-17.x">SQL Server 2025 (Version 17.x)</h3>
<p>SQL Server 2025 reached general availability on November 18, 2025 at Microsoft Ignite. It is the most AI-focused release in SQL Server history, while simultaneously delivering substantial improvements for traditional workloads.</p>
<p>The native JSON data type is the headline developer feature. After a decade of storing JSON as <code>NVARCHAR</code>, SQL Server 2025 provides a proper <code>JSON</code> column type with optimized storage and native indexing. This means JSON data is stored in an efficient binary format internally, queries against JSON properties are faster, and you get schema validation at the engine level.</p>
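<p>A minimal sketch of the native type, with illustrative names:</p>
<pre><code class="language-sql">CREATE TABLE dbo.Profiles (
    Id INT IDENTITY PRIMARY KEY,
    Data JSON NOT NULL
);

INSERT INTO dbo.Profiles (Data)
VALUES (N'{&quot;name&quot;: &quot;Ada&quot;, &quot;tags&quot;: [&quot;dev&quot;, &quot;math&quot;]}');

SELECT JSON_VALUE(Data, '$.name') FROM dbo.Profiles;
</code></pre>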
<p>The native vector data type and built-in vector search bring AI and machine learning capabilities directly into the database engine. You can store embeddings (arrays of floating-point numbers produced by ML models) in <code>VECTOR</code> columns and perform similarity searches using distance functions like cosine similarity, all in T-SQL. For .NET developers building retrieval-augmented generation (RAG) applications, this eliminates the need for a separate vector database.</p>
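<p>A sketch of vector storage and search. A 3-dimensional vector keeps the example readable; real embeddings typically have hundreds or thousands of dimensions, and all names here are illustrative:</p>
<pre><code class="language-sql">CREATE TABLE dbo.Docs (
    Id INT IDENTITY PRIMARY KEY,
    Embedding VECTOR(3)
);

INSERT INTO dbo.Docs (Embedding) VALUES ('[1.0, 0.0, 0.0]');

-- Five nearest neighbours by cosine distance
SELECT TOP 5 Id,
       VECTOR_DISTANCE('cosine', Embedding,
                       CAST('[0.9, 0.1, 0.0]' AS VECTOR(3))) AS dist
FROM dbo.Docs
ORDER BY dist;
</code></pre>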
<p>T-SQL enhancements are substantial: <code>REGEX</code> functions for pattern matching (you no longer need CLR assemblies or <code>LIKE</code> with wildcards for complex patterns), fuzzy string matching functions, and the ability to call external REST endpoints directly from T-SQL using <code>sp_invoke_external_rest_endpoint</code>. You can generate text embeddings and chunks directly in T-SQL, which is remarkable for in-database AI pipelines.</p>
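<p>A quick sketch of the regex functions, applied here to a catalog view so it runs anywhere:</p>
<pre><code class="language-sql">-- Databases whose names start with a letter in a-m
SELECT name
FROM sys.databases
WHERE REGEXP_LIKE(name, '^[a-m]');

-- Normalize runs of whitespace to a single space
SELECT REGEXP_REPLACE('too   many   spaces', '\s+', ' ');
</code></pre>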
<p>Optimized locking is a major engine improvement. SQL Server 2025 reworks the locking subsystem to reduce lock memory consumption and contention, which is particularly beneficial for high-concurrency OLTP workloads. Transaction ID (TID) locking replaces row-level locking after qualification, reducing the number of locks held and the potential for deadlocks.</p>
<p>Optional Parameter Plan Optimization (OPPO) is the evolution of PSP optimization from 2022, allowing the query optimizer to generate multiple plans for parameterized queries with even finer granularity.</p>
<p>The <code>abort_query_execution</code> hint lets DBAs block known-problematic queries from executing at all, which is a powerful safety net for production systems where a single bad query can bring down the server.</p>
<p>SQL Server Reporting Services (SSRS) is discontinued starting with 2025; on-premises reporting is consolidated under Power BI Report Server (PBIRS).</p>
<p>On the platform side, SQL Server 2025 on Linux adds TLS 1.3, custom password policies, and signed container images. Platform support extends to RHEL 10 and Ubuntu 24.04. The Express edition maximum database size jumps to 50 GB (up from 10 GB), and the Express Advanced edition is consolidated into the base Express edition with all features included.</p>
<p>Standard edition capacity limits increase to 4 sockets or 32 cores, which is meaningful for mid-tier workloads that previously required Enterprise licensing.</p>
<p>Change event streaming allows you to stream changes directly from the transaction log to Azure Event Hubs, providing a lower-overhead alternative to Change Data Capture (CDC) for real-time event-driven architectures.</p>
<hr />
<h2 id="part-2-sql-server-management-studio-ssms-mastering-the-tool">Part 2: SQL Server Management Studio (SSMS) — Mastering the Tool</h2>
<p>SSMS is where most .NET developers spend their SQL Server time. As of March 2026, there are two current major versions: SSMS 21 and SSMS 22.</p>
<h3 id="ssms-21-and-ssms-22-overview">SSMS 21 and SSMS 22 Overview</h3>
<p>Both SSMS 21 and 22 are built on the Visual Studio 2022 shell, making them 64-bit applications. This is a significant departure from SSMS 18, 19, and 20, which used the 32-bit Visual Studio 2017 isolated shell. The practical impact is that SSMS 21/22 can handle much larger result sets and more complex execution plans without running out of memory.</p>
<p>SSMS is completely free and standalone. It does not require a SQL Server license, and it is not tied to any specific SQL Server edition or version. You can manage SQL Server 2012 through 2025, Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics from a single SSMS installation.</p>
<p>SSMS 22 is the latest as of March 2026, with version 22.4.1 released on March 18, 2026. It introduces initial ARM64 support, GitHub Copilot integration (preview), a rebuilt connection dialog, and native support for SQL Server 2025 features like the vector data type.</p>
<h3 id="installation">Installation</h3>
<p>Install SSMS using the Visual Studio Installer. The download from the official Microsoft page is a small bootstrapper that fetches the actual components. You do not need to install full Visual Studio; the installer handles the shell components automatically.</p>
<p>You can also install via the command line:</p>
<pre><code>winget install Microsoft.SQLServerManagementStudio
</code></pre>
<p>For SSMS 21 specifically:</p>
<pre><code>winget install Microsoft.SQLServerManagementStudio.21
</code></pre>
<p>SSMS 21 and 22 can coexist with SSMS 20 or earlier. You do not need to uninstall your old version first. Migrate your settings when you are comfortable.</p>
<h3 id="the-connection-dialog">The Connection Dialog</h3>
<p>When you connect to SQL Server, pay attention to the encryption settings. SSMS 22 defaults to mandatory encryption (<code>-Nm</code> behavior), which is a breaking change from earlier versions. If you are connecting to a development SQL Server that uses a self-signed certificate, you may need to check &quot;Trust server certificate&quot; or the connection will fail with a certificate validation error. In production, you should use a proper certificate from a trusted CA and set the encryption mode to Strict (available for SQL Server 2022 and later), which uses TDS 8.0 and encrypts before the TLS handshake.</p>
<p>The authentication dropdown now includes Microsoft Entra (formerly Azure Active Directory) options: MFA, Interactive, Managed Identity, Service Principal, and Default. If your organization uses Entra ID for SQL Database or Managed Instance, these are the correct authentication methods.</p>
<h3 id="ssms-features-every-developer-should-use">SSMS Features Every Developer Should Use</h3>
<p><strong>Object Explorer</strong> is the tree view on the left. Right-clicking on any object gives you context-specific options. Right-click a table and choose &quot;Script Table as &gt; SELECT To &gt; New Query Window&quot; to generate a SELECT statement. Right-click a stored procedure and choose &quot;Modify&quot; to open its definition for editing. Right-click a database and go to &quot;Reports &gt; Standard Reports&quot; for built-in reports on disk usage, index physical statistics, top queries by total CPU time, and more.</p>
<p><strong>Activity Monitor</strong> (right-click the server name in Object Explorer and select &quot;Activity Monitor&quot;) shows real-time data about processes, resource waits, data file I/O, and expensive queries. This is your first stop when something is slow. The &quot;Recent Expensive Queries&quot; pane shows the top queries by CPU, duration, physical reads, and logical writes. Click any query to see its execution plan.</p>
<p><strong>Execution Plans</strong> are the single most important diagnostic tool. Before running a query, press <code>Ctrl+L</code> to display the estimated execution plan without actually executing the query. Press <code>Ctrl+M</code> to enable &quot;Include Actual Execution Plan,&quot; then execute the query with <code>F5</code> — the actual plan appears in a new tab showing real row counts, actual vs. estimated rows, memory grants, and other runtime statistics.</p>
<p>When reading an execution plan, read from right to left and top to bottom. The width of the arrows between operators indicates the relative number of rows flowing through. Look for large discrepancies between estimated and actual rows — these indicate stale statistics or cardinality estimation problems. Look for Key Lookups (a nonclustered index found the rows but needed to go back to the clustered index to fetch additional columns), which often suggest adding included columns to the nonclustered index. Look for Table Scans and Clustered Index Scans on large tables, which may indicate missing indexes or non-sargable WHERE clauses.</p>
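<p>The usual fix for a Key Lookup is to make the nonclustered index covering. A sketch with illustrative table and column names:</p>
<pre><code class="language-sql">-- Before: seek on CustomerId, then a Key Lookup per row to fetch
-- OrderDate and Total from the clustered index.
-- After: the index carries those columns and the lookup disappears.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Total);
</code></pre>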
<p>Right-click any operator in the plan to see its properties, including the output list (columns it produces), predicates, memory fractions, estimated CPU and I/O cost, and the actual number of rows vs. estimated. Hover over the thick arrows to see the number of rows.</p>
<p><strong>Include Live Query Statistics</strong> (<code>Ctrl+Alt+L</code> before executing) shows the execution plan with real-time progress animation — you can literally watch rows flow through the operators as the query runs. This is invaluable for long-running queries because you can see exactly where the query is spending time without waiting for it to finish.</p>
<p><strong>Query Store UI</strong> is accessed by expanding a database in Object Explorer, then expanding &quot;Query Store.&quot; Here you find built-in reports: Top Resource Consuming Queries, Regressed Queries, Overall Resource Consumption, and Forced Plans. The Regressed Queries view is particularly useful — it shows queries whose performance has degraded compared to historical execution, and lets you force a previous, better-performing plan with a single click. This is one of the most powerful features in SQL Server for application developers who deploy code changes and notice performance degradation.</p>
<p><strong>Template Explorer</strong> (<code>Ctrl+Alt+T</code>) provides pre-built T-SQL templates for common tasks like creating indexes, adding constraints, or configuring replication. Each template has placeholder parameters that SSMS highlights for you to fill in.</p>
<p><strong>SQLCMD Mode</strong> in SSMS lets you use sqlcmd-specific commands directly in the query editor. Enable it from the Query menu. In SQLCMD mode, you can use <code>:CONNECT</code> to connect to a different server mid-script, <code>:r</code> to include external script files, and scripting variables with <code>$(VariableName)</code> syntax. This is useful for deployment scripts that target multiple servers.</p>
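<p>A minimal deployment-style sketch of these commands (the server name, variable, and file names are placeholders):</p>
<pre><code class="language-sql">-- Run with SQLCMD Mode enabled (Query &gt; SQLCMD Mode)
:SETVAR TargetDb Staging
:CONNECT staging-server01
USE $(TargetDb);
GO
:r .\create_tables.sql
:r .\seed_data.sql
</code></pre>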
<p><strong>Multi-Server Queries</strong>: You can register multiple servers in the &quot;Registered Servers&quot; window (<code>Ctrl+Alt+G</code>), create server groups, and then execute a query simultaneously against all servers in a group. The results come back with an additional column showing which server produced each row.</p>
<p><strong>Keyboard Shortcuts</strong>: <code>F5</code> executes the selected text (or the entire batch if nothing is selected). <code>Ctrl+E</code> also executes. <code>Ctrl+L</code> shows the estimated plan. <code>Ctrl+K, Ctrl+C</code> comments the selection, <code>Ctrl+K, Ctrl+U</code> uncomments. <code>Ctrl+Shift+U</code> uppercases the selection, <code>Ctrl+Shift+L</code> lowercases. <code>Alt+F1</code> with a table name selected runs <code>sp_help</code> on it. <code>Ctrl+R</code> toggles the results pane. <code>Ctrl+T</code> switches results to text mode (which is often more readable for narrow result sets). <code>Ctrl+D</code> switches results to grid mode.</p>
<p><strong>Snippets</strong>: SSMS supports code snippets. Press <code>Ctrl+K, Ctrl+X</code> to insert a snippet. You can create custom snippets for your frequently-used T-SQL patterns by adding XML files to the snippets directory.</p>
<p><strong>Search</strong>: SSMS 21 and 22 include a search bar at the top (<code>Ctrl+Q</code>) with two modes — Feature Search (find SSMS settings and commands) and Code Search (find strings in files, folders, or repositories). Feature Search is particularly handy when you cannot remember where a setting lives — just type &quot;line numbers&quot; and it shows you the option to toggle line numbers on or off.</p>
<p><strong>Tabs</strong>: SSMS 21/22 supports multi-row tabs and configurable tab positions (top, left, or right). Right-click on a tab strip and choose &quot;Set Tab Layout&quot; to change this. With dozens of query windows open, multi-row tabs are a sanity saver.</p>
<p><strong>Git Integration</strong>: SSMS 21/22 includes Git and GitHub integration. You can initialize a local repository, commit script changes, push to GitHub, and track historical changes to your SQL files directly within SSMS. This is accessible from the Git menu. For teams that version-control their database scripts, this eliminates the need to switch to a separate Git client.</p>
<h3 id="sql-profiler-and-extended-events">SQL Profiler and Extended Events</h3>
<p><strong>SQL Profiler</strong> is the legacy tracing tool included with SSMS. Launch it from Tools &gt; SQL Server Profiler. It lets you capture a real-time stream of events happening on the server: query executions, RPC calls, logins, errors, deadlocks, and more.</p>
<p>To use SQL Profiler effectively: create a new trace, connect to your server, and in the &quot;Events Selection&quot; tab, be selective about what you capture. Capturing everything will generate massive amounts of data and impose significant overhead on the server. For a typical debugging session, include these events:</p>
<ul>
<li><strong>SQL:BatchCompleted</strong> — captures the text of each completed batch along with duration, CPU, reads, and writes</li>
<li><strong>RPC:Completed</strong> — captures stored procedure calls (this is what you see from parameterized queries sent by EF Core or Dapper)</li>
<li><strong>Showplan XML</strong> — captures the actual execution plan for each query (high overhead, use sparingly)</li>
<li><strong>Deadlock graph</strong> — captures the XML deadlock graph whenever a deadlock occurs</li>
</ul>
<p>In the &quot;Column Filters&quot; tab, filter by DatabaseName (to avoid capturing system database activity), Duration (set a minimum to only capture slow queries), and ApplicationName (to isolate traffic from your specific application).</p>
<p><strong>Important</strong>: SQL Profiler is deprecated. Microsoft recommends using Extended Events instead. However, Profiler remains included in SSMS and is still the quickest way to answer &quot;what queries is my application actually sending to the server?&quot; during development. Just do not run Profiler against a production server under heavy load — the overhead is real and can cause performance problems.</p>
<p><strong>Extended Events</strong> (XEvents) is the modern replacement for Profiler. It is built into the SQL Server engine and has dramatically lower overhead. In SSMS, expand your server in Object Explorer, go to Management &gt; Extended Events &gt; Sessions. You can create new sessions through the GUI (New Session Wizard or New Session dialog) or with T-SQL.</p>
<p>A common Extended Events session for development captures slow queries:</p>
<pre><code class="language-sql">CREATE EVENT SESSION [SlowQueries] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    SET collect_batch_text = 1
    ACTION (
        sqlserver.sql_text,
        sqlserver.database_name,
        sqlserver.client_app_name,
        sqlserver.session_id
    )
    WHERE duration &gt; 1000000  -- 1 second in microseconds
)
ADD TARGET package0.event_file (
    SET filename = N'SlowQueries.xel',
        max_file_size = 50  -- MB
)
WITH (
    MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 5 SECONDS,
    STARTUP_STATE = ON
);
GO

ALTER EVENT SESSION [SlowQueries] ON SERVER STATE = START;
</code></pre>
<p>You can then view the captured events by right-clicking the session in Object Explorer and choosing &quot;Watch Live Data&quot; for a real-time feed, or double-clicking the event file target to open captured data in the SSMS viewer with full filtering and grouping capabilities.</p>
<p>For deadlock analysis, SQL Server maintains a built-in Extended Events session called <code>system_health</code> that captures deadlock graphs among other diagnostic events. You can query it:</p>
<pre><code class="language-sql">SELECT
    xdr.value('@timestamp', 'datetime2') AS deadlock_time,
    xdr.query('.') AS deadlock_graph
FROM (
    SELECT CAST(target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS data
CROSS APPLY target_data.nodes('//RingBufferTarget/event[@name=&quot;xml_deadlock_report&quot;]') AS XEventData(xdr);
</code></pre>
<hr />
<h2 id="part-3-working-with-sql-server-from-the-terminal">Part 3: Working with SQL Server from the Terminal</h2>
<p>Not every interaction with SQL Server requires opening SSMS. For scripting, automation, CI/CD pipelines, and quick checks, the command line is often faster.</p>
<h3 id="sqlcmd-the-classic-and-the-modern">sqlcmd — The Classic and the Modern</h3>
<p>There are two variants of sqlcmd:</p>
<p><strong>sqlcmd (ODBC)</strong> is the traditional command-line utility that ships with SQL Server and the ODBC driver. It has been around for decades.</p>
<p><strong>sqlcmd (Go)</strong> — also called go-sqlcmd — is the modern, cross-platform replacement built on the go-mssqldb driver. It runs on Windows, macOS, and Linux. It is open source under the MIT license. Install it with:</p>
<pre><code>winget install sqlcmd
</code></pre>
<p>Or on macOS:</p>
<pre><code>brew install sqlcmd
</code></pre>
<p>Or on Linux via the Microsoft package repository. The Go variant supports all the same commands as the ODBC version plus additional features: syntax coloring in the terminal, vertical result format (much easier to read wide rows), Docker container management (<code>sqlcmd create mssql</code> spins up a SQL Server container), and broader Microsoft Entra authentication support.</p>
<h3 id="connecting">Connecting</h3>
<p>Connect with Windows Authentication to a local default instance:</p>
<pre><code>sqlcmd -S localhost -E
</code></pre>
<p>Connect with SQL Authentication:</p>
<pre><code>sqlcmd -S myserver.database.windows.net -U myuser
</code></pre>
<p>The Go variant no longer accepts <code>-P</code> on the command line for the password (security improvement). It prompts you, or you can set the <code>SQLCMDPASSWORD</code> environment variable.</p>
<p>Connect to a named instance:</p>
<pre><code>sqlcmd -S localhost\SQLEXPRESS
</code></pre>
<p>Connect using a specific protocol:</p>
<pre><code>sqlcmd -S tcp:myserver,1433
sqlcmd -S np:\\myserver\pipe\sql\query
</code></pre>
<h3 id="running-queries">Running Queries</h3>
<p>Interactive mode:</p>
<pre><code>1&gt; SELECT name, database_id FROM sys.databases;
2&gt; GO
</code></pre>
<p>The <code>GO</code> keyword is the batch terminator — it tells sqlcmd to send everything typed so far to the server. <code>GO</code> is not a T-SQL keyword; it is a client-side command recognized by sqlcmd and SSMS.</p>
<p>Run a single query and exit:</p>
<pre><code>sqlcmd -S localhost -d MyDatabase -Q &quot;SELECT TOP 10 * FROM Customers&quot;
</code></pre>
<p>Run a script file:</p>
<pre><code>sqlcmd -S localhost -d MyDatabase -i deploy_schema.sql -o results.txt
</code></pre>
<p>Run multiple script files in order:</p>
<pre><code>sqlcmd -S localhost -i schema.sql data.sql indexes.sql
</code></pre>
<p>Use scripting variables:</p>
<pre><code>sqlcmd -S localhost -v DatabaseName=&quot;Production&quot; -i create_db.sql
</code></pre>
<p>In the script, reference the variable as <code>$(DatabaseName)</code>.</p>
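<p>A hypothetical <code>create_db.sql</code> might look like this — sqlcmd substitutes <code>$(DatabaseName)</code> before sending the batch to the server:</p>
<pre><code class="language-sql">-- create_db.sql: $(DatabaseName) is replaced by sqlcmd before execution
IF DB_ID(N'$(DatabaseName)') IS NULL
    CREATE DATABASE [$(DatabaseName)];
GO
USE [$(DatabaseName)];
GO
</code></pre>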
<h3 id="piping-and-automation">Piping and Automation</h3>
<p>You can pipe SQL directly:</p>
<pre><code>echo &quot;SELECT @@VERSION&quot; | sqlcmd -S localhost
</code></pre>
<p>This is useful in shell scripts and CI pipelines. When piping input, <code>GO</code> batch terminators are optional — sqlcmd automatically executes the batch when input ends.</p>
<h3 id="checking-your-connection">Checking Your Connection</h3>
<p>Once connected, useful diagnostic queries:</p>
<pre><code class="language-sql">-- What version am I connected to?
SELECT @@VERSION;
GO

-- What protocol am I using?
SELECT net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
GO

-- What database am I in?
SELECT DB_NAME();
GO

-- What login am I?
SELECT SUSER_SNAME();
GO
</code></pre>
<h3 id="powershell-integration">PowerShell Integration</h3>
<p>The <code>Invoke-Sqlcmd</code> cmdlet (part of the SqlServer PowerShell module) lets you run queries from PowerShell:</p>
<pre><code class="language-powershell">Install-Module -Name SqlServer
Invoke-Sqlcmd -ServerInstance &quot;localhost&quot; -Database &quot;MyDb&quot; -Query &quot;SELECT TOP 5 * FROM Products&quot;
</code></pre>
<p>The SqlServer module also includes cmdlets for backup, restore, reading error logs, and managing availability groups.</p>
<h3 id="docker-for-development">Docker for Development</h3>
<p>The Go sqlcmd can spin up a SQL Server container in seconds:</p>
<pre><code>sqlcmd create mssql --accept-eula --tag 2025-latest
</code></pre>
<p>This pulls the SQL Server 2025 container image, starts it, and connects sqlcmd to it. You can also restore a sample database in the same command:</p>
<pre><code>sqlcmd create mssql --accept-eula --tag 2025-latest --using https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Full.bak
</code></pre>
<p>For .NET developers, this is the fastest way to get a throwaway SQL Server instance for integration tests.</p>
<hr />
<h2 id="part-4-t-sql-deep-dive">Part 4: T-SQL Deep Dive</h2>
<p>T-SQL (Transact-SQL) is Microsoft's extension of the SQL standard. As a .NET developer, even if you primarily use EF Core, you need to understand T-SQL for performance tuning, debugging, migrations, and anything that EF Core does not express cleanly.</p>
<h3 id="data-types-choosing-correctly">Data Types — Choosing Correctly</h3>
<p>Use the narrowest appropriate data type. <code>INT</code> when you need 4 bytes, <code>BIGINT</code> when you need 8, <code>SMALLINT</code> or <code>TINYINT</code> when values fit. For monetary values, use <code>DECIMAL(19,4)</code> or <code>MONEY</code> — never <code>FLOAT</code> or <code>REAL</code>, which have floating-point precision issues. For dates, use <code>DATE</code> if you only need the date, <code>DATETIME2(0)</code> through <code>DATETIME2(7)</code> for date and time (with 0 to 7 fractional second digits), and <code>DATETIMEOFFSET</code> when you need timezone awareness. Avoid <code>DATETIME</code> for new development — it has only 3.33ms precision and wastes storage compared to <code>DATETIME2</code>.</p>
<p>For string columns, prefer <code>NVARCHAR</code> for user-facing text that may include international characters, and <code>VARCHAR</code> for ASCII-only data or when you use a UTF-8 collation (available since SQL Server 2019). Always specify a length — <code>NVARCHAR(100)</code> not <code>NVARCHAR(MAX)</code> — unless you truly need more than 4,000 characters. <code>MAX</code> columns cannot be part of an index key and have different storage behavior.</p>
<p>For SQL Server 2025, the new <code>JSON</code> data type stores JSON more efficiently than <code>NVARCHAR(MAX)</code>. The <code>VECTOR</code> data type stores embedding vectors for AI/ML workloads.</p>
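<p>Putting these guidelines together, here is a sketch of a well-typed table (the table and column choices are illustrative):</p>
<pre><code class="language-sql">CREATE TABLE dbo.Invoices
(
    InvoiceID     BIGINT IDENTITY(1,1) PRIMARY KEY,  -- narrow, sequential key
    CustomerName  NVARCHAR(200)  NOT NULL,           -- user-facing, international text
    CountryCode   CHAR(2)        NOT NULL,           -- fixed-width ASCII code
    Amount        DECIMAL(19,4)  NOT NULL,           -- exact money, never FLOAT
    InvoiceDate   DATE           NOT NULL,           -- date only, no time component
    CreatedAt     DATETIME2(3)   NOT NULL
        CONSTRAINT DF_Invoices_CreatedAt DEFAULT SYSUTCDATETIME()
);
</code></pre>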
<h3 id="common-table-expressions-ctes">Common Table Expressions (CTEs)</h3>
<p>CTEs make complex queries readable:</p>
<pre><code class="language-sql">WITH ActiveCustomers AS (
    SELECT CustomerID, Name, Email
    FROM Customers
    WHERE IsActive = 1
      AND LastOrderDate &gt; DATEADD(MONTH, -6, GETDATE())
),
OrderTotals AS (
    SELECT CustomerID, SUM(TotalAmount) AS LifetimeValue
    FROM Orders
    GROUP BY CustomerID
)
SELECT ac.Name, ac.Email, ot.LifetimeValue
FROM ActiveCustomers ac
JOIN OrderTotals ot ON ac.CustomerID = ot.CustomerID
WHERE ot.LifetimeValue &gt; 1000
ORDER BY ot.LifetimeValue DESC;
</code></pre>
<p>Recursive CTEs are indispensable for hierarchical data:</p>
<pre><code class="language-sql">WITH OrgChart AS (
    -- Anchor: top-level managers
    SELECT EmployeeID, Name, ManagerID, 0 AS Level
    FROM Employees
    WHERE ManagerID IS NULL

    UNION ALL

    -- Recursive: subordinates
    SELECT e.EmployeeID, e.Name, e.ManagerID, oc.Level + 1
    FROM Employees e
    JOIN OrgChart oc ON e.ManagerID = oc.EmployeeID
)
SELECT * FROM OrgChart
ORDER BY Level, Name
OPTION (MAXRECURSION 100);
</code></pre>
<h3 id="window-functions">Window Functions</h3>
<p>Window functions compute values across a set of rows related to the current row without collapsing the result set:</p>
<pre><code class="language-sql">SELECT
    OrderID,
    CustomerID,
    OrderDate,
    TotalAmount,
    SUM(TotalAmount) OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS RunningTotal,
    ROW_NUMBER() OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate DESC
    ) AS RecentOrderRank,
    LAG(TotalAmount) OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate
    ) AS PreviousOrderAmount,
    LEAD(TotalAmount) OVER (
        PARTITION BY CustomerID
        ORDER BY OrderDate
    ) AS NextOrderAmount
FROM Orders;
</code></pre>
<p>The <code>ROWS BETWEEN</code> clause controls the window frame. <code>RANGE BETWEEN</code> is subtly different — it treats ties as part of the same frame. In SQL Server 2022 and later, the <code>WINDOW</code> clause lets you define named window specifications and reuse them:</p>
<pre><code class="language-sql">SELECT
    OrderID,
    CustomerID,
    SUM(TotalAmount) OVER w AS RunningTotal,
    AVG(TotalAmount) OVER w AS RunningAvg
FROM Orders
WINDOW w AS (
    PARTITION BY CustomerID
    ORDER BY OrderDate
    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
);
</code></pre>
<h3 id="merge-statement">MERGE Statement</h3>
<p><code>MERGE</code> performs insert, update, and delete in a single atomic statement based on a source/target comparison:</p>
<pre><code class="language-sql">MERGE INTO Products AS target
USING StagingProducts AS source
ON target.SKU = source.SKU
WHEN MATCHED AND target.Price &lt;&gt; source.Price THEN
    UPDATE SET target.Price = source.Price, target.UpdatedAt = GETUTCDATE()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (SKU, Name, Price, CreatedAt)
    VALUES (source.SKU, source.Name, source.Price, GETUTCDATE())
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
</code></pre>
<p>Always include the semicolon after <code>MERGE</code> — it is one of the few T-SQL statements that requires a terminating semicolon.</p>
<h3 id="error-handling">Error Handling</h3>
<p>Use <code>TRY...CATCH</code> blocks:</p>
<pre><code class="language-sql">BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE Accounts SET Balance = Balance - 500 WHERE AccountID = 1;
    UPDATE Accounts SET Balance = Balance + 500 WHERE AccountID = 2;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT &gt; 0
        ROLLBACK TRANSACTION;

    DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE();
    DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
    DECLARE @ErrorState INT = ERROR_STATE();
    DECLARE @ErrorLine INT = ERROR_LINE();
    DECLARE @ErrorProcedure NVARCHAR(200) = ERROR_PROCEDURE();

    -- Log the error
    INSERT INTO ErrorLog (Message, Severity, State, Line, [Procedure], OccurredAt)
    VALUES (@ErrorMessage, @ErrorSeverity, @ErrorState, @ErrorLine, @ErrorProcedure, GETUTCDATE());

    -- Re-raise
    THROW;
END CATCH;
</code></pre>
<p><code>THROW</code> (introduced in SQL Server 2012) is preferred over <code>RAISERROR</code> for re-raising errors because it preserves the original error number, severity, and state. Use <code>RAISERROR</code> when you need to raise a custom error with a specific severity level.</p>
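<p>A brief sketch of the difference (the message text and error number are illustrative):</p>
<pre><code class="language-sql">-- RAISERROR: raise a custom error with printf-style substitution,
-- an explicit severity (16), and a state (1)
RAISERROR (N'Inventory for product %d is below zero.', 16, 1, 42);

-- THROW with arguments: the custom error number must be &gt;= 50000
THROW 50001, N'Inventory is below zero.', 1;

-- THROW with no arguments is only valid inside a CATCH block,
-- where it re-raises the original error unchanged.
</code></pre>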
<h3 id="string-functions-old-and-new">String Functions — Old and New</h3>
<p>SQL Server 2022 and 2025 added string functions that developers had been requesting for years:</p>
<pre><code class="language-sql">-- TRIM (SQL Server 2017+)
SELECT TRIM('   hello   ');  -- 'hello'
SELECT TRIM('xy' FROM 'xyhelloyx');  -- 'hello' (SQL Server 2022+)

-- STRING_AGG (SQL Server 2017+)
SELECT DepartmentID, STRING_AGG(Name, ', ') AS Employees
FROM Employees
GROUP BY DepartmentID;

-- STRING_SPLIT with ordinal (SQL Server 2022+)
SELECT value, ordinal
FROM STRING_SPLIT('a,b,c', ',', 1);

-- GREATEST and LEAST (SQL Server 2022+)
SELECT GREATEST(10, 20, 5);   -- 20
SELECT LEAST(10, 20, 5);      -- 5

-- DATETRUNC (SQL Server 2022+)
SELECT DATETRUNC(MONTH, GETDATE());  -- First day of current month

-- GENERATE_SERIES (SQL Server 2022+)
SELECT value FROM GENERATE_SERIES(1, 10);
SELECT value FROM GENERATE_SERIES(1, 100, 5);  -- Step by 5
</code></pre>
<p>In SQL Server 2025, the <code>REGEXP</code> family of functions (<code>REGEXP_LIKE</code>, <code>REGEXP_COUNT</code>, <code>REGEXP_REPLACE</code>, and others) brings true regular expression matching without CLR:</p>
<pre><code class="language-sql">-- SQL Server 2025
SELECT *
FROM Customers
WHERE REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$');
</code></pre>
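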
<hr />
<h2 id="part-5-transactions-understanding-the-fundamentals">Part 5: Transactions — Understanding the Fundamentals</h2>
<p>Transactions are the mechanism that ensures data integrity. Every .NET developer must understand them.</p>
<h3 id="acid-properties">ACID Properties</h3>
<p><strong>Atomicity</strong>: All statements in a transaction succeed or all are rolled back. There is no partial commit. <strong>Consistency</strong>: The database moves from one valid state to another. Constraints, triggers, and cascades are enforced. <strong>Isolation</strong>: Concurrent transactions do not interfere with each other (the degree depends on the isolation level). <strong>Durability</strong>: Once committed, the data survives a crash — it is written to the transaction log on disk before the commit completes.</p>
<h3 id="implicit-vs.explicit-transactions">Implicit vs. Explicit Transactions</h3>
<p>By default, SQL Server operates in auto-commit mode: each individual statement is its own transaction. When you run <code>UPDATE Customers SET Name = 'Alice' WHERE CustomerID = 1</code>, SQL Server implicitly wraps it in a transaction, executes it, and commits. If the statement fails, it is automatically rolled back.</p>
<p>Explicit transactions use <code>BEGIN TRANSACTION</code>, <code>COMMIT</code>, and <code>ROLLBACK</code>:</p>
<pre><code class="language-sql">BEGIN TRANSACTION;

UPDATE Inventory SET Quantity = Quantity - 1 WHERE ProductID = 42;
INSERT INTO OrderItems (OrderID, ProductID, Quantity) VALUES (100, 42, 1);

COMMIT TRANSACTION;
</code></pre>
<p>If any statement between <code>BEGIN</code> and <code>COMMIT</code> fails and you do not catch it, the transaction remains open. Always use <code>TRY...CATCH</code> with explicit transactions, and always check <code>@@TRANCOUNT</code> in the <code>CATCH</code> block.</p>
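<p>A common belt-and-braces pattern combines <code>TRY...CATCH</code> with <code>SET XACT_ABORT ON</code>, which makes most runtime errors doom the transaction so it cannot be left half-open. A sketch reusing the inventory example:</p>
<pre><code class="language-sql">SET XACT_ABORT ON;  -- any runtime error forces a rollback of the transaction

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE Inventory SET Quantity = Quantity - 1 WHERE ProductID = 42;
    INSERT INTO OrderItems (OrderID, ProductID, Quantity) VALUES (100, 42, 1);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT &gt; 0
        ROLLBACK TRANSACTION;
    THROW;
END CATCH;
</code></pre>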
<h3 id="save-points">Save Points</h3>
<p>Within a transaction, you can set save points to enable partial rollback:</p>
<pre><code class="language-sql">BEGIN TRANSACTION;

INSERT INTO Orders (CustomerID, OrderDate) VALUES (1, GETDATE());
SAVE TRANSACTION AfterOrderInsert;

BEGIN TRY
    INSERT INTO OrderItems (OrderID, ProductID, Quantity) VALUES (SCOPE_IDENTITY(), 99, 1);
END TRY
BEGIN CATCH
    -- Roll back only the failed insert, not the entire transaction
    ROLLBACK TRANSACTION AfterOrderInsert;
END CATCH;

COMMIT TRANSACTION;
</code></pre>
<h3 id="transaction-isolation-levels">Transaction Isolation Levels</h3>
<p>This is where many bugs live. The isolation level controls what concurrent transactions can see.</p>
<p><strong>READ UNCOMMITTED</strong>: The transaction can read data modified by other uncommitted transactions (dirty reads). This is the least restrictive level. Useful for rough estimates on data that is not critical.</p>
<p><strong>READ COMMITTED</strong> (default): The transaction can only read data that has been committed. However, if you read the same row twice, it might have changed between reads (non-repeatable reads), and new rows matching your WHERE clause might appear (phantom reads).</p>
<p><strong>REPEATABLE READ</strong>: Once a row is read, it cannot be modified by another transaction until the current transaction ends. This prevents non-repeatable reads but not phantom reads.</p>
<p><strong>SERIALIZABLE</strong>: The most restrictive level. Range locks are placed on the data, preventing other transactions from inserting rows that would match the current transaction's WHERE clauses. This prevents dirty reads, non-repeatable reads, and phantom reads, but it causes the most blocking and the highest risk of deadlocks.</p>
<p><strong>SNAPSHOT</strong>: Uses row versioning. When the transaction starts, it gets a consistent snapshot of the database as of that point in time. It can read without acquiring shared locks, so readers do not block writers and writers do not block readers. However, if the transaction tries to modify a row that has been modified by another transaction since the snapshot was taken, it gets an update conflict error.</p>
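<p>SNAPSHOT isolation must be enabled at the database level before a session can request it (the database name is a placeholder):</p>
<pre><code class="language-sql">ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then, in a session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM Orders;  -- reads a consistent snapshot, no shared locks
COMMIT TRANSACTION;
</code></pre>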
<p><strong>READ COMMITTED SNAPSHOT ISOLATION (RCSI)</strong>: A database-level option that changes the behavior of READ COMMITTED to use row versioning instead of shared locks. Readers get a snapshot as of the start of each individual statement (not the start of the transaction). This is the default behavior for Azure SQL Database and is strongly recommended for most OLTP workloads.</p>
<p>To enable RCSI:</p>
<pre><code class="language-sql">ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
</code></pre>
<p>This requires exclusive access to the database (no other connections). For production databases, coordinate a brief maintenance window.</p>
<h3 id="transaction-best-practices">Transaction Best Practices</h3>
<p>Keep transactions short. Every lock held by a transaction blocks other transactions. A transaction that holds locks for 30 seconds while calling an external API is a production incident waiting to happen. Do your external calls, computations, and validations outside the transaction, then enter the transaction only for the database writes.</p>
<p>Always set a transaction timeout in your application code:</p>
<pre><code class="language-csharp">using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();
using var transaction = await connection.BeginTransactionAsync();
// SqlCommand.CommandTimeout = 30 seconds by default
</code></pre>
<p>In EF Core:</p>
<pre><code class="language-csharp">using var transaction = await dbContext.Database.BeginTransactionAsync();
try
{
    // ... operations
    await dbContext.SaveChangesAsync();
    await transaction.CommitAsync();
}
catch
{
    await transaction.RollbackAsync();
    throw;
}
</code></pre>
<hr />
<h2 id="part-6-locking-blocking-and-deadlocks">Part 6: Locking, Blocking, and Deadlocks</h2>
<h3 id="how-locking-works">How Locking Works</h3>
<p>SQL Server uses a multi-granularity locking system. Locks can be acquired at the row level, page level (8 KB), extent level (64 KB, 8 pages), table level, or database level. The engine starts with the finest granularity appropriate for the operation and may escalate to a coarser level if too many fine-grained locks are held (by default, escalation occurs at approximately 5,000 locks on a single table).</p>
<p>The main lock modes are: Shared (S) for reads, Exclusive (X) for writes, Update (U) for update operations (a transitional lock that converts to X when the actual modification happens), Intent locks (IS, IX, IU) that signal to higher-granularity lock checks that a finer-grained lock exists, and Schema locks (Sch-S and Sch-M) for DDL operations.</p>
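<p>You can observe these lock modes in action by querying <code>sys.dm_tran_locks</code> for the current session (for example, from inside an open transaction that has updated some rows):</p>
<pre><code class="language-sql">SELECT
    resource_type,                  -- OBJECT, PAGE, KEY (row), etc.
    request_mode,                   -- S, X, U, IS, IX, ...
    request_status,                 -- GRANT or WAIT
    resource_associated_entity_id
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
</code></pre>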
<h3 id="the-nolock-debate-should-you-use-it">The NOLOCK Debate — Should You Use It?</h3>
<p><code>WITH (NOLOCK)</code> — equivalent to <code>SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED</code> for that table reference — is one of the most controversial hints in SQL Server.</p>
<p><strong>What NOLOCK does</strong>: It tells SQL Server to read data without acquiring shared locks, and to ignore exclusive locks held by other transactions. This means the query will never be blocked by a writer and will never block a writer.</p>
<p><strong>What can go wrong</strong>: Dirty reads (reading data from an uncommitted transaction that may later be rolled back — you would be working with data that never actually existed). Skipped rows or duplicate rows (if a page split occurs during an allocation order scan, the scan can miss rows that moved or encounter the same row twice). Errors (reading a page that is in the middle of being updated can cause incorrect column values or even errors).</p>
<p><strong>Development environment</strong>: Using NOLOCK is generally acceptable during development for ad hoc queries where you want quick answers and do not care about perfect accuracy. Running <code>SELECT COUNT(*) FROM LargeTable WITH (NOLOCK)</code> to get a rough row count is fine.</p>
<p><strong>Production reads</strong>: The answer depends on your workload. For a reporting query against a large table where an approximate result is acceptable and blocking readers would impact OLTP throughput, NOLOCK may be a pragmatic choice. But the better answer for most OLTP workloads is to enable Read Committed Snapshot Isolation (RCSI) at the database level. RCSI gives you non-blocking reads with transactional consistency — no dirty reads, no skipped or duplicate rows, no page-split anomalies. It costs some tempdb I/O for the version store, but this is almost always a good tradeoff.</p>
<p><strong>Production writes</strong>: Never use NOLOCK on the target of an UPDATE or DELETE. It does not apply there anyway — write operations always acquire exclusive locks.</p>
<p><strong>Recommendation</strong>: Enable RCSI on your databases and stop using NOLOCK. If you need historical consistency across multiple statements, use SNAPSHOT isolation.</p>
<h3 id="diagnosing-blocking">Diagnosing Blocking</h3>
<p>When queries hang, check for blocking:</p>
<pre><code class="language-sql">-- Who is blocking whom?
SELECT
    blocking.session_id AS BlockingSessionID,
    blocked.session_id AS BlockedSessionID,
    blocked.wait_type,
    blocked.wait_time / 1000 AS WaitSeconds,
    blocked_sql.text AS BlockedQuery,
    blocking_sql.text AS BlockingQuery
FROM sys.dm_exec_requests blocked
JOIN sys.dm_exec_sessions blocking
    ON blocked.blocking_session_id = blocking.session_id
CROSS APPLY sys.dm_exec_sql_text(blocked.sql_handle) blocked_sql
OUTER APPLY sys.dm_exec_sql_text(blocking.most_recent_sql_handle) blocking_sql
WHERE blocked.blocking_session_id &lt;&gt; 0;
</code></pre>
<h3 id="deadlocks">Deadlocks</h3>
<p>A deadlock occurs when two or more transactions each hold a lock that the other needs. SQL Server automatically detects deadlocks (via the lock monitor thread, which runs every 5 seconds by default) and kills one of the transactions (the deadlock victim, chosen based on cost to roll back).</p>
<p>To minimize deadlocks: access tables in the same order in all transactions, keep transactions short, use the lowest necessary isolation level, and avoid user interaction mid-transaction. If deadlocks persist, use the deadlock graph (from Extended Events or the <code>system_health</code> session) to identify the specific resources and queries involved, then redesign the access patterns.</p>
<p>In your .NET code, always handle deadlocks with a retry loop:</p>
<pre><code class="language-csharp">const int maxRetries = 3;
for (int attempt = 1; attempt &lt;= maxRetries; attempt++)
{
    try
    {
        await ExecuteTransactionAsync();
        return;
    }
    catch (SqlException ex) when (ex.Number == 1205) // Deadlock victim
    {
        if (attempt == maxRetries) throw;
        await Task.Delay(TimeSpan.FromMilliseconds(100 * attempt));
    }
}
</code></pre>
<h3 id="sql-server-2025-optimized-locking">SQL Server 2025 Optimized Locking</h3>
<p>SQL Server 2025 introduces optimized locking, which combines two mechanisms: Transaction ID (TID) locking and lock after qualification (LAQ). Instead of holding an individual lock on every modified row for the duration of the transaction, the engine takes a single lock on the transaction ID and releases row and page locks as soon as each row has been qualified and modified. This reduces lock memory consumption and contention, particularly for high-concurrency workloads. The behavior is automatic on SQL Server 2025 — you do not need to change queries or hints.</p>
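<p>You can check whether optimized locking is active for a database. The property below is documented for Azure SQL Database, where the feature first shipped; it should behave the same on SQL Server 2025, but verify against your build:</p>
<pre><code class="language-sql">SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsOptimizedLockingOn') AS IsOptimizedLockingOn;
</code></pre>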
<hr />
<h2 id="part-7-indexing-best-practices">Part 7: Indexing Best Practices</h2>
<h3 id="clustered-index">Clustered Index</h3>
<p>Every table should have a clustered index. The clustered index defines the physical order of data on disk. For most tables, the primary key — typically an <code>INT IDENTITY</code> or <code>BIGINT IDENTITY</code> — is the clustered index. This gives you sequential inserts (minimizing page splits), narrow keys (4 or 8 bytes — important because every nonclustered index carries a copy of the clustered index key), and unique values.</p>
<p>Using a <code>GUID</code> (<code>UNIQUEIDENTIFIER</code>) as a clustered index key is almost always a mistake. <code>NEWID()</code> generates random values, causing random inserts across the entire B-tree, which leads to massive page splits, fragmentation, and terrible I/O performance. <code>NEWSEQUENTIALID()</code> mitigates this somewhat but is still 16 bytes wide. Use GUIDs as nonclustered index columns if you need them for distributed identity, but keep the clustered key narrow and sequential.</p>
<h3 id="nonclustered-indexes">Nonclustered Indexes</h3>
<p>Design nonclustered indexes based on your query patterns, not your table structure. The key columns should be the columns in your WHERE clause and JOIN conditions, ordered from most selective to least selective. Include columns (in the <code>INCLUDE</code> clause) for columns that are only in the SELECT list — this prevents key lookups.</p>
<pre><code class="language-sql">-- If your common query is:
SELECT OrderID, OrderDate, TotalAmount
FROM Orders
WHERE CustomerID = @CustID AND Status = 'Shipped'
ORDER BY OrderDate DESC;

-- Then create:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Status
ON Orders (CustomerID, Status)
INCLUDE (OrderDate, TotalAmount);
</code></pre>
<h3 id="filtered-indexes">Filtered Indexes</h3>
<p>If a column has a heavily skewed distribution (for example, 95% of rows have <code>Status = 'Completed'</code> and you only ever query for the other 5%), use a filtered index:</p>
<pre><code class="language-sql">CREATE NONCLUSTERED INDEX IX_Orders_Pending
ON Orders (CustomerID, OrderDate)
INCLUDE (TotalAmount)
WHERE Status IN ('Pending', 'Processing', 'Shipped');
</code></pre>
<p>This index is smaller, faster to maintain, and uses less memory.</p>
<h3 id="columnstore-indexes">Columnstore Indexes</h3>
<p>For analytical queries that scan large portions of a table, columnstore indexes provide order-of-magnitude performance improvements. They store data in a columnar format and use batch mode processing. You can add a nonclustered columnstore index alongside your rowstore indexes:</p>
<pre><code class="language-sql">CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders_Analytics
ON Orders (CustomerID, OrderDate, TotalAmount, Status);
</code></pre>
<h3 id="missing-index-dmvs">Missing Index DMVs</h3>
<p>SQL Server tracks queries that could benefit from an index:</p>
<pre><code class="language-sql">SELECT
    mig.index_group_handle,
    mid.statement AS TableName,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.unique_compiles,
    migs.user_seeks,
    migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) AS ImprovementScore
FROM sys.dm_db_missing_index_groups mig
JOIN sys.dm_db_missing_index_group_stats migs ON mig.index_group_handle = migs.group_handle
JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
ORDER BY ImprovementScore DESC;
</code></pre>
<p>Do not blindly create every missing index — review them for overlap with existing indexes, consolidate where possible, and consider the write overhead of maintaining additional indexes.</p>
<hr />
<h2 id="part-8-networking-sessions-and-connection-management">Part 8: Networking, Sessions, and Connection Management</h2>
<h3 id="sql-server-network-configuration">SQL Server Network Configuration</h3>
<p>SQL Server listens on one or more network protocols: TCP/IP (the most common, default port 1433), Named Pipes (for local or intranet connections), and Shared Memory (local connections only). Configure these in SQL Server Configuration Manager.</p>
<p>For production, use TCP/IP exclusively. Ensure the firewall allows inbound connections on port 1433 (or your custom port). If you use a named instance, it uses a dynamic port assigned by the SQL Server Browser service (which listens on UDP 1434). For production named instances, assign a static port in Configuration Manager.</p>
<h3 id="connection-strings-from.net">Connection Strings from .NET</h3>
<p>A typical ASP.NET Core connection string:</p>
<pre><code>Server=myserver.database.windows.net;Database=MyApp;User Id=myuser;Password=mypassword;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
</code></pre>
<p>Key parameters to understand: <code>Encrypt=True</code> enables TLS encryption (mandatory for Azure SQL, strongly recommended for all production servers). <code>TrustServerCertificate=False</code> (the default) validates the server certificate — set this to <code>True</code> only for development with self-signed certificates. <code>Connection Timeout=30</code> is the maximum time, in seconds, to wait when establishing a connection, including waiting for a free connection from the pool. <code>Max Pool Size=100</code> (default) is the maximum number of connections in the pool. <code>MultipleActiveResultSets=True</code> allows multiple open readers on a single connection (required by some EF Core patterns, but adds overhead).</p>
<h3 id="connection-pooling">Connection Pooling</h3>
<p>ADO.NET (and by extension EF Core and Dapper) uses connection pooling by default. When you close a connection in code, it is returned to the pool — not actually closed. When you open a connection, the pool gives you an existing one if available. This is why it is critical to always dispose of <code>SqlConnection</code> objects promptly (use <code>using</code> statements).</p>
<p>If your application hits <code>Max Pool Size</code> and all connections are in use, the next <code>OpenAsync()</code> call will block until a connection is returned or the connection timeout expires, at which point you get an <code>InvalidOperationException</code> reporting that the timeout expired while obtaining a connection from the pool. This almost always means you have a connection leak — some code path is opening a connection without closing/disposing it.</p>
<p>Monitor pool usage:</p>
<pre><code class="language-sql">SELECT
    DB_NAME(dbid) AS DatabaseName,
    COUNT(*) AS ConnectionCount,
    loginame AS LoginName,
    hostname AS HostName,
    program_name AS Application
FROM sys.sysprocesses
GROUP BY dbid, loginame, hostname, program_name
ORDER BY ConnectionCount DESC;
</code></pre>
<p>Or with the modern DMV:</p>
<pre><code class="language-sql">SELECT
    s.session_id,
    s.login_name,
    s.host_name,
    s.program_name,
    c.connect_time,
    c.net_transport,
    c.protocol_type,
    c.encrypt_option,
    s.status,
    s.last_request_start_time,
    s.last_request_end_time,
    r.command,
    r.wait_type,
    r.blocking_session_id
FROM sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_connections c ON s.session_id = c.session_id
LEFT JOIN sys.dm_exec_requests r ON s.session_id = r.session_id
WHERE s.is_user_process = 1
ORDER BY s.last_request_start_time DESC;
</code></pre>
<h3 id="session-management">Session Management</h3>
<p>Every connection to SQL Server creates a session. Useful session-level settings:</p>
<pre><code class="language-sql">SET NOCOUNT ON;              -- Suppress &quot;N rows affected&quot; messages (reduces network traffic)
SET XACT_ABORT ON;           -- Auto-rollback the transaction on any error
SET ARITHABORT ON;           -- Required for indexed views and computed columns
SET ANSI_NULLS ON;           -- NULL comparisons follow ANSI standard
SET QUOTED_IDENTIFIER ON;    -- Double quotes delimit identifiers, not strings
</code></pre>
<p><code>SET XACT_ABORT ON</code> is particularly important. Without it, some errors (like constraint violations) leave the transaction open, and subsequent statements execute as if nothing happened. With <code>XACT_ABORT ON</code>, any error immediately rolls back the entire transaction. Always set this at the beginning of stored procedures.</p>
<h3 id="killing-sessions">Killing Sessions</h3>
<p>If a session is blocking others and needs to be terminated:</p>
<pre><code class="language-sql">KILL 52;  -- 52 is the session_id
</code></pre>
<p>Use this judiciously — killing a session that is mid-transaction causes a rollback, which can take time proportional to the work already done.</p>
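<p>To monitor the rollback of a killed session (or of one already rolling back), use the <code>STATUSONLY</code> variant, which reports progress without affecting the session:</p>
<pre><code class="language-sql">KILL 52 WITH STATUSONLY;  -- reports estimated rollback completion and time remaining
</code></pre>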
<hr />
<h2 id="part-9-debugging-production-issues">Part 9: Debugging Production Issues</h2>
<h3 id="dynamic-management-views-dmvs">Dynamic Management Views (DMVs)</h3>
<p>DMVs are your primary diagnostic tool for production SQL Server. They expose internal state without the overhead of profiling.</p>
<p><strong>Currently executing queries:</strong></p>
<pre><code class="language-sql">SELECT
    r.session_id,
    r.status,
    r.command,
    r.wait_type,
    r.wait_time,
    r.blocking_session_id,
    r.cpu_time,
    r.logical_reads,
    r.total_elapsed_time / 1000 AS ElapsedSeconds,
    SUBSTRING(t.text, r.statement_start_offset / 2 + 1,
        (CASE WHEN r.statement_end_offset = -1
            THEN LEN(CONVERT(NVARCHAR(MAX), t.text)) * 2
            ELSE r.statement_end_offset END - r.statement_start_offset) / 2 + 1
    ) AS CurrentStatement,
    p.query_plan
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) p
WHERE r.session_id &gt; 50  -- Exclude system sessions
ORDER BY r.total_elapsed_time DESC;
</code></pre>
<p><strong>Top queries by CPU (historical, from plan cache):</strong></p>
<pre><code class="language-sql">SELECT TOP 20
    qs.total_worker_time / qs.execution_count AS AvgCPU,
    qs.total_worker_time AS TotalCPU,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS AvgReads,
    SUBSTRING(t.text, qs.statement_start_offset / 2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
            THEN LEN(CONVERT(NVARCHAR(MAX), t.text)) * 2
            ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2 + 1
    ) AS QueryText
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) t
ORDER BY AvgCPU DESC;
</code></pre>
<p><strong>Wait statistics (what is the server waiting on?):</strong></p>
<pre><code class="language-sql">SELECT TOP 20
    wait_type,
    waiting_tasks_count,
    wait_time_ms / 1000 AS WaitSeconds,
    signal_wait_time_ms / 1000 AS SignalWaitSeconds,
    (wait_time_ms - signal_wait_time_ms) / 1000 AS ResourceWaitSeconds
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_EVENTHANDLER',
    'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT', 'LAZYWRITER_SLEEP',
    'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'XE_TIMER_EVENT',
    'BROKER_TO_FLUSH', 'BROKER_RECEIVE_WAITFOR', 'CHECKPOINT_QUEUE',
    'REQUEST_FOR_DEADLOCK_SEARCH', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
    'XE_DISPATCHER_WAIT', 'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
    'DIRTY_PAGE_POLL', 'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
    'SP_SERVER_DIAGNOSTICS_SLEEP'
)
AND waiting_tasks_count &gt; 0
ORDER BY wait_time_ms DESC;
</code></pre>
<p>Common wait types and what they mean: <code>PAGEIOLATCH_SH</code> (waiting for a data page to be read from disk — indicates memory pressure or slow I/O), <code>LCK_M_X</code> or <code>LCK_M_S</code> (waiting for a lock — blocking), <code>CXPACKET</code> or <code>CXCONSUMER</code> (parallelism waits — often normal, but excessive amounts may indicate skewed parallelism), <code>WRITELOG</code> (waiting for the transaction log to be written to disk — check log disk performance), <code>SOS_SCHEDULER_YIELD</code> (CPU pressure — the server needs more CPU or query tuning).</p>
<h3 id="index-fragmentation">Index Fragmentation</h3>
<p>Check fragmentation for a specific table:</p>
<pre><code class="language-sql">SELECT
    i.name AS IndexName,
    ps.avg_fragmentation_in_percent,
    ps.page_count,
    ps.record_count
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED'
) ps
JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE ps.page_count &gt; 1000  -- Only look at indexes with meaningful size
ORDER BY ps.avg_fragmentation_in_percent DESC;
</code></pre>
<p>Below 10% fragmentation: do nothing. Between 10% and 30%: reorganize (<code>ALTER INDEX ... REORGANIZE</code>). Above 30%: rebuild (<code>ALTER INDEX ... REBUILD</code>). Reorganize is an online, incremental operation. Rebuild is more thorough but takes a schema lock (unless you use <code>ONLINE = ON</code>, which requires Enterprise edition or SQL Server 2025 Standard).</p>
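<p>The two thresholds translate into two maintenance statements, shown here against the index created earlier (substitute your own index and table names):</p>
<pre><code class="language-sql">-- 10-30% fragmented: reorganize (online, incremental)
ALTER INDEX IX_Orders_CustomerID_Status ON dbo.Orders REORGANIZE;

-- Above 30%: rebuild (ONLINE = ON where the edition supports it)
ALTER INDEX IX_Orders_CustomerID_Status ON dbo.Orders
REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
</code></pre>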
<h3 id="tempdb-monitoring">tempdb Monitoring</h3>
<p>tempdb is a shared resource used for temporary tables, table variables, sort spill, hash join spill, version store (for RCSI and snapshot isolation), and internal engine operations. If tempdb runs out of space or has contention, everything on the server slows down.</p>
<pre><code class="language-sql">SELECT
    SUM(unallocated_extent_page_count) * 8 / 1024 AS FreeSpaceMB,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS InternalObjectsMB,
    SUM(user_object_reserved_page_count) * 8 / 1024 AS UserObjectsMB,
    SUM(version_store_reserved_page_count) * 8 / 1024 AS VersionStoreMB
FROM sys.dm_db_file_space_usage;
</code></pre>
<hr />
<h2 id="part-10-best-practices-checklist-for.net-developers">Part 10: Best Practices Checklist for .NET Developers</h2>
<h3 id="database-design">Database Design</h3>
<p>Always use schemas (<code>dbo</code>, <code>sales</code>, <code>hr</code>) to organize objects. Do not put everything in <code>dbo</code>. Use meaningful, consistent naming conventions — <code>PascalCase</code> for tables and columns is the most common in .NET shops. Every table gets a clustered primary key. Use foreign keys to enforce referential integrity — do not rely on application code alone. Add appropriate check constraints.</p>
<h3 id="stored-procedures-vs.inline-sql-vs.ef-core">Stored Procedures vs. Inline SQL vs. EF Core</h3>
<p>There is no universal answer. EF Core is excellent for CRUD operations, migrations, and applications where developer productivity matters most. Raw SQL (via Dapper or <code>SqlCommand</code>) is appropriate for complex queries, bulk operations, or performance-critical paths where you need full control over the T-SQL. Stored procedures are appropriate when you need to encapsulate complex business logic at the database layer, when security requirements mandate that the application cannot issue ad hoc SQL, or when you need to share logic across multiple applications.</p>
<p>If you use EF Core, always monitor the generated SQL using logging:</p>
<pre><code class="language-csharp">optionsBuilder.LogTo(Console.WriteLine, LogLevel.Information)
              .EnableSensitiveDataLogging();
</code></pre>
<p>Look for N+1 query patterns (a query for each item in a loop instead of a single query with <code>Include</code>), unnecessary columns being fetched (use <code>Select</code> projections), and queries that pull the entire table into memory instead of filtering at the database.</p>
<h3 id="connection-handling">Connection Handling</h3>
<p>Always use <code>using</code> statements or <code>await using</code> for connections, commands, and readers. Never hold a connection open across an HTTP request boundary (open late, close early). Do not increase <code>Max Pool Size</code> to mask a connection leak — find and fix the leak.</p>
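<p>As a concrete sketch of the open-late, close-early pattern (the method and query here are illustrative):</p>
<pre><code class="language-csharp">public async Task&lt;int&gt; GetOrderCountAsync(string connectionString, int customerId)
{
    // Open late: acquire the connection only when it is actually needed
    await using var connection = new SqlConnection(connectionString);
    await connection.OpenAsync();

    await using var cmd = connection.CreateCommand();
    cmd.CommandText = &quot;SELECT COUNT(*) FROM Orders WHERE CustomerID = @CustomerId&quot;;
    cmd.Parameters.Add(&quot;@CustomerId&quot;, SqlDbType.Int).Value = customerId;

    // Close early: disposal returns the connection to the pool immediately
    return Convert.ToInt32(await cmd.ExecuteScalarAsync());
}
</code></pre>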
<h3 id="parameterized-queries-always">Parameterized Queries — Always</h3>
<p>Never concatenate user input into SQL strings. Always use parameters:</p>
<pre><code class="language-csharp">// WRONG — SQL injection vulnerability
var sql = $&quot;SELECT * FROM Users WHERE Name = '{userName}'&quot;;

// RIGHT
var sql = &quot;SELECT * FROM Users WHERE Name = @Name&quot;;
cmd.Parameters.AddWithValue(&quot;@Name&quot;, userName);

// BETTER — explicit type
cmd.Parameters.Add(&quot;@Name&quot;, SqlDbType.NVarChar, 100).Value = userName;
</code></pre>
<p>EF Core handles parameterization automatically, but if you use <code>FromSqlRaw</code>, make sure to use parameter placeholders.</p>
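<p>For example (the <code>dbContext</code> and entity names are illustrative), both forms below send <code>userName</code> as a real parameter rather than splicing it into the SQL text:</p>
<pre><code class="language-csharp">// Positional placeholder: EF Core converts {0} into a SQL parameter
var users = await dbContext.Users
    .FromSqlRaw(&quot;SELECT * FROM Users WHERE Name = {0}&quot;, userName)
    .ToListAsync();

// FromSqlInterpolated turns each interpolation hole into a parameter
var users2 = await dbContext.Users
    .FromSqlInterpolated($&quot;SELECT * FROM Users WHERE Name = {userName}&quot;)
    .ToListAsync();
</code></pre>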
<h3 id="monitoring-and-alerting">Monitoring and Alerting</h3>
<p>Set up alerts for: long-running queries (over N seconds), deadlocks, tempdb space usage, log file growth, failed logins, and database integrity check failures (<code>DBCC CHECKDB</code>). Use SQL Server Agent alerts, Azure Monitor, or your preferred monitoring stack.</p>
<p>Run <code>DBCC CHECKDB</code> on a schedule. It detects physical and logical corruption. For large databases, run it weekly during a maintenance window. For critical databases, run it daily.</p>
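<p>A typical scheduled check might look like this (the database name is illustrative):</p>
<pre><code class="language-sql">-- NO_INFOMSGS limits output to actual errors; ALL_ERRORMSGS reports every one
DBCC CHECKDB (N'MyApp') WITH NO_INFOMSGS, ALL_ERRORMSGS;
</code></pre>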
<h3 id="backup-and-recovery">Backup and Recovery</h3>
<p>Test your backups by restoring them. A backup you have never tested is not a backup — it is a hope. Understand the difference between full backups, differential backups (changes since the last full backup), and transaction log backups (changes since the last log backup). For point-in-time recovery, you need the full recovery model and a chain of log backups.</p>
<p>In your .NET application, handle transient failures (network blips, failovers) with retry policies. The <code>Microsoft.Data.SqlClient</code> library supports configurable retry logic.</p>
<h3 id="statistics">Statistics</h3>
<p>SQL Server uses statistics (histograms of data distribution) to make query plan decisions. If statistics are stale, the optimizer makes bad choices. Auto-update statistics is enabled by default, but it triggers only after a significant share of rows has changed: historically about 20% of the table. Since SQL Server 2016 (database compatibility level 130), a dynamic threshold applies, so large tables refresh statistics after a much smaller percentage of changes; on older versions the same behavior required trace flag 2371.</p>
<p>For tables with skewed distributions or after large data loads, manually update statistics:</p>
<pre><code class="language-sql">UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
</code></pre>
<p>Or for all tables:</p>
<pre><code class="language-sql">EXEC sp_updatestats;
</code></pre>
<h3 id="maintenance-plans">Maintenance Plans</h3>
<p>Set up regular maintenance: index reorganize/rebuild (weekly), statistics update (daily or after large data changes), <code>DBCC CHECKDB</code> (weekly), and cleanup of old backup files, job history, and maintenance plan reports. Ola Hallengren's maintenance solution (free, open source) is the gold standard for automated index and statistics maintenance.</p>
<hr />
<h2 id="part-11-sql-server-from-c-practical-patterns">Part 11: SQL Server from C# — Practical Patterns</h2>
<h3 id="dapper-for-performance-critical-paths">Dapper for Performance-Critical Paths</h3>
<pre><code class="language-csharp">using Dapper;

await using var connection = new SqlConnection(connectionString);
var orders = await connection.QueryAsync&lt;Order&gt;(
    @&quot;SELECT OrderID, CustomerID, OrderDate, TotalAmount
      FROM Orders
      WHERE CustomerID = @CustomerId AND OrderDate &gt; @Since&quot;,
    new { CustomerId = 42, Since = DateTime.UtcNow.AddMonths(-6) }
);
</code></pre>
<h3 id="bulk-operations">Bulk Operations</h3>
<p>For inserting thousands of rows, do not use individual INSERT statements or even EF Core's <code>AddRange</code>. Use <code>SqlBulkCopy</code>:</p>
<pre><code class="language-csharp">using var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.TableLock, null);
bulkCopy.DestinationTableName = &quot;dbo.StagingOrders&quot;;
bulkCopy.BatchSize = 10000;
await bulkCopy.WriteToServerAsync(dataTable);
</code></pre>
<p>For EF Core 7+, the <code>ExecuteUpdate</code> and <code>ExecuteDelete</code> methods generate set-based UPDATE and DELETE statements, avoiding the per-row overhead:</p>
<pre><code class="language-csharp">await dbContext.Orders
    .Where(o =&gt; o.Status == &quot;Cancelled&quot; &amp;&amp; o.OrderDate &lt; cutoff)
    .ExecuteDeleteAsync();
</code></pre>
<h3 id="resilience-with-microsoft.data.sqlclient">Resilience with Microsoft.Data.SqlClient</h3>
<pre><code class="language-csharp">var options = new SqlRetryLogicOption
{
    NumberOfTries = 3,
    DeltaTime = TimeSpan.FromSeconds(1),
    MaxTimeInterval = TimeSpan.FromSeconds(20),
    TransientErrors = new[] { 1205, 49920, 49919 } // Deadlock, throttled, etc.
};
var retryProvider = SqlConfigurableRetryFactory.CreateExponentialRetryProvider(options);

using var connection = new SqlConnection(connectionString);
connection.RetryLogicProvider = retryProvider;
</code></pre>
<hr />
<h2 id="part-12-security-essentials">Part 12: Security Essentials</h2>
<h3 id="principle-of-least-privilege">Principle of Least Privilege</h3>
<p>Your application's database login should have only the permissions it needs. Create a dedicated login and database user:</p>
<pre><code class="language-sql">CREATE LOGIN AppUser WITH PASSWORD = 'StrongPassword123!';
CREATE USER AppUser FOR LOGIN AppUser;

-- Grant specific permissions
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO AppUser;
-- Or for stored procedures:
GRANT EXECUTE ON SCHEMA::dbo TO AppUser;
</code></pre>
<p>Never use <code>sa</code> or <code>db_owner</code> for application connections.</p>
<h3 id="always-encrypted">Always Encrypted</h3>
<p>For columns containing sensitive data (SSN, credit card numbers), use Always Encrypted. The encryption keys are managed by the client driver (your .NET application) and the database engine never sees the plaintext. Configure this through SSMS: right-click the database, choose Tasks &gt; Manage Always Encrypted Keys, then right-click the table and choose Encrypt Columns.</p>
<p>In your connection string, add <code>Column Encryption Setting=Enabled</code>.</p>
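<p>Extending the connection string from earlier, the result looks like:</p>
<pre><code>Server=myserver.database.windows.net;Database=MyApp;User Id=myuser;Password=mypassword;Encrypt=True;Column Encryption Setting=Enabled;
</code></pre>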
<h3 id="row-level-security">Row-Level Security</h3>
<p>Create a predicate function and a security policy to filter rows based on the current user:</p>
<pre><code class="language-sql">CREATE FUNCTION dbo.fn_TenantFilter(@TenantID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS Result
    WHERE @TenantID = CAST(SESSION_CONTEXT(N'TenantID') AS INT);

CREATE SECURITY POLICY dbo.TenantPolicy
ADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID) ON dbo.Orders;
</code></pre>
<p>In your .NET middleware, set the session context for each request:</p>
<pre><code class="language-csharp">await using var cmd = connection.CreateCommand();
cmd.CommandText = &quot;EXEC sp_set_session_context @key = N'TenantID', @value = @TenantID&quot;;
cmd.Parameters.AddWithValue(&quot;@TenantID&quot;, currentTenantId);
await cmd.ExecuteNonQueryAsync();
</code></pre>
<h3 id="transparent-data-encryption-tde">Transparent Data Encryption (TDE)</h3>
<p>TDE encrypts the database files at rest — the data files, log files, and backups are encrypted on disk. Assuming a database master key and server certificate already exist in the <code>master</code> database, enabling it takes two statements:</p>
<pre><code class="language-sql">CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyServerCert;

ALTER DATABASE MyDatabase SET ENCRYPTION ON;
</code></pre>
<p>This is transparent to the application — no code changes needed.</p>
<hr />
<h2 id="part-13-performance-tuning-workflow">Part 13: Performance Tuning Workflow</h2>
<p>When a query is slow, follow this systematic approach:</p>
<ol>
<li><strong>Get the actual execution plan</strong> (<code>Ctrl+M</code> in SSMS, then <code>F5</code>).</li>
<li><strong>Look at the actual vs. estimated rows</strong> for each operator. Large discrepancies indicate statistics problems.</li>
<li><strong>Identify the most expensive operators</strong> (the ones with the highest percentage cost).</li>
<li><strong>Check for Key Lookups</strong> — add INCLUDE columns to the relevant nonclustered index.</li>
<li><strong>Check for Table Scans on large tables</strong> — determine if an index would help.</li>
<li><strong>Check for implicit conversions</strong> — look for yellow warning triangles on operators. A common cause is comparing an <code>NVARCHAR</code> parameter against a <code>VARCHAR</code> column, which forces a scan because the engine must convert every row.</li>
<li><strong>Check wait statistics</strong> for the specific query — is it waiting on I/O, locks, memory, or CPU?</li>
<li><strong>Review the Query Store</strong> for plan regression — did this query used to be fast with a different plan?</li>
<li><strong>Update statistics</strong> with <code>FULLSCAN</code> if they appear stale.</li>
<li><strong>Consider rewriting the query</strong> — sometimes a different approach (replacing a correlated subquery with a JOIN, breaking a complex query into CTEs, or using <code>EXISTS</code> instead of <code>IN</code>) changes the plan dramatically.</li>
</ol>
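<p>Step 6 deserves a concrete sketch. Assuming a hypothetical <code>Customers</code> table with a <code>VARCHAR</code> <code>CustomerName</code> column and a nonclustered index on it:</p>
<pre><code class="language-sql">-- NVARCHAR parameter vs. VARCHAR column: the column is converted on every
-- row (NVARCHAR has higher type precedence), which forces a scan
DECLARE @Name NVARCHAR(100) = N'Contoso';
SELECT CustomerID FROM dbo.Customers WHERE CustomerName = @Name;

-- Matching the parameter type to the column type allows an index seek
DECLARE @Name2 VARCHAR(100) = 'Contoso';
SELECT CustomerID FROM dbo.Customers WHERE CustomerName = @Name2;
</code></pre>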
<hr />
<h2 id="part-14-sql-server-2025-ai-features-for.net-developers">Part 14: SQL Server 2025 AI Features for .NET Developers</h2>
<p>SQL Server 2025 brings AI capabilities that .NET developers can use directly from their existing codebase.</p>
<h3 id="vector-search">Vector Search</h3>
<p>Store and search embeddings directly in SQL Server:</p>
<pre><code class="language-sql">CREATE TABLE Documents (
    DocumentID INT IDENTITY PRIMARY KEY,
    Title NVARCHAR(200),
    Content NVARCHAR(MAX),
    Embedding VECTOR(1536)  -- 1536 dimensions, matching OpenAI ada-002
);

-- Find similar documents by cosine similarity
SELECT TOP 10
    DocumentID,
    Title,
    VECTOR_DISTANCE('cosine', Embedding, @QueryEmbedding) AS Distance
FROM Documents
ORDER BY VECTOR_DISTANCE('cosine', Embedding, @QueryEmbedding);
</code></pre>
<p>From C#, generate the embedding using an AI service (Azure OpenAI, for example), then pass it as a parameter.</p>
<h3 id="rest-endpoint-calls-from-t-sql">REST Endpoint Calls from T-SQL</h3>
<p>Call external APIs directly from the database:</p>
<pre><code class="language-sql">DECLARE @response NVARCHAR(MAX);
DECLARE @url NVARCHAR(4000) = 'https://api.example.com/enrich';

EXEC sp_invoke_external_rest_endpoint
    @url = @url,
    @method = 'POST',
    @payload = N'{&quot;customerId&quot;: 42}',
    @response = @response OUTPUT;
</code></pre>
<p>This enables scenarios like data enrichment, webhook notifications, and AI model inference directly from T-SQL stored procedures.</p>
<hr />
<h2 id="conclusion">Conclusion</h2>
<p>SQL Server is a deep, powerful, and continuously evolving database engine. As a .NET developer, your relationship with SQL Server goes far beyond writing LINQ queries. Understanding how the engine works — from locking and transactions to execution plans and indexing — makes you a dramatically more effective developer. It is the difference between guessing why something is slow and knowing.</p>
<p>SQL Server 2025 is the most capable release yet, with native JSON, vector search, REGEX, optimized locking, and AI integration. SSMS 22 gives you a modern, 64-bit environment with Copilot assistance and first-class support for all these new features. The go-sqlcmd tool makes command-line interactions seamless across Windows, macOS, and Linux.</p>
<p>Invest the time to learn these tools and concepts. Your future self — debugging a production issue at 2 AM or optimizing a critical query path — will thank you.</p>
]]></content:encoded>
      <category>sql-server</category>
      <category>dotnet</category>
      <category>database</category>
      <category>ssms</category>
      <category>t-sql</category>
      <category>best-practices</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>TypeScript: The Comprehensive Guide — From JavaScript's Quirks to the Go Rewrite</title>
      <link>https://observermagazine.github.io/blog/typescript-comprehensive-guide</link>
      <description>Everything a programmer should know about TypeScript — its history, what JavaScript gets wrong, what TypeScript fixes (and does not fix), every major feature from version 1.0 through 6.0, the complete tsconfig.json reference, the tooling ecosystem, and the historic Go rewrite coming in TypeScript 7.</description>
      <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/typescript-comprehensive-guide</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>TypeScript is a statically typed superset of JavaScript developed by Microsoft. Every valid JavaScript program is also a valid TypeScript program, but TypeScript adds optional type annotations, interfaces, generics, enums, and a rich compiler infrastructure that catches bugs before your code ever runs. Since its initial release in October 2012, TypeScript has grown from a niche experiment into the most popular language on GitHub, overtaking Python in August 2025 with 2.6 million monthly contributors.</p>
<p>This article is comprehensive by design. We will start with why TypeScript exists by examining the quirks and footguns of JavaScript that motivated its creation. We will walk through every major feature of the type system, explain every significant compiler option in <code>tsconfig.json</code>, trace the evolution of the language from version 1.0 through the just-released 6.0, and look ahead to the historic Go rewrite in TypeScript 7. Whether you are evaluating TypeScript for the first time, preparing to migrate a legacy codebase, or just want to understand the language at a deeper level, this guide is for you.</p>
<h2 id="part-1-why-typescript-exists-javascripts-quirks-and-footguns">Part 1: Why TypeScript Exists — JavaScript's Quirks and Footguns</h2>
<p>To understand TypeScript, you must first understand what JavaScript gets wrong. JavaScript was famously designed in ten days in 1995 by Brendan Eich at Netscape. It has evolved enormously since then, but many of its original design decisions remain baked into the language and cannot be changed without breaking the web.</p>
<h3 id="type-coercion">Type Coercion</h3>
<p>JavaScript is dynamically typed and performs implicit type coercion in ways that surprise almost everyone. When you use the <code>==</code> operator, JavaScript will attempt to convert both operands to the same type before comparing them. This produces results that are logically inconsistent:</p>
<pre><code class="language-javascript">&quot;&quot; == 0          // true
0 == &quot;0&quot;         // true
&quot;&quot; == &quot;0&quot;        // false — transitivity violated

[] == false      // true
[] == ![]        // true — an array equals not-itself

null == undefined // true
null == 0         // false
null == &quot;&quot;        // false
</code></pre>
<p>The <code>+</code> operator is particularly treacherous because it serves double duty as both addition and string concatenation:</p>
<pre><code class="language-javascript">1 + &quot;2&quot;          // &quot;12&quot; — string concatenation
1 - &quot;2&quot;          // -1   — numeric subtraction
&quot;5&quot; - 3          // 2    — numeric subtraction
&quot;5&quot; + 3          // &quot;53&quot; — string concatenation
</code></pre>
<p><strong>What TypeScript does:</strong> TypeScript's type system catches many of these issues at compile time. If you declare a variable as <code>number</code>, the compiler will not let you accidentally concatenate it with a string without an explicit conversion. However, TypeScript does not change JavaScript's runtime behavior. If your types are wrong (because you used <code>any</code> or a type assertion), the coercion still happens at runtime.</p>
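<p>To see that escape hatch in action, here is a small example: the assertion through <code>any</code> satisfies the compiler, but the runtime value is still a string, so JavaScript's coercion rules apply unchanged:</p>
<pre><code class="language-typescript">// A lie to the compiler: the declared type is number,
// but the runtime value is the string '1'
const n: number = ('1' as any);

// The checker sees number + number; the runtime does string concatenation
const result = n + 1;
console.log(result); // '11', not 2
</code></pre>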
<h3 id="the-this-keyword">The <code>this</code> Keyword</h3>
<p>In most object-oriented languages, <code>this</code> always refers to the current instance. In JavaScript, <code>this</code> depends on how a function is called, not where it is defined:</p>
<pre><code class="language-javascript">const obj = {
  name: &quot;Alice&quot;,
  greet() {
    console.log(this.name);
  }
};

obj.greet();          // &quot;Alice&quot;
const fn = obj.greet;
fn();                 // undefined — `this` is now the global object (or undefined in strict mode)

setTimeout(obj.greet, 100); // undefined — same problem
</code></pre>
<p>This is one of the most common sources of bugs in JavaScript, especially in event handlers and callbacks.</p>
<p><strong>What TypeScript does:</strong> TypeScript introduced the <code>this</code> parameter syntax, allowing you to explicitly annotate what <code>this</code> should be inside a function. The compiler will then enforce it:</p>
<pre><code class="language-typescript">interface Obj {
  name: string;
  greet(this: Obj): void;
}
</code></pre>
<p>Arrow functions also help because they lexically capture <code>this</code> from the enclosing scope — and TypeScript understands this.</p>
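<p>A sketch of the arrow-function fix, using a class property (a common pattern in event-handler code):</p>
<pre><code class="language-typescript">class Counter {
  count = 0;

  // the arrow function captures `this` lexically at construction,
  // so the method survives being detached and passed as a callback
  increment = (): void =&gt; {
    this.count++;
  };
}

const counter = new Counter();
const detached = counter.increment;
detached();                 // no lost `this` — still updates counter
console.log(counter.count); // 1
</code></pre>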
<h3 id="null-and-undefined"><code>null</code> and <code>undefined</code></h3>
<p>JavaScript has two &quot;nothing&quot; values: <code>null</code> and <code>undefined</code>. They are subtly different: <code>undefined</code> is the default value for uninitialized variables and missing function parameters, while <code>null</code> is an explicit assignment. Yet both are treated as falsy, and <code>typeof null</code> returns <code>&quot;object&quot;</code> (a famous bug from the original implementation that can never be fixed).</p>
<pre><code class="language-javascript">typeof undefined  // &quot;undefined&quot;
typeof null       // &quot;object&quot; — a bug since 1995

let x;
console.log(x);  // undefined
x = null;
console.log(x);  // null
</code></pre>
<p><strong>What TypeScript does:</strong> With the <code>strictNullChecks</code> compiler option (enabled by <code>strict: true</code>), TypeScript treats <code>null</code> and <code>undefined</code> as distinct types that are not assignable to other types. This forces you to explicitly check for null before using a value, which eliminates an entire class of runtime errors.</p>
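<p>A small example of the check the compiler forces (the greeting strings are arbitrary):</p>
<pre><code class="language-typescript">function greet(name: string | null): string {
  // under strictNullChecks, calling name.toUpperCase() without this
  // guard is a compile error: 'name' is possibly 'null'
  if (name === null) {
    return &quot;Hello, stranger&quot;;
  }
  return `Hello, ${name.toUpperCase()}`;
}

console.log(greet(null));  // &quot;Hello, stranger&quot;
console.log(greet(&quot;ada&quot;)); // &quot;Hello, ADA&quot;
</code></pre>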
<h3 id="prototypal-inheritance">Prototypal Inheritance</h3>
<p>JavaScript uses prototypal inheritance rather than classical inheritance. Every object has an internal <code>[[Prototype]]</code> link to another object. The <code>class</code> keyword (introduced in ES2015) is syntactic sugar over this prototype chain. This leads to confusing behavior:</p>
<pre><code class="language-javascript">function Dog(name) {
  this.name = name;
}
Dog.prototype.speak = function() {
  return this.name + &quot; barks&quot;;
};

const d = new Dog(&quot;Rex&quot;);
d.speak();        // &quot;Rex barks&quot;
Dog.speak();      // TypeError — speak is on the prototype, not the constructor
</code></pre>
<p><strong>What TypeScript does:</strong> TypeScript fully supports the <code>class</code> syntax with compile-time enforcement of access modifiers (<code>public</code>, <code>private</code>, <code>protected</code>), abstract classes, and interface implementation. The class is still compiled to prototype-based JavaScript, but the type checker ensures correctness at development time.</p>
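<p>A brief sketch of the same <code>Dog</code> example as a TypeScript class — note that the access modifier exists only at compile time:</p>
<pre><code class="language-typescript">class Dog {
  // the parameter property declares and assigns `name` in one step;
  // `private` is erased on compilation — the emitted JavaScript is
  // still an ordinary prototype-based constructor
  constructor(private name: string) {}

  speak(): string {
    return `${this.name} barks`;
  }
}

const rex = new Dog(&quot;Rex&quot;);
console.log(rex.speak()); // &quot;Rex barks&quot;
// rex.name;              // compile error — 'name' is private
</code></pre>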
<h3 id="equality-and-comparisons">Equality and Comparisons</h3>
<p>JavaScript has two equality operators: <code>==</code> (abstract equality, with coercion) and <code>===</code> (strict equality, no coercion). Virtually every style guide recommends using <code>===</code> exclusively, but <code>==</code> still exists and is still used.</p>
<pre><code class="language-javascript">0 === &quot;&quot;          // false — different types
0 == &quot;&quot;           // true  — coercion

NaN === NaN       // false — NaN is not equal to itself
NaN == NaN        // false — still not equal
</code></pre>
<p><strong>What TypeScript does:</strong> TypeScript does not prevent you from using <code>==</code>, but many TypeScript-adjacent linters (ESLint with <code>@typescript-eslint</code>) can enforce <code>===</code>. The type system helps by flagging comparisons between incompatible types.</p>
<h3 id="floating-point-arithmetic">Floating Point Arithmetic</h3>
<p>JavaScript's core <code>number</code> type is IEEE 754 double-precision floating point. There is no separate integer type and no built-in decimal type like BigDecimal. This leads to the classic:</p>
<pre><code class="language-javascript">0.1 + 0.2        // 0.30000000000000004
0.1 + 0.2 === 0.3 // false
</code></pre>
<p><strong>What TypeScript does:</strong> TypeScript does not fix this. The <code>number</code> type is still a 64-bit float. However, TypeScript does support the <code>bigint</code> type (introduced in ES2020 and TypeScript 3.2), which provides arbitrary-precision integers. For decimal arithmetic, you still need a library.</p>
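<p>A quick illustration of where <code>number</code> loses exactness and <code>bigint</code> does not:</p>
<pre><code class="language-typescript">// number cannot distinguish integers past 2^53
console.log(2 ** 53 + 1 === 2 ** 53);      // true — precision lost

// bigint stays exact at any size
console.log(2n ** 53n + 1n === 2n ** 53n); // false — values differ
</code></pre>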
<h3 id="variable-hoisting-and-scoping">Variable Hoisting and Scoping</h3>
<p>Before ES2015, JavaScript only had function-scoped variables declared with <code>var</code>. Variables declared with <code>var</code> are &quot;hoisted&quot; to the top of their function, which means they exist before the line where they are declared:</p>
<pre><code class="language-javascript">console.log(x);  // undefined — not a ReferenceError!
var x = 5;

for (var i = 0; i &lt; 3; i++) {
  setTimeout(() =&gt; console.log(i), 100);
}
// prints 3, 3, 3 — not 0, 1, 2
</code></pre>
<p>ES2015 introduced <code>let</code> and <code>const</code> with block scoping, which fixes most of these issues.</p>
<p><strong>What TypeScript does:</strong> TypeScript supports <code>let</code> and <code>const</code> (and always has). When targeting older JavaScript versions, the compiler can down-level <code>let</code> and <code>const</code> to <code>var</code> with appropriate transformations. TypeScript also flags many hoisting-related bugs through control flow analysis.</p>
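<p>The loop pitfall above disappears with <code>let</code>, because each iteration gets its own binding. A synchronous sketch of the same effect:</p>
<pre><code class="language-typescript">const callbacks: Array&lt;() =&gt; number&gt; = [];
for (let i = 0; i &lt; 3; i++) {
  // each iteration of a let-loop creates a fresh `i` for the closure
  callbacks.push(() =&gt; i);
}

console.log(callbacks.map(fn =&gt; fn())); // [0, 1, 2] — not [3, 3, 3]
</code></pre>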
<h3 id="other-quirks-worth-knowing">Other Quirks Worth Knowing</h3>
<p>There are many more JavaScript quirks that TypeScript developers should be aware of:</p>
<p>The <code>arguments</code> object is not a real array. It is array-like but lacks array methods like <code>map</code> and <code>filter</code>. TypeScript discourages its use and encourages rest parameters (<code>...args</code>) instead.</p>
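<p>A small sketch of the rest-parameter alternative:</p>
<pre><code class="language-typescript">// rest parameters produce a real array, so array methods just work
function sumPositive(...nums: number[]): number {
  return nums.filter(n =&gt; n &gt; 0).reduce((acc, n) =&gt; acc + n, 0);
}

console.log(sumPositive(1, -2, 3)); // 4
</code></pre>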
<p><code>typeof</code> is unreliable for complex types: <code>typeof []</code> returns <code>&quot;object&quot;</code>, <code>typeof null</code> returns <code>&quot;object&quot;</code>, and <code>typeof NaN</code> returns <code>&quot;number&quot;</code>.</p>
<p>Automatic semicolon insertion (ASI) means JavaScript sometimes inserts semicolons where you did not intend them, leading to subtle bugs:</p>
<pre><code class="language-javascript">function foo() {
  return
    { bar: 42 };
}
foo(); // undefined — JS inserted a semicolon after return
</code></pre>
<p>JavaScript objects are not hash maps. They have a prototype chain, so properties like <code>constructor</code>, <code>toString</code>, and <code>__proto__</code> exist on every object. Using <code>Map</code> is safer for key-value storage.</p>
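<p>A short illustration of the difference (the keys are chosen to collide with prototype members):</p>
<pre><code class="language-typescript">const counts = new Map&lt;string, number&gt;();
counts.set(&quot;constructor&quot;, 1);

console.log(counts.get(&quot;constructor&quot;)); // 1 — no prototype collision
console.log(counts.get(&quot;toString&quot;));    // undefined — empty means empty

// a plain object always inherits these through its prototype chain
const obj: Record&lt;string, unknown&gt; = {};
console.log(typeof obj.toString);       // &quot;function&quot; — inherited
</code></pre>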
<h2 id="part-2-typescripts-type-system-the-fundamentals">Part 2: TypeScript's Type System — The Fundamentals</h2>
<p>Now that we understand what JavaScript gets wrong, let us look at how TypeScript's type system works.</p>
<h3 id="basic-types">Basic Types</h3>
<p>TypeScript provides types for all of JavaScript's primitives and adds a few of its own:</p>
<pre><code class="language-typescript">let isDone: boolean = false;
let decimal: number = 6;
let hex: number = 0xf00d;
let binary: number = 0b1010;
let octal: number = 0o744;
let big: bigint = 100n;
let color: string = &quot;blue&quot;;
let nothing: null = null;
let notDefined: undefined = undefined;
let sym: symbol = Symbol(&quot;key&quot;);
</code></pre>
<p>TypeScript also has several types that do not exist in JavaScript:</p>
<p><code>any</code> — Opts out of type checking entirely. Any value can be assigned to <code>any</code>, and <code>any</code> can be assigned to anything. Using <code>any</code> defeats the purpose of TypeScript and should be avoided.</p>
<p><code>unknown</code> — The type-safe counterpart to <code>any</code>. You can assign any value to <code>unknown</code>, but you cannot do anything with an <code>unknown</code> value without first narrowing its type through a type guard. Introduced in TypeScript 3.0.</p>
<p><code>void</code> — The return type of functions that do not return a value.</p>
<p><code>never</code> — The type of values that never occur. A function that always throws an exception or has an infinite loop has return type <code>never</code>. It is also used for exhaustiveness checking.</p>
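<p>As a sketch of how <code>unknown</code> forces narrowing before use (the function name is illustrative):</p>
<pre><code class="language-typescript">function lengthOf(value: unknown): number {
  // any member access on `value` before narrowing is a compile error
  if (typeof value === &quot;string&quot;) return value.length;
  if (Array.isArray(value)) return value.length;
  throw new TypeError(&quot;value has no length&quot;);
}

console.log(lengthOf(&quot;hello&quot;));   // 5
console.log(lengthOf([1, 2, 3])); // 3
</code></pre>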
<h3 id="arrays-and-tuples">Arrays and Tuples</h3>
<p>Arrays can be typed in two equivalent ways:</p>
<pre><code class="language-typescript">let list1: number[] = [1, 2, 3];
let list2: Array&lt;number&gt; = [1, 2, 3];
</code></pre>
<p>Tuples are fixed-length arrays where each element has a known type:</p>
<pre><code class="language-typescript">let pair: [string, number] = [&quot;hello&quot;, 42];
let first: string = pair[0];
let second: number = pair[1];
</code></pre>
<p>TypeScript 4.0 introduced variadic tuple types and labeled tuple elements, allowing you to spread tuple types in complex type-level operations and name elements for documentation. TypeScript 4.2 added rest elements in the middle of tuples:</p>
<pre><code class="language-typescript">type NamedPoint = [x: number, y: number, z: number];
type Head&lt;T extends any[]&gt; = T extends [infer H, ...any[]] ? H : never;
</code></pre>
<h3 id="interfaces-and-type-aliases">Interfaces and Type Aliases</h3>
<p>Interfaces describe the shape of objects:</p>
<pre><code class="language-typescript">interface User {
  name: string;
  age: number;
  email?: string;          // optional
  readonly id: number;     // cannot be modified after creation
}
</code></pre>
<p>Type aliases can describe the same shapes, plus unions, intersections, primitives, tuples, and more:</p>
<pre><code class="language-typescript">type StringOrNumber = string | number;
type Point = { x: number; y: number };
type Result&lt;T&gt; = { success: true; data: T } | { success: false; error: string };
</code></pre>
<p>The practical difference between interfaces and type aliases has narrowed over the years. Interfaces can be extended with <code>extends</code> and merged across declarations (declaration merging). Type aliases can represent unions, intersections, conditional types, and mapped types. For object shapes, either works. For everything else, use type aliases.</p>
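<p>A minimal sketch of the two interface-only capabilities mentioned above (the shapes are illustrative):</p>
<pre><code class="language-typescript">interface Animal {
  name: string;
}

// extension: build a new interface on top of an existing one
interface Dog extends Animal {
  breed: string;
}

// declaration merging: two declarations of the same name combine
interface Config { host: string }
interface Config { port: number }

const dog: Dog = { name: &quot;Rex&quot;, breed: &quot;Collie&quot; };
const config: Config = { host: &quot;localhost&quot;, port: 8080 };
</code></pre>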
<h3 id="enums">Enums</h3>
<p>TypeScript provides several kinds of enums:</p>
<pre><code class="language-typescript">// Numeric enum — auto-incremented from 0
enum Direction {
  Up,      // 0
  Down,    // 1
  Left,    // 2
  Right,   // 3
}

// String enum — each member must be initialized
enum Color {
  Red = &quot;RED&quot;,
  Green = &quot;GREEN&quot;,
  Blue = &quot;BLUE&quot;,
}

// Const enum — inlined at compile time, no runtime object
const enum Status {
  Active = &quot;ACTIVE&quot;,
  Inactive = &quot;INACTIVE&quot;,
}
</code></pre>
<p>Enums are one of the few TypeScript features that have runtime semantics — they generate JavaScript code (unless they are <code>const</code> enums). This is important because Node.js's type-stripping mode (<code>--experimental-strip-types</code>) cannot handle constructs with runtime semantics. TypeScript 5.8 introduced the <code>--erasableSyntaxOnly</code> flag to enforce that your code uses only syntax that can be erased without changing behavior.</p>
<p>Many TypeScript developers avoid enums entirely and use string literal unions instead:</p>
<pre><code class="language-typescript">type Direction = &quot;up&quot; | &quot;down&quot; | &quot;left&quot; | &quot;right&quot;;
</code></pre>
<p>This approach has no runtime overhead and works with type stripping.</p>
<h3 id="union-and-intersection-types">Union and Intersection Types</h3>
<p>Union types represent a value that can be one of several types:</p>
<pre><code class="language-typescript">function formatId(id: string | number): string {
  if (typeof id === &quot;string&quot;) {
    return id.toUpperCase();
  }
  return id.toString();
}
</code></pre>
<p>Intersection types combine multiple types into one:</p>
<pre><code class="language-typescript">type Timestamped = { createdAt: Date; updatedAt: Date };
type Named = { name: string };
type TimestampedUser = Named &amp; Timestamped;
</code></pre>
<h3 id="literal-types-and-narrowing">Literal Types and Narrowing</h3>
<p>TypeScript can narrow types to specific literal values:</p>
<pre><code class="language-typescript">type HttpMethod = &quot;GET&quot; | &quot;POST&quot; | &quot;PUT&quot; | &quot;DELETE&quot;;

function request(method: HttpMethod, url: string): void {
  // method is constrained to exactly these four strings
}
</code></pre>
<p>TypeScript performs control flow analysis to narrow types within conditional blocks:</p>
<pre><code class="language-typescript">function example(x: string | number | null) {
  if (x === null) {
    // x is null here
    return;
  }
  if (typeof x === &quot;string&quot;) {
    // x is string here
    console.log(x.toUpperCase());
  } else {
    // x is number here
    console.log(x.toFixed(2));
  }
}
</code></pre>
<p>This narrowing works with <code>typeof</code>, <code>instanceof</code>, <code>in</code>, equality checks, truthiness checks, and user-defined type guards.</p>
<h3 id="type-guards-and-type-predicates">Type Guards and Type Predicates</h3>
<p>You can define custom type guards using the <code>is</code> keyword:</p>
<pre><code class="language-typescript">interface Fish { swim(): void }
interface Bird { fly(): void }

function isFish(pet: Fish | Bird): pet is Fish {
  return (pet as Fish).swim !== undefined;
}

function move(pet: Fish | Bird) {
  if (isFish(pet)) {
    pet.swim(); // TypeScript knows pet is Fish
  } else {
    pet.fly();  // TypeScript knows pet is Bird
  }
}
</code></pre>
<p>TypeScript 5.5 introduced inferred type predicates, where the compiler can automatically infer <code>x is T</code> return types for simple guard functions without you writing the annotation explicitly.</p>
<h3 id="the-satisfies-operator">The <code>satisfies</code> Operator</h3>
<p>Introduced in TypeScript 4.9, <code>satisfies</code> lets you validate that an expression matches a type without widening it:</p>
<pre><code class="language-typescript">type Colors = Record&lt;string, [number, number, number] | string&gt;;

const palette = {
  red: [255, 0, 0],
  green: &quot;#00ff00&quot;,
  blue: [0, 0, 255],
} satisfies Colors;

// palette.red is still [number, number, number], not string | [number, number, number]
palette.red.map(c =&gt; c * 2); // works — type is preserved
</code></pre>
<p>Without <code>satisfies</code>, annotating the variable as <code>Colors</code> would widen each property to <code>string | [number, number, number]</code>, losing the specific type information.</p>
<h2 id="part-3-advanced-type-system-features">Part 3: Advanced Type System Features</h2>
<p>TypeScript has one of the most sophisticated type systems of any mainstream language. This section covers the advanced features that enable complex type-level programming.</p>
<h3 id="generics">Generics</h3>
<p>Generics let you write functions, classes, and types that work with any type while preserving type information:</p>
<pre><code class="language-typescript">function identity&lt;T&gt;(arg: T): T {
  return arg;
}

let output = identity(&quot;hello&quot;); // output is string
let num = identity(42);          // num is number
</code></pre>
<p>You can constrain generics with <code>extends</code>:</p>
<pre><code class="language-typescript">function getLength&lt;T extends { length: number }&gt;(arg: T): number {
  return arg.length;
}

getLength(&quot;hello&quot;);     // 5
getLength([1, 2, 3]);   // 3
getLength(42);           // Error — number doesn't have length
</code></pre>
<p>Generic defaults let you provide fallback types:</p>
<pre><code class="language-typescript">interface ApiResponse&lt;T = unknown&gt; {
  data: T;
  status: number;
}
</code></pre>
<h3 id="conditional-types">Conditional Types</h3>
<p>Conditional types select one of two types based on a condition:</p>
<pre><code class="language-typescript">type IsString&lt;T&gt; = T extends string ? true : false;

type A = IsString&lt;&quot;hello&quot;&gt;;  // true
type B = IsString&lt;42&gt;;       // false
</code></pre>
<p>The <code>infer</code> keyword lets you extract types within conditional types:</p>
<pre><code class="language-typescript">type ReturnType&lt;T&gt; = T extends (...args: any[]) =&gt; infer R ? R : never;
type ArrayElement&lt;T&gt; = T extends (infer E)[] ? E : never;

type R = ReturnType&lt;() =&gt; string&gt;;     // string
type E = ArrayElement&lt;number[]&gt;;       // number
</code></pre>
<p>Conditional types distribute over unions:</p>
<pre><code class="language-typescript">type ToArray&lt;T&gt; = T extends any ? T[] : never;
type Distributed = ToArray&lt;string | number&gt;; // string[] | number[]
</code></pre>
<h3 id="mapped-types">Mapped Types</h3>
<p>Mapped types create new types by transforming each property of an existing type:</p>
<pre><code class="language-typescript">type Readonly&lt;T&gt; = { readonly [K in keyof T]: T[K] };
type Partial&lt;T&gt; = { [K in keyof T]?: T[K] };
type Required&lt;T&gt; = { [K in keyof T]-?: T[K] };

// Key remapping (TypeScript 4.1)
type Getters&lt;T&gt; = {
  [K in keyof T as `get${Capitalize&lt;string &amp; K&gt;}`]: () =&gt; T[K]
};

interface Person { name: string; age: number; }
type PersonGetters = Getters&lt;Person&gt;;
// { getName: () =&gt; string; getAge: () =&gt; number }
</code></pre>
<h3 id="template-literal-types">Template Literal Types</h3>
<p>Introduced in TypeScript 4.1, template literal types let you build string types from other types:</p>
<pre><code class="language-typescript">type EventName = `${&quot;click&quot; | &quot;focus&quot; | &quot;blur&quot;}${&quot;&quot; | &quot;Capture&quot;}`;
// &quot;click&quot; | &quot;clickCapture&quot; | &quot;focus&quot; | &quot;focusCapture&quot; | &quot;blur&quot; | &quot;blurCapture&quot;

type PropEventSource&lt;T&gt; = {
  on&lt;K extends string &amp; keyof T&gt;(
    eventName: `${K}Changed`,
    callback: (newValue: T[K]) =&gt; void
  ): void;
};
</code></pre>
<p>TypeScript provides built-in string manipulation types: <code>Uppercase</code>, <code>Lowercase</code>, <code>Capitalize</code>, and <code>Uncapitalize</code>.</p>
<h3 id="utility-types">Utility Types</h3>
<p>TypeScript ships with a rich set of built-in utility types:</p>
<p><code>Partial&lt;T&gt;</code> makes all properties optional. <code>Required&lt;T&gt;</code> makes all properties required. <code>Readonly&lt;T&gt;</code> makes all properties read-only. <code>Record&lt;K, T&gt;</code> creates an object type with keys of type K and values of type T. <code>Pick&lt;T, K&gt;</code> selects a subset of properties from T. <code>Omit&lt;T, K&gt;</code> removes properties from T. <code>Exclude&lt;T, U&gt;</code> removes types from a union. <code>Extract&lt;T, U&gt;</code> extracts types from a union. <code>NonNullable&lt;T&gt;</code> removes <code>null</code> and <code>undefined</code>. <code>ReturnType&lt;T&gt;</code> extracts a function's return type. <code>Parameters&lt;T&gt;</code> extracts a function's parameter types as a tuple. <code>ConstructorParameters&lt;T&gt;</code> extracts constructor parameters. <code>InstanceType&lt;T&gt;</code> extracts the instance type of a constructor. <code>Awaited&lt;T&gt;</code> unwraps a Promise (introduced in TypeScript 4.5). <code>NoInfer&lt;T&gt;</code> prevents inference on a type parameter (introduced in TypeScript 5.4).</p>
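<p>A sketch combining a few of these built-ins (the <code>User</code> shape is illustrative):</p>
<pre><code class="language-typescript">interface User {
  id: number;
  name: string;
  email: string;
}

// an update payload: any subset of the editable fields, never the id
type UserPatch = Partial&lt;Omit&lt;User, &quot;id&quot;&gt;&gt;;

function applyPatch(user: User, patch: UserPatch): User {
  return { ...user, ...patch };
}

const updated = applyPatch(
  { id: 1, name: &quot;Ada&quot;, email: &quot;ada@example.com&quot; },
  { name: &quot;Ada Lovelace&quot; }
);
console.log(updated.name); // &quot;Ada Lovelace&quot;
</code></pre>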
<h3 id="discriminated-unions">Discriminated Unions</h3>
<p>Also called tagged unions, discriminated unions are one of the most powerful patterns in TypeScript:</p>
<pre><code class="language-typescript">type Shape =
  | { kind: &quot;circle&quot;; radius: number }
  | { kind: &quot;rectangle&quot;; width: number; height: number }
  | { kind: &quot;triangle&quot;; base: number; height: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case &quot;circle&quot;:
      return Math.PI * shape.radius ** 2;
    case &quot;rectangle&quot;:
      return shape.width * shape.height;
    case &quot;triangle&quot;:
      return (shape.base * shape.height) / 2;
  }
}
</code></pre>
<p>TypeScript narrows the type in each <code>case</code> branch, giving you access to the properties specific to that variant. If you add a new variant to the union and forget to handle it, you can use the <code>never</code> type for exhaustiveness checking:</p>
<pre><code class="language-typescript">function assertNever(x: never): never {
  throw new Error(`Unexpected value: ${x}`);
}

// Add default: return assertNever(shape); to catch unhandled cases
</code></pre>
<h3 id="using-and-explicit-resource-management"><code>using</code> and Explicit Resource Management</h3>
<p>TypeScript 5.2 added support for the TC39 Explicit Resource Management proposal (the <code>using</code> keyword):</p>
<pre><code class="language-typescript">function processFile() {
  using file = openFile(&quot;data.txt&quot;);
  // file is automatically disposed when the block exits
  return file.read();
} // file[Symbol.dispose]() called here

async function processStream() {
  await using stream = openStream(&quot;data.txt&quot;);
  // stream is automatically disposed asynchronously
  return await stream.read();
} // stream[Symbol.asyncDispose]() called here
</code></pre>
<p>This is similar to C#'s <code>using</code> statement or Python's <code>with</code> statement. It ensures resources like file handles, database connections, and locks are properly cleaned up.</p>
<h3 id="decorators">Decorators</h3>
<p>TypeScript has long supported experimental decorators (the legacy syntax), but TypeScript 5.0 introduced support for the TC39 Stage 3 decorators proposal, which has a different API:</p>
<pre><code class="language-typescript">function logged(originalMethod: any, context: ClassMethodDecoratorContext) {
  const methodName = String(context.name);
  function replacementMethod(this: any, ...args: any[]) {
    console.log(`Calling ${methodName}`);
    const result = originalMethod.call(this, ...args);
    console.log(`${methodName} returned ${result}`);
    return result;
  }
  return replacementMethod;
}

class Calculator {
  @logged
  add(a: number, b: number): number {
    return a + b;
  }
}
</code></pre>
<p>TypeScript 5.9 stabilized the TC39 Decorator Metadata proposal, enabling frameworks to build richer metadata-driven APIs.</p>
<h3 id="const-type-parameters"><code>const</code> Type Parameters</h3>
<p>Introduced in TypeScript 5.0, the <code>const</code> modifier on type parameters infers literal types instead of their widened base types:</p>
<pre><code class="language-typescript">function routes&lt;const T extends readonly string[]&gt;(paths: T): T {
  return paths;
}

const r = routes([&quot;home&quot;, &quot;about&quot;, &quot;contact&quot;]);
// r is readonly [&quot;home&quot;, &quot;about&quot;, &quot;contact&quot;], not string[]
</code></pre>
<h3 id="variance-annotations">Variance Annotations</h3>
<p>TypeScript 4.7 introduced explicit variance annotations for type parameters: <code>in</code> for contravariance and <code>out</code> for covariance:</p>
<pre><code class="language-typescript">interface Producer&lt;out T&gt; {
  produce(): T;
}

interface Consumer&lt;in T&gt; {
  consume(value: T): void;
}
</code></pre>
<p>These annotations help TypeScript check assignability more efficiently and catch variance errors at the declaration site rather than at usage sites.</p>
<h2 id="part-4-the-tsconfig.json-reference">Part 4: The tsconfig.json Reference</h2>
<p>The <code>tsconfig.json</code> file controls how the TypeScript compiler behaves. It contains hundreds of options organized into several categories. Here is a comprehensive reference of the most important ones.</p>
<h3 id="project-configuration">Project Configuration</h3>
<p><code>files</code> specifies an explicit list of files to include. <code>include</code> uses glob patterns to match files. <code>exclude</code> removes files from the <code>include</code> set. <code>extends</code> inherits configuration from another tsconfig file. <code>references</code> declares project references for composite builds.</p>
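<p>As a sketch, a project file combining these options (the paths are hypothetical):</p>
<pre><code class="language-json">{
  &quot;extends&quot;: &quot;./tsconfig.base.json&quot;,
  &quot;include&quot;: [&quot;src/**/*&quot;],
  &quot;exclude&quot;: [&quot;src/**/*.test.ts&quot;],
  &quot;references&quot;: [{ &quot;path&quot;: &quot;../shared&quot; }]
}
</code></pre>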
<h3 id="target-and-output">Target and Output</h3>
<p><code>target</code> specifies the ECMAScript version for the output JavaScript. Valid values include <code>es5</code>, <code>es6</code>/<code>es2015</code>, <code>es2016</code> through <code>es2025</code>, and <code>esnext</code>. As of TypeScript 6.0, the default is <code>es2025</code> and ES5 is deprecated. <code>module</code> specifies the module system for the output: <code>commonjs</code>, <code>esnext</code>, <code>nodenext</code>, <code>preserve</code>, and others. As of TypeScript 6.0, the default is <code>esnext</code>, and the legacy values <code>amd</code>, <code>umd</code>, and <code>systemjs</code> are deprecated. <code>lib</code> specifies which built-in type declarations to include: <code>dom</code>, <code>dom.iterable</code>, <code>es2015</code> through <code>es2025</code>, <code>esnext</code>, and specific feature libraries like <code>es2015.promise</code>.</p>
<p><code>outDir</code> specifies the output directory for compiled files. <code>outFile</code>, which concatenated all output into a single file, has been removed in TypeScript 6.0 — use a bundler instead. <code>rootDir</code> specifies the root directory of source files, controlling the output directory structure.</p>
<p><code>declaration</code> generates <code>.d.ts</code> declaration files alongside JavaScript output. <code>declarationDir</code> specifies a separate output directory for declaration files. <code>declarationMap</code> generates source maps for declaration files, enabling &quot;go to source&quot; in editors. <code>sourceMap</code> generates <code>.map</code> files for debugging. <code>inlineSourceMap</code> embeds source maps inside the generated JavaScript. <code>inlineSources</code> embeds the TypeScript source inside the source map.</p>
<p><code>removeComments</code> strips comments from the output. <code>noEmit</code> runs type checking without generating any output files. <code>emitDeclarationOnly</code> emits only <code>.d.ts</code> files, no JavaScript.</p>
<h3 id="module-resolution">Module Resolution</h3>
<p><code>moduleResolution</code> controls how TypeScript finds modules. The values are <code>node16</code>/<code>nodenext</code> (follows Node.js resolution rules including <code>exports</code> in package.json), <code>bundler</code> (designed for use with Vite, Webpack, esbuild, and similar tools), and the legacy <code>node</code> (deprecated in TypeScript 6.0 as <code>node10</code>). <code>baseUrl</code> sets a base directory for non-relative module imports; deprecated in TypeScript 6.0 — use <code>paths</code> instead. <code>paths</code> maps import specifiers to file locations. It only affects TypeScript's type checking, not the emitted JavaScript.</p>
<p><code>resolveJsonModule</code> allows importing <code>.json</code> files and generates types from their structure. <code>allowImportingTsExtensions</code> allows imports to include <code>.ts</code>, <code>.mts</code>, and <code>.cts</code> extensions; it requires <code>noEmit</code> or <code>emitDeclarationOnly</code>.</p>
<p><code>verbatimModuleSyntax</code> enforces that imports and exports are written exactly as they will be emitted — no transformation. If a <code>require</code> would be emitted, you must write <code>require</code>; if an <code>import</code> would be emitted, you must write <code>import</code>. <code>moduleDetection</code> controls how TypeScript detects whether a file is a module or a script; the value <code>force</code> treats all files as modules.</p>
<p><code>esModuleInterop</code> enables compatible interop between CommonJS and ES modules by generating helper functions. <code>allowSyntheticDefaultImports</code> allows default imports from modules that do not have a default export, for type-checking purposes only.</p>
<p><code>isolatedModules</code> ensures each file can be safely processed in isolation (as transpilers like Babel and SWC do). <code>isolatedDeclarations</code> ensures each file can generate its own declaration file without requiring type information from other files — useful for parallel declaration emit in large projects. Introduced in TypeScript 5.5.</p>
<h3 id="strict-type-checking">Strict Type Checking</h3>
<p><code>strict</code> is an umbrella flag that enables all strict type-checking options. As of TypeScript 6.0, this defaults to <code>true</code>. The individual flags it controls are:</p>
<p><code>noImplicitAny</code> errors when a type would be inferred as <code>any</code>. <code>strictNullChecks</code> makes <code>null</code> and <code>undefined</code> their own types that are not assignable to other types. <code>strictFunctionTypes</code> enables contravariant checking of function parameter types. <code>strictBindCallApply</code> enables stricter checking of <code>bind</code>, <code>call</code>, and <code>apply</code>. <code>strictPropertyInitialization</code> requires class properties to be initialized in the constructor or marked as optional. <code>noImplicitThis</code> errors when <code>this</code> has an implicit <code>any</code> type. <code>alwaysStrict</code> emits <code>&quot;use strict&quot;</code> in every output file — deprecated in TypeScript 6.0, as all code is now assumed to be in strict mode. <code>useUnknownInCatchVariables</code> makes <code>catch</code> clause variables <code>unknown</code> instead of <code>any</code>.</p>
<h3 id="additional-strictness">Additional Strictness</h3>
<p>These flags are not part of <code>strict</code> but are commonly used:</p>
<p><code>noUncheckedIndexedAccess</code> adds <code>undefined</code> to the type of indexed access expressions (array elements, object property access by index). Highly recommended. <code>noImplicitOverride</code> requires the <code>override</code> keyword when overriding a base class method. <code>noPropertyAccessFromIndexSignature</code> forces bracket notation for properties that come from an index signature. <code>exactOptionalPropertyTypes</code> distinguishes between a property being <code>undefined</code> and a property being missing entirely. <code>noImplicitReturns</code> errors if a function has code paths that do not return a value. <code>noFallthroughCasesInSwitch</code> errors on fallthrough cases in switch statements. <code>noUnusedLocals</code> errors on unused local variables. <code>noUnusedParameters</code> errors on unused function parameters. <code>erasableSyntaxOnly</code> ensures that all TypeScript-specific syntax can be removed without changing runtime behavior — required for Node.js's type-stripping mode. Introduced in TypeScript 5.8.</p>
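<p>A quick sketch of <code>noUncheckedIndexedAccess</code> in action:</p>
<pre><code class="language-typescript">const scores: number[] = [90, 85];

// with the flag on, scores[5] has type number | undefined, so using
// it as a plain number is a compile error until you handle the gap
const third: number = scores[5] ?? 0;
console.log(third); // 0
</code></pre>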
<h3 id="build-performance">Build Performance</h3>
<p><code>skipLibCheck</code> skips type-checking of <code>.d.ts</code> files. This is recommended for most projects because checking all of <code>node_modules</code> is slow and usually unnecessary. <code>forceConsistentCasingInFileNames</code> prevents case-sensitivity issues that cause problems on case-sensitive file systems (like Linux in CI). <code>incremental</code> saves compilation state to a <code>.tsbuildinfo</code> file and reuses it on subsequent builds. <code>composite</code> enables project references and forces certain options that enable incremental builds across multiple projects. <code>tsBuildInfoFile</code> specifies the location of the <code>.tsbuildinfo</code> file. <code>disableSourceOfProjectReferenceRedirect</code> uses declaration files instead of source files for referenced projects, improving build speed.</p>
<h3 id="other-notable-options">Other Notable Options</h3>
<p><code>jsx</code> controls how JSX is transformed. Values include <code>react</code> (transforms to <code>React.createElement</code>), <code>react-jsx</code> (transforms to the new JSX runtime), <code>react-jsxdev</code>, <code>preserve</code> (keeps JSX in the output), and <code>react-native</code>. <code>allowJs</code> allows JavaScript files in the TypeScript compilation. <code>checkJs</code> type-checks JavaScript files (requires <code>allowJs</code>). <code>maxNodeModuleJsDepth</code> controls how deep into <code>node_modules</code> TypeScript looks when checking JavaScript files.</p>
<p><code>plugins</code> specifies TypeScript language service plugins. <code>types</code> limits which <code>@types</code> packages are automatically included; an empty array <code>[]</code> disables automatic inclusion. As of TypeScript 6.0, <code>types</code> defaults to <code>[]</code>, meaning you must explicitly list the <code>@types</code> packages you need. <code>typeRoots</code> specifies directories to search for type declarations.</p>
<p><code>downlevelIteration</code> enables full support for iterables when targeting older JavaScript versions — deprecated in TypeScript 6.0. <code>importHelpers</code> imports helper functions from <code>tslib</code> instead of inlining them. <code>libReplacement</code> controls whether TypeScript looks for replacement lib packages like <code>@typescript/lib-dom</code>; introduced in TypeScript 5.8, it defaults to <code>false</code> in TypeScript 6.0.</p>
<h3 id="typescript-6.0-default-changes">TypeScript 6.0 Default Changes</h3>
<p>TypeScript 6.0 changed many defaults to reflect the modern ecosystem. Here is what changed:</p>
<ul>
<li><code>strict</code> now defaults to <code>true</code></li>
<li><code>module</code> now defaults to <code>esnext</code></li>
<li><code>target</code> now defaults to <code>es2025</code></li>
<li><code>noUncheckedSideEffectImports</code> now defaults to <code>true</code></li>
<li><code>libReplacement</code> now defaults to <code>false</code></li>
<li><code>rootDir</code> now defaults to <code>.</code> (the tsconfig directory)</li>
<li><code>types</code> now defaults to <code>[]</code></li>
</ul>
<p>You can temporarily suppress deprecation warnings by adding <code>&quot;ignoreDeprecations&quot;: &quot;6.0&quot;</code> to your tsconfig, but these deprecated options will be removed entirely in TypeScript 7.0.</p>
<h2 id="part-5-version-history-from-typescript-1.0-to-6.0">Part 5: Version History — From TypeScript 1.0 to 6.0</h2>
<h3 id="typescript-1.0-april-2014">TypeScript 1.0 (April 2014)</h3>
<p>The first stable release. It established the core language: type annotations, interfaces, classes, modules, generics, and enums. It was designed to be a strict superset of JavaScript with optional types.</p>
<h3 id="typescript-2.x-20162017">TypeScript 2.x (2016–2017)</h3>
<p>TypeScript 2.0 introduced <code>strictNullChecks</code>, discriminated unions, the <code>never</code> type, and control flow-based type analysis. These features fundamentally transformed how TypeScript code is written.</p>
<p>TypeScript 2.1 added <code>keyof</code> and mapped types, enabling type-level programming for the first time. <code>Partial</code>, <code>Readonly</code>, <code>Record</code>, and <code>Pick</code> became possible.</p>
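<p>A minimal sketch of what this unlocked. The built-in <code>Partial</code> is just a mapped type over <code>keyof</code>; here it is written out by hand against a concrete shape (the names are invented for illustration):</p>
<pre><code class="language-typescript">interface User { id: number; name: string }

// keyof User is the union 'id' | 'name'; the mapped type walks those
// keys and makes each property optional, which is what Partial does.
type OptionalUser = { [K in keyof User]?: User[K] };

const draft: OptionalUser = { name: 'Alice' }; // id may be omitted
</code></pre>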
<p>TypeScript 2.2 added the <code>object</code> type (distinct from <code>Object</code>).</p>
<p>TypeScript 2.3 added <code>--strict</code> as an umbrella flag and introduced generic defaults.</p>
<p>TypeScript 2.4 added string enums.</p>
<p>TypeScript 2.8 introduced conditional types and the <code>infer</code> keyword — arguably the most transformative addition to the type system since generics.</p>
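<p>To see <code>infer</code> at work without any library machinery, a toy example: the keyword binds a type variable to whatever occupies a structural position in the type being matched:</p>
<pre><code class="language-typescript">// U is inferred as the element type of the array on the left.
// The same mechanism powers built-ins like ReturnType and Awaited.
type Elem = number[] extends (infer U)[] ? U : never; // Elem is number

const n: Elem = 42; // a string here would be a compile error
</code></pre>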
<p>TypeScript 2.9 added <code>import()</code> types for dynamic imports.</p>
<h3 id="typescript-3.x-20182020">TypeScript 3.x (2018–2020)</h3>
<p>TypeScript 3.0 introduced the <code>unknown</code> type, project references (for monorepo builds), and rest elements in tuple types.</p>
<p>TypeScript 3.1 added mapped types on tuples and arrays.</p>
<p>TypeScript 3.2 added <code>bigint</code> support.</p>
<p>TypeScript 3.4 introduced <code>const</code> assertions (<code>as const</code>) for creating deeply readonly literal types.</p>
<p>TypeScript 3.7 added optional chaining (<code>?.</code>), nullish coalescing (<code>??</code>), assertion functions, and recursive type aliases.</p>
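<p>Optional chaining and nullish coalescing compose naturally; a small sketch with invented names:</p>
<pre><code class="language-typescript">interface Config { server?: { port?: number } }

function portOf(config: Config): number {
  // ?. short-circuits to undefined if server is missing;
  // ?? supplies the fallback only for null/undefined, not for 0.
  return config.server?.port ?? 8080;
}
</code></pre>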
<p>TypeScript 3.8 added <code>import type</code> and <code>export type</code> for type-only imports and exports, along with <code>#private</code> fields (ECMAScript private fields).</p>
<p>TypeScript 3.9 focused on performance improvements.</p>
<h3 id="typescript-4.x-20202023">TypeScript 4.x (2020–2023)</h3>
<p>TypeScript 4.0 introduced variadic tuple types and labeled tuple elements.</p>
<p>TypeScript 4.1 added template literal types and key remapping in mapped types — enabling string manipulation at the type level.</p>
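<p>A small illustration of template literal types (names invented for the sketch):</p>
<pre><code class="language-typescript">type Size = 'small' | 'large';

// The template distributes over the union:
type ClassName = `btn-${Size}`; // 'btn-small' | 'btn-large'

const cls: ClassName = 'btn-small'; // 'btn-medium' would be a compile error
</code></pre>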
<p>TypeScript 4.2 added rest elements in the middle of tuples.</p>
<p>TypeScript 4.3 added <code>override</code> keyword and template literal expression types.</p>
<p>TypeScript 4.4 added control flow analysis for aliased conditions and discriminants.</p>
<p>TypeScript 4.5 added the <code>Awaited</code> type, <code>import</code> assertions, and experimental ES module support for Node.js.</p>
<p>TypeScript 4.7 added variance annotations (<code>in</code>/<code>out</code>), <code>moduleSuffixes</code>, and <code>--module nodenext</code>.</p>
<p>TypeScript 4.8 improved narrowing for <code>{}</code> and <code>unknown</code>.</p>
<p>TypeScript 4.9 introduced the <code>satisfies</code> operator.</p>
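<p>A sketch of why <code>satisfies</code> matters: it validates a value against a type without widening the value to that type, so the narrow inferred types survive:</p>
<pre><code class="language-typescript">type Color = string | [number, number, number];
type Palette = { [name: string]: Color };

const palette = {
  red: [255, 0, 0],
  green: '#00ff00',
} satisfies Palette; // the shape is checked, inference is preserved

// With a plain ': Palette' annotation, green would only be a Color;
// with satisfies, TypeScript still knows green is a string:
const hex = palette.green.toUpperCase();
</code></pre>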
<h3 id="typescript-5.x-20232025">TypeScript 5.x (2023–2025)</h3>
<p>TypeScript 5.0 was a massive release. It added TC39 Stage 3 decorators (replacing the legacy experimental decorators), <code>const</code> type parameters, enum improvements, <code>--moduleResolution bundler</code>, and migrated the codebase from internal namespaces to ES modules, reducing the npm package size by 58%.</p>
<p>TypeScript 5.1 added easier implicit returns for <code>undefined</code>-returning functions and unrelated types for getters and setters.</p>
<p>TypeScript 5.2 introduced <code>using</code> declarations (explicit resource management), decorator metadata, and named/anonymous tuple elements.</p>
<p>TypeScript 5.3 added <code>import</code> attributes, narrowing within <code>switch (true)</code>, and <code>--resolution-mode</code> in import types.</p>
<p>TypeScript 5.4 introduced the <code>NoInfer</code> utility type, improved type narrowing in closures, and new <code>Object.groupBy</code> and <code>Map.groupBy</code> types.</p>
<p>TypeScript 5.5 introduced inferred type predicates (the compiler can automatically infer <code>x is T</code>), regex syntax checking, <code>isolatedDeclarations</code>, and an improved editor experience.</p>
<p>TypeScript 5.6 began flagging nullish and truthy checks that can never vary (expressions that are always truthy or always nullish in conditions), and added iterator helper types and the <code>--noUncheckedSideEffectImports</code> flag.</p>
<p>TypeScript 5.7 improved detection of never-initialized variables, added ES2024 target support with <code>Object.groupBy</code> and <code>Map.groupBy</code> types, and the <code>--rewriteRelativeImportExtensions</code> flag for direct TypeScript execution.</p>
<p>TypeScript 5.8 added the <code>--erasableSyntaxOnly</code> flag for compatibility with Node.js type stripping, the <code>--libReplacement</code> flag, granular return type checks for conditional expressions, <code>--module nodenext</code> support for <code>require()</code> of ESM, and <code>--module node18</code> for stable Node.js 18 resolution. This was the last TypeScript 5.x with significant new features, as the team had begun work on the Go rewrite.</p>
<p>TypeScript 5.9 (August 2025) added <code>import defer</code> for deferred module evaluation, expandable hover tooltips in editors, a redesigned <code>tsc --init</code> command, configurable hover length, and significant performance improvements through type instantiation caching. This was the final TypeScript 5.x release.</p>
<h3 id="typescript-6.0-march-2026">TypeScript 6.0 (March 2026)</h3>
<p>TypeScript 6.0 is a &quot;bridge release&quot; — the last version of the compiler written in JavaScript, designed to prepare the ecosystem for TypeScript 7.0's Go rewrite. It makes sweeping changes to defaults and removes legacy options.</p>
<p>New defaults: <code>strict: true</code>, <code>module: esnext</code>, <code>target: es2025</code>, <code>types: []</code>, <code>rootDir: .</code>. This means every new TypeScript project is strict by default, targets modern JavaScript, and does not automatically include <code>@types</code> packages.</p>
<p>Deprecations and removals: <code>target: es5</code> is deprecated. <code>--outFile</code> is removed. <code>--baseUrl</code> (without <code>paths</code>) is deprecated. <code>--moduleResolution node10</code>/<code>classic</code> is deprecated. The <code>amd</code>, <code>umd</code>, and <code>system</code> module formats are deprecated. <code>alwaysStrict: false</code> is deprecated because all code is assumed strict.</p>
<p>New features: Temporal API types (the <code>Temporal</code> global is now in the standard library, reflecting its Stage 4 status in TC39), <code>Map.getOrInsert</code> and <code>Map.getOrInsertComputed</code> types from the &quot;upsert&quot; proposal, improved type inference for methods (less context-sensitivity on <code>this</code>-less functions), <code>#/</code> subpath imports, <code>es2025</code> target and lib, and <code>--stableTypeOrdering</code> to preview the deterministic type ordering that will be the default in TypeScript 7.0.</p>
<p>The <code>ignoreDeprecations: &quot;6.0&quot;</code> escape hatch allows teams to suppress deprecation warnings during migration, but TypeScript 7.0 will not support any of the deprecated options. A <code>ts5to6</code> migration tool can automate configuration adjustments for <code>baseUrl</code> and <code>rootDir</code>.</p>
<h3 id="typescript-7.0-upcoming-2026">TypeScript 7.0 (Upcoming, 2026)</h3>
<p>TypeScript 7.0 is the single most ambitious change in TypeScript's history: a complete rewrite of the compiler and language service in Go, codenamed Project Corsa. The project was announced by Anders Hejlsberg in March 2025 and has been progressing rapidly ever since.</p>
<p>The new compiler, called <code>tsgo</code>, is a drop-in replacement for <code>tsc</code>. It uses Go's native compilation and goroutines for parallel type checking. The performance improvements are dramatic: the VS Code codebase (1.5 million lines of TypeScript) compiles in 8.74 seconds with <code>tsgo</code> compared to 89 seconds with <code>tsc</code> — a 10.2x speedup. The Sentry project dropped from 133 seconds to 16 seconds. Memory usage drops roughly 2.9x.</p>
<p>Why Go instead of Rust? The TypeScript team explained that Go's garbage collector and memory model map more closely to TypeScript's existing data structures. The compiler was designed around mutable shared state, and Rust's ownership model would have required fundamental architectural changes. Go allowed a relatively faithful port while achieving native speed.</p>
<p>The language itself does not change. The same TypeScript code, the same type system, the same errors. The differences are in the tooling: <code>tsgo</code> uses the Language Server Protocol (LSP) instead of the proprietary TSServer protocol, which means editor integrations need to be updated. Custom plugins and transformers that patch TypeScript internals may not work. All deprecated options from 6.0 become hard removals.</p>
<p>As of March 2026, the <code>tsgo</code> CLI is available as <code>@typescript/native-preview</code> on npm. A VS Code extension provides the Go-based language service for daily use. Type checking is described as &quot;very nearly complete,&quot; with remaining mismatches down to known incomplete work or intentional behavior changes. Full emit (generating <code>.js</code> and <code>.d.ts</code> files) is still in progress. The stable TypeScript 7.0 release is targeting mid-2026.</p>
<p>The ecosystem implications are significant. Tools built on the TSServer protocol (many editor extensions, linting integrations) need to migrate to LSP. Custom TypeScript transformers need new APIs. The <code>--baseUrl</code> and other deprecated options simply will not exist. But for most teams, the migration is straightforward: install the new package, run <code>tsgo</code> alongside <code>tsc</code> to verify identical results, then switch.</p>
<h2 id="part-6-the-tooling-ecosystem">Part 6: The Tooling Ecosystem</h2>
<p>TypeScript does not exist in isolation. A rich ecosystem of tools has grown around it.</p>
<h3 id="build-tools-and-transpilers">Build Tools and Transpilers</h3>
<p><code>tsc</code> is TypeScript's own compiler. It does both type checking and code generation. For many projects, it is all you need.</p>
<p><code>esbuild</code> is an extremely fast bundler written in Go. It can transpile TypeScript to JavaScript (stripping types) but does not type-check. Many projects use <code>esbuild</code> for fast builds and <code>tsc --noEmit</code> for type checking.</p>
<p><code>SWC</code> (Speedy Web Compiler) is a Rust-based transpiler used by Next.js, Vite, and other tools. Like <code>esbuild</code>, it strips types without checking them.</p>
<p><code>Babel</code> with <code>@babel/preset-typescript</code> also strips types. It was once the primary alternative to <code>tsc</code> for compilation, but <code>esbuild</code> and <code>SWC</code> have largely supplanted it for new projects.</p>
<p><code>Vite</code> uses <code>esbuild</code> for development and Rollup (or Rolldown, its Rust rewrite) for production builds. It is the most popular build tool for new frontend projects as of 2026.</p>
<h3 id="linting">Linting</h3>
<p><code>ESLint</code> with <code>@typescript-eslint</code> is the standard linting setup. The <code>@typescript-eslint</code> package provides TypeScript-aware lint rules that go beyond what the compiler checks, like enforcing <code>===</code>, detecting redundant type assertions, and catching common patterns that lead to bugs.</p>
<p><code>Biome</code> is a newer Rust-based linter and formatter that is faster than ESLint. It supports TypeScript natively and is gaining adoption, especially in projects that value startup speed.</p>
<h3 id="testing">Testing</h3>
<p><code>Vitest</code> is the modern testing framework most commonly used with TypeScript. It runs on Vite, supports TypeScript out of the box, and is significantly faster than Jest for large projects.</p>
<p><code>Jest</code> with <code>ts-jest</code> or <code>@swc/jest</code> remains widely used, especially in existing projects. Configuration can be more involved than with Vitest.</p>
<p><code>Type testing</code> is a category of its own. Libraries like <code>expect-type</code> and <code>tsd</code> let you write tests that verify type-level behavior, ensuring that your types produce the correct results.</p>
<h3 id="runtime-validation">Runtime Validation</h3>
<p>TypeScript types are erased at runtime. If you receive data from an API, a database, or user input, you cannot trust that it matches your TypeScript types. Runtime validation libraries bridge this gap:</p>
<p><code>Zod</code> is the most popular runtime validation library for TypeScript. You define a schema, and Zod infers the TypeScript type from it, keeping your runtime validation and your types in sync.</p>
<p><code>Valibot</code> is a smaller, tree-shakeable alternative to Zod with a functional API.</p>
<p><code>ArkType</code> defines types using a TypeScript-like syntax string, providing another approach to runtime validation with minimal overhead.</p>
<h3 id="package-publishing">Package Publishing</h3>
<p>If you publish a TypeScript library to npm, you need to emit both JavaScript and declaration files. The standard approach is to use <code>tsc</code> with <code>declaration: true</code> and <code>declarationMap: true</code>. For more complex setups, tools like <code>tsup</code> (built on <code>esbuild</code>) handle bundling, declaration generation, and dual CJS/ESM publishing.</p>
<p>TypeScript 5.5's <code>isolatedDeclarations</code> option enables tools other than <code>tsc</code> to generate declaration files, because each file contains enough type information to produce its declaration independently. This unlocks parallel declaration emit and faster builds in monorepos.</p>
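<p>In practice, <code>isolatedDeclarations</code> mostly asks you to annotate exported signatures explicitly, so each file's declaration can be produced without cross-file inference. A minimal sketch:</p>
<pre><code class="language-typescript">// Under isolatedDeclarations, omitting this return type annotation on an
// exported function is an error, because emitting the .d.ts would
// otherwise require running full type inference:
export function add(a: number, b: number): number {
  return a + b;
}
</code></pre>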
<h3 id="node.js-native-typescript-support">Node.js Native TypeScript Support</h3>
<p>As of Node.js 22.6, you can run TypeScript files directly with the <code>--experimental-strip-types</code> flag, and since Node.js 23.6 type stripping is enabled by default. Node.js uses the Amaro library (based on SWC's WASM build) to strip type annotations from your code before execution. This does not type-check — it simply removes the TypeScript syntax, leaving valid JavaScript.</p>
<p>The limitation is that only &quot;erasable&quot; syntax is supported: type annotations, interfaces, type aliases, and other constructs that have no runtime semantics. Enums (which generate JavaScript code), namespaces with values, and parameter properties in constructors are not supported under type stripping. TypeScript 5.8's <code>--erasableSyntaxOnly</code> flag ensures your code is compatible.</p>
<p>Bloomberg's <code>ts-blank-space</code> is a similar tool that replaces TypeScript syntax with whitespace, preserving line numbers so source maps are not needed for debugging.</p>
<h2 id="part-7-patterns-and-best-practices">Part 7: Patterns and Best Practices</h2>
<h3 id="start-strict-stay-strict">Start Strict, Stay Strict</h3>
<p>Always enable <code>strict: true</code> in your tsconfig (and as of TypeScript 6.0, it is the default). Every individual strictness flag catches real bugs. <code>noUncheckedIndexedAccess</code> is not part of <code>strict</code> but is highly recommended — it adds <code>undefined</code> to array element access, forcing you to handle the possibility that an index is out of bounds.</p>
<h3 id="avoid-any">Avoid <code>any</code></h3>
<p>The <code>any</code> type opts out of type checking. Every <code>any</code> in your codebase is a potential runtime error. Use <code>unknown</code> when you truly do not know a type, and narrow it with type guards. If you are working with third-party libraries that use <code>any</code>, consider wrapping them with properly typed interfaces.</p>
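<p>A sketch of the <code>unknown</code>-plus-narrowing pattern, using an invented helper that normalizes a thrown value (which is typed <code>unknown</code> in modern catch clauses):</p>
<pre><code class="language-typescript">function toError(value: unknown): Error {
  // Each check narrows value; no 'as' assertion is ever needed.
  if (value instanceof Error) return value;
  if (typeof value === 'string') return new Error(value);
  return new Error(JSON.stringify(value));
}
</code></pre>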
<h3 id="prefer-interfaces-for-object-shapes-type-aliases-for-everything-else">Prefer Interfaces for Object Shapes, Type Aliases for Everything Else</h3>
<p>Interfaces support declaration merging and can be extended, making them better for object shapes that might be augmented (like a library's public API). Type aliases are more versatile — they support unions, intersections, conditional types, and mapped types.</p>
<h3 id="use-discriminated-unions-for-state-management">Use Discriminated Unions for State Management</h3>
<p>Instead of optional properties and boolean flags, use discriminated unions:</p>
<pre><code class="language-typescript">// Bad
type Request = {
  status: &quot;loading&quot; | &quot;success&quot; | &quot;error&quot;;
  data?: ResponseData;
  error?: Error;
};

// Good
type Request =
  | { status: &quot;loading&quot; }
  | { status: &quot;success&quot;; data: ResponseData }
  | { status: &quot;error&quot;; error: Error };
</code></pre>
<p>The discriminated union makes it impossible to access <code>data</code> when the status is <code>&quot;error&quot;</code> or <code>error</code> when the status is <code>&quot;success&quot;</code>.</p>
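<p>The pattern pairs well with exhaustiveness checking. A common sketch: a <code>switch</code> on the discriminant whose <code>default</code> arm assigns to <code>never</code>, so adding a new status without handling it becomes a compile error:</p>
<pre><code class="language-typescript">type State =
  | { status: 'loading' }
  | { status: 'success'; data: string }
  | { status: 'error'; error: Error };

function describe(state: State): string {
  switch (state.status) {
    case 'loading': return 'Loading';
    case 'success': return state.data;        // data exists only here
    case 'error': return state.error.message; // error exists only here
    default: {
      // If a new status is added to State, this line fails to compile.
      const unreachable: never = state;
      return unreachable;
    }
  }
}
</code></pre>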
<h3 id="use-as-const-for-literal-inference">Use <code>as const</code> for Literal Inference</h3>
<p>When you want TypeScript to infer the narrowest possible type, use <code>as const</code>:</p>
<pre><code class="language-typescript">const config = {
  endpoint: &quot;https://api.example.com&quot;,
  retries: 3,
  methods: [&quot;GET&quot;, &quot;POST&quot;],
} as const;

// config.endpoint is &quot;https://api.example.com&quot;, not string
// config.retries is 3, not number
// config.methods is readonly [&quot;GET&quot;, &quot;POST&quot;], not string[]
</code></pre>
<h3 id="validate-external-data-at-the-boundary">Validate External Data at the Boundary</h3>
<p>TypeScript's types are erased at runtime. Data from APIs, databases, local storage, and user input should be validated using a runtime validation library like Zod. Define the schema once and let the library infer the TypeScript type:</p>
<pre><code class="language-typescript">import { z } from &quot;zod&quot;;

const UserSchema = z.object({
  id: z.number(),
  name: z.string(),
  email: z.string().email(),
});

type User = z.infer&lt;typeof UserSchema&gt;; // { id: number; name: string; email: string }

const response = await fetch(&quot;/api/users/1&quot;);
const user = UserSchema.parse(await response.json()); // validates and returns typed User
</code></pre>
<h3 id="use-project-references-for-large-codebases">Use Project References for Large Codebases</h3>
<p>For monorepos and large projects, TypeScript's project references (<code>composite: true</code> and <code>references</code> in tsconfig) enable incremental builds that only recompile changed projects. Combined with <code>--build</code> mode, this can dramatically reduce build times.</p>
<h3 id="prefer-ecmascript-features-over-typescript-only-features">Prefer ECMAScript Features Over TypeScript-Only Features</h3>
<p>TypeScript's enums, namespaces, and parameter properties have runtime semantics that are not part of the ECMAScript standard. Prefer standard alternatives: string literal unions instead of enums, ES modules instead of namespaces, and explicit property assignment in constructors instead of parameter properties. This makes your code compatible with type stripping, Node.js native TypeScript support, and the broader JavaScript ecosystem.</p>
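<p>For example, a string enum can usually be replaced with a <code>const</code> object plus a derived union, which erases cleanly under type stripping:</p>
<pre><code class="language-typescript">// Instead of: enum Direction { Up = 'up', Down = 'down' }
const Direction = { Up: 'up', Down: 'down' } as const;
type Direction = (typeof Direction)[keyof typeof Direction]; // 'up' | 'down'

function move(d: Direction): string {
  return d; // callers can pass Direction.Up or the literal 'up'
}
</code></pre>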
<h2 id="part-8-common-pitfalls-and-how-to-avoid-them">Part 8: Common Pitfalls and How to Avoid Them</h2>
<h3 id="the-object.keys-problem">The <code>Object.keys</code> Problem</h3>
<p><code>Object.keys()</code> returns <code>string[]</code>, not <code>(keyof T)[]</code>:</p>
<pre><code class="language-typescript">const user = { name: &quot;Alice&quot;, age: 30 };
const keys = Object.keys(user); // string[], not (&quot;name&quot; | &quot;age&quot;)[]
</code></pre>
<p>This is by design — JavaScript objects can have additional properties at runtime that TypeScript does not know about. If you are certain of the object's shape, you can cast: <code>(Object.keys(user) as (keyof typeof user)[])</code>.</p>
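<p>If you take that route, the benefit is that iteration stays fully typed; a small sketch (with the caveat that extra runtime properties would still slip through the cast):</p>
<pre><code class="language-typescript">const user = { name: 'Alice', age: 30 };

// The cast recovers the literal key union for this specific object:
const keys = Object.keys(user) as (keyof typeof user)[]; // ('name' | 'age')[]

for (const key of keys) {
  // key is 'name' | 'age', so user[key] indexes without an error
  console.log(key, user[key]);
}
</code></pre>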
<h3 id="structural-vs-nominal-typing">Structural vs Nominal Typing</h3>
<p>TypeScript uses structural typing, meaning that any object with the right shape is assignable to a type, regardless of its name:</p>
<pre><code class="language-typescript">interface Cat { name: string; meow(): void }
interface Dog { name: string; meow(): void }

const cat: Cat = { name: &quot;Whiskers&quot;, meow() {} };
const dog: Dog = cat; // This works! They have the same shape.
</code></pre>
<p>If you need nominal typing (types that are distinct even with the same shape), use branded types:</p>
<pre><code class="language-typescript">type USD = number &amp; { __brand: &quot;USD&quot; };
type EUR = number &amp; { __brand: &quot;EUR&quot; };

function toUSD(amount: number): USD { return amount as USD; }
function toEUR(amount: number): EUR { return amount as EUR; }

const dollars: USD = toUSD(100);
const euros: EUR = toEUR(85);
// dollars = euros; // Error — different brands
</code></pre>
<h3 id="type-assertions-are-escape-hatches">Type Assertions Are Escape Hatches</h3>
<p><code>as</code> assertions tell the compiler to trust you. They are not runtime checks:</p>
<pre><code class="language-typescript">const value = someFunction() as string; // No runtime check!
</code></pre>
<p>If <code>someFunction()</code> returns a number, you will get a runtime error. Prefer type narrowing over type assertions whenever possible.</p>
<h3 id="index-signatures-and-undefined">Index Signatures and <code>undefined</code></h3>
<p>Without <code>noUncheckedIndexedAccess</code>, accessing an object with an index signature does not add <code>undefined</code>:</p>
<pre><code class="language-typescript">interface Cache {
  [key: string]: string;
}

const cache: Cache = {};
const value = cache[&quot;missing&quot;]; // string, but actually undefined at runtime!
</code></pre>
<p>Enable <code>noUncheckedIndexedAccess</code> to make this <code>string | undefined</code>.</p>
<h2 id="part-9-what-lies-ahead">Part 9: What Lies Ahead</h2>
<h3 id="the-typescript-7.0-transition">The TypeScript 7.0 Transition</h3>
<p>The transition from TypeScript 6.0 to 7.0 will be the most significant upgrade most TypeScript developers experience. The language is unchanged, but the tooling pipeline changes fundamentally. Teams should take these steps:</p>
<ol>
<li>Audit your tsconfig for deprecated options now.</li>
<li>Upgrade to TypeScript 6.0 and resolve all deprecation warnings.</li>
<li>Test with <code>@typescript/native-preview</code> (<code>tsgo --noEmit</code>) in your CI pipeline.</li>
<li>Identify any custom plugins, transformers, or tools that depend on the TSServer protocol or TypeScript's JavaScript API.</li>
<li>Monitor the TypeScript 7.0 iteration plan for the stable release date.</li>
</ol>
<h3 id="ecmascript-proposals-to-watch">ECMAScript Proposals to Watch</h3>
<p>Several in-progress ECMAScript proposals will affect TypeScript when they reach Stage 3 or 4:</p>
<p>The Pattern Matching proposal would add a <code>match</code> expression to JavaScript, similar to Rust's <code>match</code> or Scala's pattern matching. TypeScript would provide type narrowing within each pattern arm.</p>
<p>The Type Annotations proposal (ECMAScript type comments) would add syntax for type annotations directly to JavaScript. If adopted, it could eventually mean that TypeScript's type syntax becomes part of JavaScript itself — though the types would be ignored at runtime, just like comments. This is conceptually similar to how Node.js's type stripping works today, but standardized.</p>
<p>The Pipe Operator proposal (<code>|&gt;</code>) would enable functional-style composition. TypeScript would need to infer types through pipe chains.</p>
<h3 id="the-broader-trend-native-speed-javascript-tooling">The Broader Trend: Native-Speed JavaScript Tooling</h3>
<p>TypeScript 7's Go rewrite is part of a larger trend in the JavaScript ecosystem. <code>esbuild</code> is written in Go. <code>SWC</code> and <code>Biome</code> are written in Rust. <code>Rolldown</code> (the Vite bundler) is written in Rust. <code>Oxc</code> (a JavaScript/TypeScript toolchain) is written in Rust. The era of writing JavaScript developer tools in JavaScript is ending. These native-speed tools reduce build times from minutes to seconds, and the performance gains compound in large codebases and CI/CD pipelines.</p>
<h2 id="conclusion">Conclusion</h2>
<p>TypeScript has come an extraordinarily long way from its 2012 debut as &quot;JavaScript with types.&quot; It has become the default language for frontend development, a major force in backend Node.js development, and increasingly used in mobile and edge computing. Its type system is among the most expressive of any mainstream language, capable of catching entire categories of bugs at compile time while remaining fully compatible with the vast JavaScript ecosystem.</p>
<p>The story of TypeScript in 2026 is one of convergence. The language is converging with JavaScript as more TypeScript syntax becomes natively supported in Node.js and potentially in the ECMAScript standard itself. The tooling is converging on native speed as the Go rewrite promises 10x faster builds. And the defaults are converging on strictness as TypeScript 6.0 makes <code>strict: true</code> the default for all new projects.</p>
<p>Whether you are just starting with TypeScript or have been using it for years, there has never been a better time to invest in understanding it deeply. The language is stable, the ecosystem is mature, the tooling is about to get dramatically faster, and the community is larger than ever. Every line of TypeScript you write today will benefit from the performance improvements, editor enhancements, and ecosystem refinements that are coming in the months ahead.</p>
]]></content:encoded>
      <category>typescript</category>
      <category>javascript</category>
      <category>programming-languages</category>
      <category>web-development</category>
      <category>tutorial</category>
      <category>deep-dive</category>
    </item>
    <item>
      <title>Git From First Principles, and Why Trunk-Based Development Will Save Your Team</title>
      <link>https://observermagazine.github.io/blog/git-and-trunk-based-development</link>
      <description>A comprehensive deep dive into Git as a version control system — every command, every workflow, every configuration. Then, a persuasive case for trunk-based development aimed at teams reluctant to leave long-lived branches behind. Backed by a decade of DORA research.</description>
      <pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/git-and-trunk-based-development</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="part-1-git-from-first-principles">Part 1: Git From First Principles</h2>
<h3 id="what-is-version-control-and-why-does-it-exist">What Is Version Control, and Why Does It Exist?</h3>
<p>Before version control systems existed, developers maintained multiple copies of their source code by hand — renaming folders to things like <code>project-v2-final-FINAL-fixed</code> and hoping they could remember which copy was which. When two developers needed to work on the same file, they would shout across the office or send emails with zipped attachments. This was expensive, error-prone, and utterly unsustainable.</p>
<p>Version control systems solve this by tracking every change to every file over time, allowing multiple people to work on the same codebase simultaneously, and providing the ability to revert to any previous state. Git is the dominant version control system today, with approximately 85% market share among software development teams.</p>
<h3 id="a-brief-history-from-locks-to-distributed-merging">A Brief History: From Locks to Distributed Merging</h3>
<p>Version control evolved through three generations, each expanding the ability to work in parallel.</p>
<p><strong>First generation (1970s–1980s):</strong> Systems like SCCS and RCS used a lock-edit-unlock model. Only one person could edit a file at a time. Everyone else had to wait. This was safe but slow.</p>
<p><strong>Second generation (1990s–2000s):</strong> Systems like CVS, Subversion (SVN), and Team Foundation Version Control (TFVC — the version control component of TFS/Azure DevOps) introduced a centralized server model with merge-based concurrent editing. Multiple people could edit the same file simultaneously, and the system would merge their changes. But you needed a network connection to the central server for most operations — committing, viewing history, branching.</p>
<p><strong>Third generation (2005–present):</strong> Distributed systems like Git and Mercurial gave every developer a complete copy of the entire repository, including its full history. You can commit, branch, view history, and diff entirely offline. You synchronize with teammates by pushing and pulling changesets between repositories. Linus Torvalds created Git in 2005 specifically for Linux kernel development, where thousands of developers needed to work independently across time zones without a single point of failure.</p>
<h3 id="how-git-thinks-snapshots-not-diffs">How Git Thinks: Snapshots, Not Diffs</h3>
<p>Most version control systems store data as a list of file-based changes (deltas). Git is fundamentally different — it thinks of its data as a series of <strong>snapshots</strong> of the entire project at each point in time. When you commit, Git takes a snapshot of every file in your staging area and stores a reference to that snapshot. If a file has not changed, Git does not store it again; it stores a pointer to the previous identical file.</p>
<p>Every piece of data in Git is checksummed with SHA-1 (or SHA-256 in newer versions) before it is stored. This means Git knows if any file has been corrupted or tampered with. You cannot change the contents of any file or directory without Git knowing.</p>
<h3 id="the-three-states">The Three States</h3>
<p>Every file in a Git working directory exists in one of three states:</p>
<p><strong>Modified</strong> means you have changed the file in your working directory but have not staged it yet.</p>
<p><strong>Staged</strong> means you have marked a modified file to be included in your next commit snapshot.</p>
<p><strong>Committed</strong> means the data is safely stored in your local Git database.</p>
<p>This gives rise to the three main sections of a Git project:</p>
<ol>
<li><strong>Working Directory</strong> — the actual files on your disk</li>
<li><strong>Staging Area</strong> (also called the &quot;index&quot;) — a file that stores information about what will go into your next commit</li>
<li><strong>Git Directory</strong> (the <code>.git</code> folder) — where Git stores the metadata and object database for your project</li>
</ol>
<p>The basic Git workflow is: you modify files in your working directory, you stage the changes you want to include, and then you commit, which takes the staged snapshot and stores it permanently in the Git directory.</p>
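<p>The whole cycle fits in a handful of commands. A minimal sketch using a throwaway repository (names are illustrative):</p>
<pre><code class="language-bash">git init --quiet demo &amp;&amp; cd demo
git config user.name &quot;Demo&quot; &amp;&amp; git config user.email &quot;demo@example.com&quot;

echo '# My Project' &gt; README.md   # 1. modify the working directory
git add README.md                 # 2. stage the change
git commit -m &quot;Add README&quot;        # 3. commit the staged snapshot

git status --short                # prints nothing: the working tree is clean
git log --oneline                 # shows the single &quot;Add README&quot; commit
</code></pre>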
<h2 id="part-2-every-command-you-need">Part 2: Every Command You Need</h2>
<h3 id="setup-and-configuration">Setup and Configuration</h3>
<p>Before your first commit, configure your identity:</p>
<pre><code class="language-bash"># Set your name and email (stored in commits)
git config --global user.name &quot;Your Name&quot;
git config --global user.email &quot;your.email@example.com&quot;

# Set default branch name to 'main'
git config --global init.defaultBranch main

# Set default editor (for commit messages)
git config --global core.editor &quot;code --wait&quot;  # VS Code
git config --global core.editor &quot;vim&quot;           # Vim
git config --global core.editor &quot;notepad&quot;       # Notepad on Windows

# Enable colored output
git config --global color.ui auto

# Set line ending behavior
git config --global core.autocrlf true   # Windows (converts LF to CRLF)
git config --global core.autocrlf input  # Mac/Linux (converts CRLF to LF on commit)

# View all configuration
git config --list --show-origin
</code></pre>
<p>Git configuration has three levels, each overriding the previous:</p>
<ul>
<li><strong>System</strong> (<code>/etc/gitconfig</code>) — applies to every user on the machine</li>
<li><strong>Global</strong> (<code>~/.gitconfig</code>) — applies to your user account</li>
<li><strong>Local</strong> (<code>.git/config</code> in a repository) — applies to that specific repository</li>
</ul>
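<p>Because local overrides global, you can keep a personal identity as your default and a work identity in a specific repository. A sketch (addresses are illustrative):</p>
<pre><code class="language-bash">git config --global user.email &quot;personal@example.com&quot;

git init --quiet work-project &amp;&amp; cd work-project
# Written to .git/config; wins inside this repository only
git config user.email &quot;work@company.example&quot;

git config user.email            # work@company.example
git config --global user.email   # personal@example.com
</code></pre>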
<h3 id="creating-and-cloning-repositories">Creating and Cloning Repositories</h3>
<pre><code class="language-bash"># Initialize a new repository in the current directory
git init

# Initialize a new repository in a new directory
git init my-project

# Clone an existing repository
git clone https://github.com/user/repo.git

# Clone into a specific directory
git clone https://github.com/user/repo.git my-local-name

# Clone only the most recent commit (shallow clone, saves bandwidth)
git clone --depth 1 https://github.com/user/repo.git

# Clone a specific branch
git clone --branch develop https://github.com/user/repo.git
</code></pre>
<h3 id="staging-and-committing">Staging and Committing</h3>
<pre><code class="language-bash"># Check the status of your files
git status

# Short status (more compact output)
git status -s

# Stage a specific file
git add README.md

# Stage multiple specific files
git add file1.cs file2.cs file3.cs

# Stage all changes in a directory
git add src/

# Stage all changes in the entire repository
git add .

# Stage all tracked files that have been modified (ignores new untracked files)
git add -u

# Interactively stage parts of files (choose which hunks to stage)
git add -p

# Unstage a file (remove from staging area, keep changes in working directory)
git restore --staged README.md

# Discard changes in working directory (DANGEROUS — cannot be undone)
git restore README.md

# Commit staged changes with a message
git commit -m &quot;Add user authentication module&quot;

# Commit with a multi-line message
git commit -m &quot;Add user authentication module&quot; -m &quot;Implements JWT-based auth with refresh tokens.
Closes #42.&quot;

# Stage all tracked modified files AND commit in one step
git commit -am &quot;Fix null reference in OrderService&quot;

# Amend the most recent commit (change message or add forgotten files)
git add forgotten-file.cs
git commit --amend -m &quot;Add user authentication module (with tests)&quot;

# Amend without changing the message
git commit --amend --no-edit

# Create an empty commit (useful for triggering CI)
git commit --allow-empty -m &quot;Trigger CI rebuild&quot;
</code></pre>
<h3 id="viewing-history">Viewing History</h3>
<pre><code class="language-bash"># View commit log
git log

# Compact one-line format
git log --oneline

# Show a graph of branches
git log --oneline --graph --all

# Show the last 5 commits
git log -5

# Show commits that changed a specific file
git log -- src/Program.cs

# Show commits by a specific author
git log --author=&quot;Alice&quot;

# Show commits containing a search term in the message
git log --grep=&quot;authentication&quot;

# Show commits between two dates
git log --after=&quot;2026-01-01&quot; --before=&quot;2026-03-01&quot;

# Show the diff introduced by each commit
git log -p

# Show stats (files changed, insertions, deletions)
git log --stat

# Show a pretty custom format
git log --pretty=format:&quot;%h %ad | %s%d [%an]&quot; --date=short

# Find commits that added or removed a specific string (pickaxe search)
git log -S &quot;connectionString&quot; --oneline

# Show who last modified each line of a file (blame)
git blame src/Services/AuthService.cs

# Show blame for a specific range of lines
git blame -L 10,20 src/Services/AuthService.cs
</code></pre>
<h3 id="branching">Branching</h3>
<p>Branches in Git are incredibly lightweight — a branch is simply a pointer to a specific commit, stored as a 41-byte file (a 40-character commit hash plus a newline). Creating a branch is nearly instantaneous regardless of repository size.</p>
<pre><code class="language-bash"># List local branches
git branch

# List all branches (including remote-tracking branches)
git branch -a

# List branches with their last commit
git branch -v

# Create a new branch (does NOT switch to it)
git branch feature/user-profile

# Create a new branch AND switch to it
git checkout -b feature/user-profile
# Modern equivalent (Git 2.23+):
git switch -c feature/user-profile

# Switch to an existing branch
git checkout main
# Modern equivalent:
git switch main

# Rename a branch
git branch -m old-name new-name

# Rename the current branch
git branch -m new-name

# Delete a branch (only if fully merged)
git branch -d feature/user-profile

# Force delete a branch (even if not merged — DANGEROUS)
git branch -D feature/user-profile

# Delete a remote branch
git push origin --delete feature/user-profile
</code></pre>
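<p>You can verify for yourself that a branch is just a tiny file (this assumes Git's default &quot;files&quot; reference backend):</p>
<pre><code class="language-bash">git init --quiet demo &amp;&amp; cd demo
git config user.name &quot;Demo&quot; &amp;&amp; git config user.email &quot;demo@example.com&quot;
git commit --allow-empty -m &quot;First commit&quot;
git branch feature/user-profile

# The branch is a plain file containing one commit hash
cat .git/refs/heads/feature/user-profile      # a 40-character SHA-1
wc -c &lt; .git/refs/heads/feature/user-profile  # 41 (40 hex chars + newline)
</code></pre>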
<h3 id="merging">Merging</h3>
<pre><code class="language-bash"># Merge a branch into the current branch
git merge feature/user-profile

# Merge with a merge commit even if fast-forward is possible
git merge --no-ff feature/user-profile

# Abort a merge in progress (if there are conflicts)
git merge --abort

# Continue a merge after resolving conflicts
git add .  # Stage the resolved files
git merge --continue
# Or equivalently:
git commit
</code></pre>
<p><strong>Fast-forward merge</strong> happens when the target branch has no new commits since the feature branch was created. Git simply moves the pointer forward. No merge commit is created.</p>
<p><strong>Three-way merge</strong> happens when both branches have diverged. Git creates a new &quot;merge commit&quot; with two parents.</p>
<p><strong>Merge conflicts</strong> occur when the same lines in the same file were modified differently in both branches. Git marks these in the file:</p>
<pre><code>&lt;&lt;&lt;&lt;&lt;&lt;&lt; HEAD
    return user.GetFullName();
=======
    return $&quot;{user.FirstName} {user.LastName}&quot;;
&gt;&gt;&gt;&gt;&gt;&gt;&gt; feature/user-profile
</code></pre>
<p>You resolve the conflict by editing the file to the desired final state, removing the markers, staging the file, and completing the merge.</p>
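<p>Here is that entire lifecycle in a throwaway repository (file and branch names are illustrative; <code>git init -b main</code> requires Git 2.28+):</p>
<pre><code class="language-bash">git init -q -b main demo &amp;&amp; cd demo
git config user.name &quot;Demo&quot; &amp;&amp; git config user.email &quot;demo@example.com&quot;
echo 'original line' &gt; app.txt
git add app.txt &amp;&amp; git commit -q -m &quot;Initial commit&quot;

# Both branches change the same line differently
git switch -q -c feature
echo 'feature version' &gt; app.txt
git commit -q -am &quot;Edit on feature&quot;
git switch -q main
echo 'main version' &gt; app.txt
git commit -q -am &quot;Edit on main&quot;

git merge feature            # CONFLICT (content): Merge conflict in app.txt
grep '&lt;&lt;&lt;&lt;&lt;&lt;&lt;' app.txt      # the conflict markers are now in the file

echo 'merged version' &gt; app.txt   # edit to the desired final state
git add app.txt                   # stage the resolution
git commit --no-edit              # complete the merge
</code></pre>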
<h3 id="rebasing">Rebasing</h3>
<p>Rebase is an alternative to merging. Instead of creating a merge commit, it replays your commits on top of the target branch, creating a linear history.</p>
<pre><code class="language-bash"># Rebase current branch onto main
git rebase main

# Interactive rebase — edit, squash, reorder, or drop commits
git rebase -i main

# Interactive rebase of the last 3 commits
git rebase -i HEAD~3

# Abort a rebase in progress
git rebase --abort

# Continue after resolving a conflict during rebase
git add .
git rebase --continue

# Skip a problematic commit during rebase
git rebase --skip
</code></pre>
<p>In interactive rebase (<code>git rebase -i</code>), you get an editor showing commits with action keywords:</p>
<pre><code>pick abc1234 Add user model
pick def5678 Add user service
pick ghi9012 Fix typo in user service

# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like squash, but discard this commit's message
# d, drop = remove commit
</code></pre>
<p><strong>The golden rule of rebasing:</strong> Never rebase commits that have been pushed to a shared branch that others are working from. Rebasing rewrites commit history — if someone else has based their work on the original commits, their history will diverge from yours, causing confusion and pain.</p>
<h3 id="remote-repositories">Remote Repositories</h3>
<pre><code class="language-bash"># List remotes
git remote -v

# Add a remote
git remote add origin https://github.com/user/repo.git

# Add a second remote (e.g., a fork)
git remote add upstream https://github.com/original/repo.git

# Change a remote's URL
git remote set-url origin https://github.com/user/new-repo.git

# Remove a remote
git remote remove upstream

# Fetch changes from a remote (does NOT merge)
git fetch origin

# Fetch from all remotes
git fetch --all

# Pull (fetch + merge) from the remote
git pull origin main

# Pull with rebase instead of merge
git pull --rebase origin main

# Push to a remote
git push origin main

# Push and set upstream tracking
git push -u origin feature/user-profile

# Push all branches
git push --all origin

# Push tags
git push --tags

# Force push (DANGEROUS — overwrites remote history)
git push --force origin feature/user-profile

# Force push with safety (only overwrites if remote hasn't changed)
git push --force-with-lease origin feature/user-profile
</code></pre>
<h3 id="stashing">Stashing</h3>
<p>Stash temporarily shelves changes so you can work on something else:</p>
<pre><code class="language-bash"># Stash all modified tracked files
git stash

# Stash with a description
git stash push -m &quot;WIP: halfway through refactoring auth&quot;

# Stash including untracked files
git stash -u

# List all stashes
git stash list

# Apply the most recent stash (keeps it in stash list)
git stash apply

# Apply a specific stash
git stash apply stash@{2}

# Apply and remove from stash list
git stash pop

# Drop a specific stash
git stash drop stash@{0}

# Clear all stashes
git stash clear

# Create a branch from a stash
git stash branch new-branch-name stash@{0}
</code></pre>
<h3 id="tagging">Tagging</h3>
<p>Tags are permanent bookmarks for specific commits, typically used for releases:</p>
<pre><code class="language-bash"># List tags
git tag

# List tags matching a pattern
git tag -l &quot;v1.*&quot;

# Create a lightweight tag (just a pointer)
git tag v1.0.0

# Create an annotated tag (stores tagger info, date, message)
git tag -a v1.0.0 -m &quot;Release version 1.0.0&quot;

# Tag a specific commit
git tag -a v1.0.0 abc1234 -m &quot;Release version 1.0.0&quot;

# Push a specific tag
git push origin v1.0.0

# Push all tags
git push origin --tags

# Delete a local tag
git tag -d v1.0.0

# Delete a remote tag
git push origin --delete v1.0.0
</code></pre>
<h3 id="undoing-things">Undoing Things</h3>
<pre><code class="language-bash"># Undo the last commit, keep changes staged
git reset --soft HEAD~1

# Undo the last commit, keep changes in working directory (unstaged)
git reset --mixed HEAD~1  # --mixed is the default

# Undo the last commit, DISCARD all changes (DANGEROUS)
git reset --hard HEAD~1

# Reset a single file to the last committed version
git checkout HEAD -- src/Program.cs
# Modern equivalent (restores both index and working tree from HEAD):
git restore --source=HEAD --staged --worktree src/Program.cs

# Create a new commit that reverses a previous commit
# (safe for shared branches — doesn't rewrite history)
git revert abc1234

# Revert a merge commit (must specify which parent to keep)
git revert -m 1 abc1234

# Recover a &quot;lost&quot; commit (unreachable commits are kept for at least ~30 days by default)
git reflog
git checkout abc1234  # or git cherry-pick abc1234
</code></pre>
<h3 id="cherry-picking">Cherry-Picking</h3>
<p>Apply a specific commit from one branch to another:</p>
<pre><code class="language-bash"># Apply a single commit to the current branch
git cherry-pick abc1234

# Apply multiple commits
git cherry-pick abc1234 def5678

# Cherry-pick without committing (just stage the changes)
git cherry-pick --no-commit abc1234

# Abort a cherry-pick
git cherry-pick --abort
</code></pre>
<h3 id="advanced-bisect-clean-archive">Advanced: Bisect, Clean, Archive</h3>
<pre><code class="language-bash"># Binary search for a bug-introducing commit
git bisect start
git bisect bad          # Current commit is broken
git bisect good v1.0.0  # This tag was known good
# Git checks out the middle commit. Test it, then:
git bisect good  # if this commit works
git bisect bad   # if this commit is broken
# Repeat until Git identifies the exact commit
git bisect reset  # Return to your original branch

# Remove untracked files
git clean -n    # Dry run (show what would be deleted)
git clean -f    # Actually delete untracked files
git clean -fd   # Delete untracked files and directories
git clean -fX   # Delete only ignored files (clean build artifacts)

# Create an archive of the repository
git archive --format=zip HEAD &gt; project.zip
git archive --format=tar.gz --prefix=project/ HEAD &gt; project.tar.gz
</code></pre>
<h3 id="the.gitignore-file">The .gitignore File</h3>
<p><code>.gitignore</code> tells Git which files and directories to never track:</p>
<pre><code class="language-gitignore"># Compiled output
bin/
obj/
publish/
*.dll
*.exe
*.pdb

# IDE files
.vs/
.vscode/
*.user
*.suo
.idea/

# OS files
.DS_Store
Thumbs.db

# Environment and secrets
.env
appsettings.Development.json

# NuGet packages
packages/

# Python
__pycache__/
*.pyc
.venv/

# Node
node_modules/

# Logs
*.log

# Negate a pattern (force include something that would otherwise be ignored)
!important.log
</code></pre>
<p>Patterns work as follows:</p>
<ul>
<li><code>*.log</code> matches any file ending in <code>.log</code></li>
<li><code>bin/</code> matches a directory named <code>bin</code> anywhere in the repo</li>
<li><code>/bin/</code> matches <code>bin</code> only at the repository root</li>
<li><code>**/logs</code> matches <code>logs</code> directories anywhere in the hierarchy</li>
<li><code>!</code> negates a pattern (force includes something)</li>
</ul>
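<p>When a pattern does not behave as you expect, <code>git check-ignore -v</code> reports exactly which rule matched a given path (the file names below are illustrative):</p>
<pre><code class="language-bash">git init -q demo &amp;&amp; cd demo
printf '%s\n' '*.log' 'bin/' '!important.log' &gt; .gitignore

git check-ignore -v bin/app.dll   # .gitignore:2:bin/  bin/app.dll
git check-ignore -v debug.log     # .gitignore:1:*.log  debug.log
git check-ignore important.log || echo 'not ignored (negated)'
</code></pre>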
<h3 id="git-aliases">Git Aliases</h3>
<p>Create shortcuts for frequently used commands:</p>
<pre><code class="language-bash">git config --global alias.co checkout
git config --global alias.br branch
git config --global alias.ci commit
git config --global alias.st status
git config --global alias.unstage &quot;restore --staged&quot;
git config --global alias.last &quot;log -1 HEAD&quot;
git config --global alias.lg &quot;log --oneline --graph --all --decorate&quot;
git config --global alias.amend &quot;commit --amend --no-edit&quot;
</code></pre>
<p>Now <code>git lg</code> gives you a beautiful branch graph, <code>git co main</code> switches to main, and <code>git amend</code> amends the last commit without changing the message.</p>
<h3 id="git-hooks">Git Hooks</h3>
<p>Git hooks are scripts that run automatically at certain points in the Git workflow. They live in <code>.git/hooks/</code> (local, not committed) or can be managed with tools like Husky or pre-commit.</p>
<p>Common hooks:</p>
<ul>
<li><code>pre-commit</code> — runs before a commit is created (lint, format, run fast tests)</li>
<li><code>commit-msg</code> — validates the commit message format</li>
<li><code>pre-push</code> — runs before pushing (run full test suite)</li>
<li><code>post-merge</code> — runs after a merge (restore NuGet packages, run migrations)</li>
</ul>
<p>Example <code>pre-commit</code> hook that runs dotnet format:</p>
<pre><code class="language-bash">#!/bin/sh
# .git/hooks/pre-commit

dotnet format --verify-no-changes
if [ $? -ne 0 ]; then
    echo &quot;Code formatting issues found. Run 'dotnet format' to fix.&quot;
    exit 1
fi
</code></pre>
<h2 id="part-3-branching-workflows">Part 3: Branching Workflows</h2>
<h3 id="gitflow-the-heavyweight">Gitflow (The Heavyweight)</h3>
<p>Gitflow, introduced by Vincent Driessen in 2010, uses multiple long-lived branches:</p>
<ul>
<li><code>main</code> (or <code>master</code>) — always reflects production</li>
<li><code>develop</code> — integration branch for the next release</li>
<li><code>feature/*</code> — one branch per feature, branched from and merged back to <code>develop</code></li>
<li><code>release/*</code> — preparation for a production release, branched from <code>develop</code>, merged to both <code>main</code> and <code>develop</code></li>
<li><code>hotfix/*</code> — urgent production fixes, branched from <code>main</code>, merged to both <code>main</code> and <code>develop</code></li>
</ul>
<p>Gitflow was designed for projects with scheduled releases and multiple supported versions. It provides strict control but at the cost of significant complexity. Dave Farley, co-author of <em>Continuous Delivery</em>, has argued publicly that Gitflow contradicts CI/CD principles because it delays integration and introduces complexity that slows teams down.</p>
<h3 id="github-flow-the-lightweight">GitHub Flow (The Lightweight)</h3>
<p>GitHub Flow is a simplified model:</p>
<ol>
<li><code>main</code> is always deployable</li>
<li>Create a branch from <code>main</code> for your work</li>
<li>Make commits on your branch</li>
<li>Open a pull request</li>
<li>Get code review</li>
<li>Merge to <code>main</code></li>
<li>Deploy</li>
</ol>
<p>This is simpler than Gitflow but still relies on feature branches that can become long-lived if the developer does not merge frequently.</p>
<h3 id="trunk-based-development-the-streamlined">Trunk-Based Development (The Streamlined)</h3>
<p>Trunk-based development is the simplest model. There is one branch: <code>main</code> (the trunk). All developers commit to the trunk at least once every 24 hours. There are no long-lived branches. For teams that need code review, short-lived feature branches (lasting hours or at most a day or two) are used, but they are merged to trunk quickly.</p>
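<p>In day-to-day terms the rhythm looks like this. A sketch using a local bare repository to stand in for the team's remote (repository and branch names are illustrative):</p>
<pre><code class="language-bash"># Simulate the team remote with a local bare repository
git init -q --bare -b main team.git
git clone -q team.git me &amp;&amp; cd me
git config user.name &quot;Dev&quot; &amp;&amp; git config user.email &quot;dev@example.com&quot;
git commit -q --allow-empty -m &quot;Initial trunk commit&quot;
git push -q origin main

# A short-lived branch: created, reviewed, and merged the same day
git switch -q -c quick/extract-pricing-service
echo 'class PricingService {}' &gt; PricingService.cs
git add . &amp;&amp; git commit -q -m &quot;Extract pricing service skeleton&quot;

git switch -q main
git merge -q quick/extract-pricing-service   # fast-forward
git push -q origin main
git branch -d quick/extract-pricing-service  # the branch lived hours, not weeks
</code></pre>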
<p>This is what we are going to argue for in Part 4.</p>
<h2 id="part-4-the-case-for-trunk-based-development">Part 4: The Case for Trunk-Based Development</h2>
<p>This section is for the team that is hesitant. You have been using TFS (Team Foundation Server, now Azure DevOps) for years. Your workflow involves multiple long-lived branches — a <code>develop</code> branch, release branches, feature branches that live for weeks or months, and hotfix branches. You have features whose code spans multiple sprints. You know your current workflow. It works, mostly. Why change?</p>
<p>Because the evidence says you should.</p>
<h3 id="the-evidence-dora-and-accelerate">The Evidence: DORA and Accelerate</h3>
<p>The DevOps Research and Assessment (DORA) program, founded by Dr. Nicole Forsgren, Gene Kim, and Jez Humble and now part of Google Cloud, is the largest and longest-running academically rigorous research investigation into software delivery performance. Since 2014, their annual State of DevOps reports have surveyed tens of thousands of professionals across thousands of organizations.</p>
<p>Their findings, published in the book <em>Accelerate: The Science of Lean Software and DevOps</em> (2018), are unambiguous: trunk-based development is a statistically significant predictor of higher software delivery performance.</p>
<p>DORA measures performance using five key metrics: deployment frequency (how often you deploy to production), lead time for changes (how long from commit to production), change failure rate (what percentage of deployments cause failures), failed deployment recovery time (how quickly you fix failures), and reliability (how consistently your service meets performance goals).</p>
<p>Their research has consistently shown that speed and stability are not tradeoffs — elite performers do well across all five metrics, while low performers do poorly across all of them. This directly contradicts the intuition that moving faster means more breakage.</p>
<p>Elite performers who meet their reliability targets are 2.3 times more likely to practice trunk-based development than their peers. Elite performing teams deploy multiple times per day, have change lead times under 26 hours, maintain change failure rates below 1%, and recover from failures in less than 6 hours.</p>
<p>The research is clear: organizations that practice trunk-based development with continuous integration achieve higher delivery throughput AND higher stability than organizations using long-lived feature branches.</p>
<h3 id="why-long-lived-branches-are-an-antipattern">Why Long-Lived Branches Are an Antipattern</h3>
<p>Every day a branch lives, it accumulates divergence from the trunk. This divergence creates three escalating problems.</p>
<p><strong>Merge conflicts grow exponentially.</strong> When two developers both modify the same module over the course of a sprint, the number of potential conflicts grows with each passing day. A branch that lives for two weeks will have significantly more conflicts than one that lives for two hours. These are not just textual conflicts that Git can flag — they are semantic conflicts where the code merges cleanly but the behavior is wrong. Your tests might pass individually on each branch but fail when the branches are combined. The longer you wait to integrate, the harder and riskier the integration becomes.</p>
<p><strong>Feedback is delayed.</strong> When your code sits on a feature branch for three weeks, nobody else sees it. Nobody uses it. Nobody discovers that it conflicts with what they are building. Nobody discovers that it breaks a subtle assumption in another module. You do not learn about these problems until merge day, when it is hardest and most expensive to fix them. Thierry de Pauw, writing about trunk-based development benefits, makes this point forcefully: when you work on trunk, your work-in-progress gets used by your whole team before any actual user sees it, and they find bugs that they would never find if you were isolated on a feature branch.</p>
<p><strong>Integration becomes a terrifying event.</strong> When you merge a branch that has been alive for weeks, the merge is large, risky, and stressful. This is what the DevOps Handbook calls &quot;deployment pain&quot; — the anxiety that comes with pushing large batches of changes. Teams that experience this pain naturally merge less often, which makes each merge even larger and more painful. It is a vicious cycle.</p>
<p>Martin Fowler, in his comprehensive article on branching patterns, quotes Dan Bodart's observation that &quot;feature branching is a poor man's modular architecture, instead of building systems with the ability to easily swap in and out features at runtime/deploytime they couple themselves to the source control providing this mechanism through manual merging.&quot; In other words, long-lived branches are often a symptom of poor architecture, not a solution to it.</p>
<h3 id="but-our-features-span-multiple-sprints">&quot;But Our Features Span Multiple Sprints!&quot;</h3>
<p>This is the most common objection, and it reveals a fundamental misunderstanding. Trunk-based development does not mean you cannot work on large features. It means you do not use long-lived branches to isolate that work. Instead, you use two techniques: feature flags and branch by abstraction.</p>
<h4 id="feature-flags">Feature Flags</h4>
<p>A feature flag (also called a feature toggle) is a conditional in your code that controls whether a feature is visible to users. You merge your work-in-progress to trunk behind a flag. The code is in production, running through CI, being integrated with everyone else's work — but users do not see it until you flip the flag.</p>
<p>In a .NET application, this can be as simple as:</p>
<pre><code class="language-csharp">// A simple feature flag using configuration
public class FeatureFlags
{
    public bool EnableNewCheckoutFlow { get; set; }
    public bool EnableAdvancedSearch { get; set; }
    public bool EnableBulkImport { get; set; }
}

// In Program.cs / Startup
builder.Services.Configure&lt;FeatureFlags&gt;(
    builder.Configuration.GetSection(&quot;Features&quot;));

// In your service or controller
public class CheckoutService
{
    private readonly FeatureFlags _flags;

    public CheckoutService(IOptions&lt;FeatureFlags&gt; flags) =&gt;
        _flags = flags.Value;

    public async Task&lt;Order&gt; ProcessCheckout(Cart cart)
    {
        if (_flags.EnableNewCheckoutFlow)
            return await ProcessNewCheckout(cart);
        else
            return await ProcessLegacyCheckout(cart);
    }
}
</code></pre>
<pre><code class="language-json">// appsettings.json (production — flag off)
{
  &quot;Features&quot;: {
    &quot;EnableNewCheckoutFlow&quot;: false,
    &quot;EnableAdvancedSearch&quot;: false,
    &quot;EnableBulkImport&quot;: true
  }
}
</code></pre>
<pre><code class="language-json">// appsettings.Development.json (local dev — flag on)
{
  &quot;Features&quot;: {
    &quot;EnableNewCheckoutFlow&quot;: true,
    &quot;EnableAdvancedSearch&quot;: true,
    &quot;EnableBulkImport&quot;: true
  }
}
</code></pre>
<p>Martin Fowler categorizes feature flags into several types: release toggles (to hide incomplete features), experiment toggles (for A/B testing), ops toggles (to disable features under load), and permission toggles (to enable features for specific users). Release toggles — the type most relevant to trunk-based development — should be short-lived. Once a feature is complete and released, remove the flag. Pete Hodgson, writing on martinfowler.com, warns that feature flags have a carrying cost and should be treated as inventory — teams should proactively work to keep the number of active flags as low as possible. Knight Capital Group's famous $460 million loss is a cautionary tale about what happens when old feature flags are not cleaned up.</p>
<h4 id="branch-by-abstraction">Branch by Abstraction</h4>
<p>Branch by abstraction, a technique named by Paul Hammant and documented extensively by Martin Fowler, is for large-scale infrastructure changes — replacing a database, swapping an ORM, rewriting a major subsystem. The idea is to create an abstraction layer (an interface) between the code that uses a component and the component itself, then gradually swap out the implementation behind that abstraction.</p>
<p>Here is a concrete .NET example. Suppose you are migrating from Dapper to Entity Framework:</p>
<pre><code class="language-csharp">// Step 1: Create the abstraction
public interface IOrderRepository
{
    Task&lt;Order?&gt; GetByIdAsync(Guid id);
    Task&lt;IReadOnlyList&lt;Order&gt;&gt; GetRecentAsync(int count);
    Task CreateAsync(Order order);
    Task UpdateAsync(Order order);
}

// Step 2: Wrap the existing Dapper implementation
public class DapperOrderRepository : IOrderRepository
{
    private readonly IDbConnection _db;

    public DapperOrderRepository(IDbConnection db) =&gt; _db = db;

    public async Task&lt;Order?&gt; GetByIdAsync(Guid id) =&gt;
        await _db.QueryFirstOrDefaultAsync&lt;Order&gt;(
            &quot;SELECT * FROM Orders WHERE Id = @Id&quot;, new { Id = id });

    // ... other methods using Dapper
}

// Step 3: Build the new EF implementation alongside it
public class EfOrderRepository : IOrderRepository
{
    private readonly AppDbContext _context;

    public EfOrderRepository(AppDbContext context) =&gt; _context = context;

    public async Task&lt;Order?&gt; GetByIdAsync(Guid id) =&gt;
        await _context.Orders.FindAsync(id);

    // ... other methods using EF
}

// Step 4: Use a feature flag to switch between them
builder.Services.AddScoped&lt;IOrderRepository&gt;(sp =&gt;
{
    var flags = sp.GetRequiredService&lt;IOptions&lt;FeatureFlags&gt;&gt;().Value;
    return flags.UseEntityFramework
        ? sp.GetRequiredService&lt;EfOrderRepository&gt;()
        : sp.GetRequiredService&lt;DapperOrderRepository&gt;();
});
</code></pre>
<p>Every step is a small, mergeable commit to trunk. At no point is the codebase broken. You can release at any time. The old and new implementations coexist. Once the migration is complete and verified, you remove the old implementation, the flag, and optionally the abstraction layer.</p>
<p>Jez Humble describes how his team at ThoughtWorks used this technique to replace both an ORM (iBatis to Hibernate) and a web framework (Velocity/JsTemplate to Ruby on Rails) for the Go continuous delivery tool — all while continuing to release the application regularly.</p>
<h3 id="but-what-about-production-hotfixes">&quot;But What About Production Hotfixes?&quot;</h3>
<p>This is actually easier with trunk-based development, not harder.</p>
<p>In a Gitflow model, a hotfix requires: creating a branch from <code>main</code>, making the fix, merging back to <code>main</code>, tagging a release, and then merging back to <code>develop</code> (and possibly to every active release branch and feature branch). Miss a branch and you have a fix that is in production but not in development.</p>
<p>In trunk-based development: you make the fix on trunk (or a very short-lived branch that is merged to trunk within hours), and it deploys through your normal pipeline. There is only one branch, so there is no question of whether the fix is everywhere — it is.</p>
<p>If you need to patch an older release, you use release branches — but these are not long-lived development branches. They are cut from trunk at release time and receive only cherry-picked critical fixes. They are maintenance branches, not development branches.</p>
<pre><code class="language-bash"># Cut a release branch when ready to release
git checkout -b release/1.0 main
git tag v1.0.0

# Later, if a hotfix is needed:
# First, fix it on trunk
git checkout main
git commit -am &quot;Fix critical payment processing bug (#789)&quot;

# Then cherry-pick to the release branch
git checkout release/1.0
git cherry-pick abc1234
git tag v1.0.1
</code></pre>
<h3 id="were-not-google.we-cant-do-this">&quot;We're Not Google. We Can't Do This.&quot;</h3>
<p>This is a common reflexive objection, and it is backwards. Google has 35,000 developers working in a single monorepo trunk. If trunk-based development scales to that, it certainly scales to your team.</p>
<p>But more importantly, trunk-based development actually scales down better than Gitflow. A small team benefits enormously from the simplicity. You do not need to maintain multiple long-lived branches, you do not need complex merge strategies, and you do not need to understand a complicated branching model. There is one branch. Everyone commits to it. Done.</p>
<p>Netflix, Microsoft (for many products), Google, Facebook (Meta), Amazon, Etsy, and Flickr all practice trunk-based development at scale. Etsy famously deploys to production more than 50 times per day.</p>
<p>Thierry de Pauw documents that trunk-based development has been successfully adopted by highly regulated industries including healthcare, gambling, and finance. The objection that &quot;this cannot work for regulated industries&quot; or &quot;this cannot work for large systems&quot; has been empirically disproven.</p>
<h3 id="our-developers-are-not-ready-for-this">&quot;Our Developers Are Not Ready for This.&quot;</h3>
<p>In a long-lived-branch workflow, developer mistakes are hidden on isolated branches until merge day, when they become everyone's problem simultaneously. In trunk-based development, mistakes are caught immediately because CI runs on every commit and the whole team sees the changes within hours.</p>
<p>The trunk-based model is actually more forgiving, not less. If you break something, you find out in minutes (because CI caught it or a teammate noticed), not in weeks (because the branch finally merged). The blast radius of any single commit is small because commits are small.</p>
<p>The real question is not whether your developers are ready but whether you trust them. Thierry de Pauw makes a profound point: mandatory pull requests inside a team borrow a review mechanism designed for untrusted outside contributors, implicitly signalling that the team owns the codebase yet is not trusted to commit to it directly. This creates a low-trust environment. Trunk-based development, where everyone commits to trunk, creates a high-trust environment. It reduces fear and blame. It is the team that owns quality, not individuals.</p>
<h3 id="practical-steps-to-adopt-trunk-based-development">Practical Steps to Adopt Trunk-Based Development</h3>
<p>If you are currently using long-lived branches and want to migrate, do not try to change everything at once. Here is a gradual adoption path:</p>
<p><strong>Week 1–2: Shorten branch lifetimes.</strong> Adopt a team rule: no branch lives longer than two days. If your work takes longer than that, break it into smaller pieces. Use feature flags to hide incomplete work.</p>
<p><strong>Week 3–4: Improve CI.</strong> Your CI pipeline must be fast and reliable. If it takes 30 minutes to run, developers will avoid committing frequently. Aim for a pipeline that completes in under 10 minutes. Run unit tests on every commit. Run integration tests on every merge to trunk.</p>
<p><strong>Week 5–6: Add feature flags infrastructure.</strong> Start simple — configuration-based flags in <code>appsettings.json</code>. You do not need a commercial feature flag service. As your needs grow, consider tools like Microsoft.FeatureManagement (free, open source).</p>
<pre><code class="language-csharp">// Using Microsoft.FeatureManagement (MIT licensed, free)
// Install: dotnet add package Microsoft.FeatureManagement.AspNetCore

using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement;

builder.Services.AddFeatureManagement();

// In appsettings.json:
{
  &quot;FeatureManagement&quot;: {
    &quot;NewDashboard&quot;: false,
    &quot;BetaSearch&quot;: true
  }
}

// In a controller or Razor page:
public class DashboardController : Controller
{
    private readonly IFeatureManager _features;

    public DashboardController(IFeatureManager features) =&gt;
        _features = features;

    public async Task&lt;IActionResult&gt; Index()
    {
        if (await _features.IsEnabledAsync(&quot;NewDashboard&quot;))
            return View(&quot;DashboardV2&quot;);
        else
            return View(&quot;Dashboard&quot;);
    }
}
</code></pre>
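<p>If even that package feels like too much for day one, a flag can start as nothing more than a lookup against a list of enabled names loaded at startup. Here is a minimal hand-rolled sketch (the class name and flag names are illustrative, not part of any library):</p>
<pre><code class="language-csharp">using System;

// Minimal hand-rolled feature flags: the set of enabled flag names is
// loaded once at startup (for example, bound from a section of
// appsettings.json); anything not listed is treated as off.
public sealed class SimpleFeatureFlags
{
    private readonly string[] _enabled;

    public SimpleFeatureFlags(string[] enabledFlagNames)
    {
        _enabled = enabledFlagNames;
    }

    // Unknown flags default to false, so a missing config entry can
    // never accidentally turn on an unfinished feature.
    public bool IsEnabled(string name)
    {
        return Array.IndexOf(_enabled, name) &gt;= 0;
    }
}
</code></pre>
<p>Swapping this for Microsoft.FeatureManagement later is mechanical, because call sites only ever ask whether a named flag is on.</p>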
<p><strong>Week 7–8: Delete long-lived branches.</strong> Merge or close every branch that is more than a few days old. Going forward, all new work happens on trunk (or very short-lived branches from trunk).</p>
<p><strong>Ongoing: Build the muscle.</strong> Trunk-based development is a skill. It gets easier with practice. Developers learn to make smaller, more focused commits. They learn to think about how to decompose large features into small, independently deployable pieces. This is not just a version control technique — it is a design discipline that makes your software more modular and your team more effective.</p>
<h3 id="configuration-for-a-trunk-based.net-repository">Configuration for a Trunk-Based .NET Repository</h3>
<p>Here is how to configure a repository to enforce trunk-based practices:</p>
<pre><code class="language-yaml"># .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6

      - uses: actions/setup-dotnet@v5
        with:
          dotnet-version: '10.0.x'

      - name: Restore
        run: dotnet restore

      - name: Build
        run: dotnet build --no-restore

      - name: Test
        run: dotnet test --no-build --verbosity normal

      - name: Format check
        run: dotnet format --verify-no-changes
</code></pre>
<p>On GitHub, configure branch protection rules for <code>main</code>:</p>
<ul>
<li>Require pull request reviews before merging (1 reviewer is enough — keep it lightweight)</li>
<li>Require status checks to pass (CI must be green)</li>
<li>Require branches to be up to date before merging</li>
<li>Automatically delete head branches after merge</li>
</ul>
<p>These rules ensure quality without creating bottlenecks.</p>
<h2 id="part-5-git-configuration-reference">Part 5: Git Configuration Reference</h2>
<h3 id="useful-global-configuration">Useful Global Configuration</h3>
<pre><code class="language-bash"># Rebase by default when pulling (avoids unnecessary merge commits)
git config --global pull.rebase true

# Auto-stash before rebase (saves uncommitted work automatically)
git config --global rebase.autoStash true

# Always push the current branch
git config --global push.default current

# Show more context in diffs
git config --global diff.context 5

# Use histogram diff algorithm (better results for many code changes)
git config --global diff.algorithm histogram

# Remember conflict resolutions (if you resolve the same conflict twice, Git remembers)
git config --global rerere.enabled true

# Prune remote-tracking branches on fetch
git config --global fetch.prune true

# Sign commits with GPG (optional but recommended for open source)
git config --global commit.gpgsign true
git config --global user.signingkey YOUR_GPG_KEY_ID

# Better diff hunk headers for C# files
# (also add "*.cs diff=csharp" to a .gitattributes file so Git applies it)
git config --global diff.csharp.xfuncname &quot;^[ \t]*(((static|public|internal|private|protected|new|virtual|sealed|override|unsafe|async|partial)[ \t]+)*[][&lt;&gt;@.~_[:alnum:]]+[ \t]+[&lt;&gt;@._[:alnum:]]+[ \t]*\\(.*\\))[ \t]*[{;]?&quot;
</code></pre>
<h3 id="commit-message-convention">Commit Message Convention</h3>
<p>A good commit message convention improves readability and enables automated changelogs. The Conventional Commits specification is widely adopted:</p>
<pre><code>&lt;type&gt;[optional scope]: &lt;description&gt;

[optional body]

[optional footer(s)]
</code></pre>
<p>Types include <code>feat</code> (new feature), <code>fix</code> (bug fix), <code>docs</code> (documentation), <code>style</code> (formatting), <code>refactor</code>, <code>test</code>, <code>chore</code> (build system, CI), and <code>perf</code> (performance improvement).</p>
<p>Examples:</p>
<pre><code>feat(auth): add JWT refresh token rotation

Implements automatic refresh token rotation on each use.
Old refresh tokens are invalidated immediately.

Closes #142
</code></pre>
<pre><code>fix(checkout): prevent double-charge on retry

The payment service was not checking for idempotency keys
when a user retried a failed payment.
</code></pre>
<pre><code>chore(ci): add dotnet format check to PR pipeline
</code></pre>
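<p>The convention can also be enforced mechanically with a <code>commit-msg</code> hook. Here is a minimal sketch (save as <code>.git/hooks/commit-msg</code> and mark it executable); the regular expression covers only the types listed above, not the full Conventional Commits grammar:</p>
<pre><code class="language-bash">#!/bin/sh
# .git/hooks/commit-msg -- Git passes the path of the message file as $1.
# Reject first lines that do not look like &quot;type(optional scope): description&quot;.
first_line=$(head -n 1 &quot;$1&quot;)

if ! printf '%s\n' &quot;$first_line&quot; | \
    grep -Eq '^(feat|fix|docs|style|refactor|test|chore|perf)(\([a-z0-9-]+\))?!?: .+'
then
  echo &quot;Commit message must follow Conventional Commits, for example:&quot;
  echo &quot;  feat(auth): add JWT refresh token rotation&quot;
  exit 1
fi
</code></pre>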
<h2 id="part-6-summary-and-further-reading">Part 6: Summary and Further Reading</h2>
<p>Git is a powerful tool, but like any tool, how you use it matters more than which features it has. The branching model you choose profoundly affects your team's velocity, quality, and happiness.</p>
<p>The evidence from a decade of DORA research is clear: trunk-based development with continuous integration leads to higher performance on every metric that matters — speed, stability, and recovery. Long-lived branches create integration risk, delay feedback, and slow you down. Feature flags and branch by abstraction give you every capability that long-lived branches provide, without the cost.</p>
<p>You do not need to be Google to benefit. You just need to trust your team, invest in CI, and commit to small, frequent changes. The hardest part is the cultural shift. The technology is the easy part — you already have everything you need in Git.</p>
<h3 id="sources-and-further-reading">Sources and Further Reading</h3>
<ul>
<li>Forsgren, Nicole, Jez Humble, and Gene Kim. <em>Accelerate: The Science of Lean Software and DevOps.</em> IT Revolution Press, 2018. The foundational research text.</li>
<li>DORA Research Program. <a href="https://dora.dev/research">dora.dev/research</a>. Ongoing annual State of DevOps reports.</li>
<li>DORA Metrics Guide. <a href="https://dora.dev/guides/dora-metrics-four-keys/">dora.dev/guides/dora-metrics-four-keys</a>. Authoritative definitions of the five key metrics.</li>
<li>Humble, Jez, and David Farley. <em>Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation.</em> Addison-Wesley, 2010.</li>
<li>Kim, Gene, Jez Humble, Patrick Debois, and John Willis. <em>The DevOps Handbook.</em> IT Revolution Press, 2016.</li>
<li>Hammant, Paul. <a href="https://trunkbaseddevelopment.com/">trunkbaseddevelopment.com</a>. The definitive reference site for trunk-based development practices and techniques.</li>
<li>Fowler, Martin. &quot;Branch by Abstraction.&quot; <a href="https://martinfowler.com/bliki/BranchByAbstraction.html">martinfowler.com/bliki/BranchByAbstraction.html</a>.</li>
<li>Fowler, Martin. &quot;Patterns for Managing Source Code Branches.&quot; <a href="https://martinfowler.com/articles/branching-patterns.html">martinfowler.com/articles/branching-patterns.html</a>. Comprehensive taxonomy of branching strategies.</li>
<li>Fowler, Martin. &quot;Continuous Integration.&quot; <a href="https://martinfowler.com/articles/continuousIntegration.html">martinfowler.com/articles/continuousIntegration.html</a>. Updated 2024 article on CI principles.</li>
<li>Hodgson, Pete. &quot;Feature Toggles (aka Feature Flags).&quot; <a href="https://martinfowler.com/articles/feature-toggles.html">martinfowler.com/articles/feature-toggles.html</a>. Comprehensive guide to feature flag categories and management.</li>
<li>de Pauw, Thierry. &quot;On the Benefits of Trunk-Based Development.&quot; <a href="https://thinkinglabs.io/articles/2025/07/21/on-the-benefits-of-trunk-based-development.html">thinkinglabs.io</a>. July 2025. A practitioner's summary of TBD benefits.</li>
<li>Atlassian. &quot;Trunk-Based Development.&quot; <a href="https://www.atlassian.com/continuous-delivery/continuous-integration/trunk-based-development">atlassian.com/continuous-delivery/continuous-integration/trunk-based-development</a>.</li>
<li>AWS Prescriptive Guidance. &quot;Advantages and Disadvantages of the Trunk Strategy.&quot; <a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/choosing-git-branch-approach/advantages-and-disadvantages-of-the-trunk-strategy.html">docs.aws.amazon.com</a>.</li>
<li>LaunchDarkly. &quot;Elite Performance with Trunk-Based Development.&quot; <a href="https://launchdarkly.com/blog/elite-performance-with-trunk-based-development/">launchdarkly.com</a>. Analysis of DORA data showing elite performers are 2.3x more likely to use TBD.</li>
<li>Toptal. &quot;Trunk-Based Development vs. Git Flow.&quot; <a href="https://www.toptal.com/software/trunk-based-development-git-flow">toptal.com</a>. Updated February 2026. Practical comparison with pros and cons.</li>
</ul>
]]></content:encoded>
      <category>git</category>
      <category>version-control</category>
      <category>trunk-based-development</category>
      <category>devops</category>
      <category>ci-cd</category>
      <category>best-practices</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Avalonia UI: The Complete Guide — From Hello World to Cross-Platform Mastery</title>
      <link>https://observermagazine.github.io/blog/avalonia-ui-comprehensive-guide</link>
      <description>Everything you need to know about Avalonia UI — what it is today, how to build desktop and mobile apps with AXAML and C#, why desktop and mobile need different layouts, what is coming in Avalonia 12, and the rendering revolution beyond. Packed with code examples.</description>
      <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/avalonia-ui-comprehensive-guide</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="what-is-avalonia-ui">What Is Avalonia UI?</h2>
<p>If you have ever built a website with HTML and CSS, you already understand the core idea behind Avalonia UI: you write a declarative markup language that describes your user interface, and a runtime engine renders it on screen. The difference is that instead of running inside a web browser, Avalonia renders directly onto the operating system's graphics surface using a GPU-accelerated engine. Your application is a native binary — not a browser tab.</p>
<p>Avalonia is an open-source, MIT-licensed UI framework for .NET. It lets you write applications in C# (or F#) with a XAML-based markup language and deploy them to Windows, macOS, Linux, iOS, Android, WebAssembly, and even bare-metal embedded Linux devices. The core framework has been in development since 2013, when Steven Kirk created it as a spiritual successor to Windows Presentation Foundation (WPF) at a time when WPF appeared abandoned by Microsoft.</p>
<p>Today, Avalonia has over 30,000 stars on GitHub, more than 87 million NuGet downloads, and is used in production by companies including JetBrains (their Rider IDE uses Avalonia for parts of its UI), Unity, GitHub, Schneider Electric, and Devolutions. It is one of the most active .NET open-source projects in the ecosystem.</p>
<h3 id="why-not-just-use-a-web-browser">Why Not Just Use a Web Browser?</h3>
<p>You might wonder: if we already know HTML and CSS, why learn another UI framework? There are several compelling reasons.</p>
<p>First, native performance. A Blazor WebAssembly app (like this very website) runs inside a browser engine, which itself runs inside your operating system. Avalonia cuts out the middleman — your C# code compiles to native machine code, and the UI renders directly through GPU-accelerated pipelines. The result is dramatically faster startup, lower memory usage, and smoother animations.</p>
<p>Second, offline-first by default. Native applications do not need a web server. They work on airplanes, in basements, and in places without connectivity.</p>
<p>Third, platform integration. Native apps can access the file system, system tray, notifications, Bluetooth, USB devices, and other hardware that web applications cannot (or can only access through limited, permission-gated APIs).</p>
<p>Fourth, pixel-perfect consistency. Because Avalonia draws every pixel itself (rather than wrapping native platform controls), your application looks identical on every platform. There are no surprises when a button renders differently on Android versus iOS.</p>
<h3 id="how-avalonia-compares-to-other.net-ui-frameworks">How Avalonia Compares to Other .NET UI Frameworks</h3>
<p>There are several .NET UI frameworks competing for developer attention in 2026. Here is how they compare at a high level.</p>
<p><strong>WPF (Windows Presentation Foundation)</strong> is Microsoft's original XAML-based desktop framework. It is mature and powerful but only runs on Windows. If you know WPF, Avalonia will feel very familiar — the API is intentionally close to WPF, though it is not a 1:1 copy. Avalonia has improvements in its styling system, property system, and template model.</p>
<p><strong>.NET MAUI (Multi-platform App UI)</strong> is Microsoft's official cross-platform framework. Unlike Avalonia, MAUI wraps native platform controls — a Button on Android is an actual Android Button widget, while a Button on iOS is a UIButton. This means your app looks &quot;native&quot; on each platform, but it also means you are at the mercy of each platform's quirks. MAUI has struggled with adoption, bugs, and slow updates. In early 2026, developers reported significant regressions in the .NET 9 to .NET 10 transition.</p>
<p><strong>Uno Platform</strong> is another cross-platform option that targets UWP/WinUI APIs. It is capable but has a different design philosophy from Avalonia.</p>
<p><strong>Avalonia</strong> takes the &quot;drawn UI&quot; approach, similar to Flutter. It renders everything itself using SkiaSharp (a .NET binding to the same Skia graphics library that powers Chrome and Flutter), giving you complete control over every pixel. This approach provides more visual consistency across platforms at the cost of not looking &quot;native&quot; by default — though Avalonia ships with a Fluent theme that closely matches modern Windows/macOS aesthetics.</p>
<h2 id="getting-started-your-first-avalonia-application">Getting Started: Your First Avalonia Application</h2>
<h3 id="prerequisites">Prerequisites</h3>
<p>You need the .NET SDK installed. As of this writing, .NET 10 is the current LTS release. You can verify your installation:</p>
<pre><code class="language-bash">dotnet --version
# Should output something like 10.0.104
</code></pre>
<h3 id="installing-the-templates">Installing the Templates</h3>
<p>Avalonia provides project templates through the <code>dotnet new</code> system:</p>
<pre><code class="language-bash">dotnet new install Avalonia.Templates
</code></pre>
<p>This installs several templates. The one you will use most often is <code>avalonia.mvvm</code>, which sets up a project with the Model-View-ViewModel pattern:</p>
<pre><code class="language-bash">dotnet new avalonia.mvvm -o MyFirstAvaloniaApp
cd MyFirstAvaloniaApp
dotnet run
</code></pre>
<p>That is it. You should see a window appear with a greeting message. If you are on Linux, it works. If you are on macOS, it works. If you are on Windows, it works. Same source code on every platform; only the compiled binary is platform-specific.</p>
<h3 id="understanding-the-project-structure">Understanding the Project Structure</h3>
<p>After running the template, your project looks like this:</p>
<pre><code>MyFirstAvaloniaApp/
├── MyFirstAvaloniaApp.csproj
├── Program.cs
├── App.axaml
├── App.axaml.cs
├── ViewLocator.cs
├── ViewModels/
│   ├── ViewModelBase.cs
│   └── MainWindowViewModel.cs
├── Views/
│   ├── MainWindow.axaml
│   └── MainWindow.axaml.cs
└── Assets/
    └── avalonia-logo.ico
</code></pre>
<p>Notice the <code>.axaml</code> file extension. This stands for &quot;Avalonia XAML&quot; and is used instead of plain <code>.xaml</code> to avoid conflicts with WPF and UWP XAML files in IDE tooling. The syntax inside is nearly identical to WPF XAML, with some improvements.</p>
<h3 id="the-project-file">The Project File</h3>
<p>Your <code>.csproj</code> file targets .NET 10 and references the Avalonia NuGet packages:</p>
<pre><code class="language-xml">&lt;Project Sdk=&quot;Microsoft.NET.Sdk&quot;&gt;

  &lt;PropertyGroup&gt;
    &lt;OutputType&gt;WinExe&lt;/OutputType&gt;
    &lt;TargetFramework&gt;net10.0&lt;/TargetFramework&gt;
    &lt;Nullable&gt;enable&lt;/Nullable&gt;
    &lt;BuiltInComInteropSupport&gt;true&lt;/BuiltInComInteropSupport&gt;
    &lt;ApplicationManifest&gt;app.manifest&lt;/ApplicationManifest&gt;
    &lt;AvaloniaUseCompiledBindingsByDefault&gt;true&lt;/AvaloniaUseCompiledBindingsByDefault&gt;
  &lt;/PropertyGroup&gt;

  &lt;ItemGroup&gt;
    &lt;PackageReference Include=&quot;Avalonia&quot; Version=&quot;11.3.0&quot; /&gt;
    &lt;PackageReference Include=&quot;Avalonia.Desktop&quot; Version=&quot;11.3.0&quot; /&gt;
    &lt;PackageReference Include=&quot;Avalonia.Themes.Fluent&quot; Version=&quot;11.3.0&quot; /&gt;
    &lt;PackageReference Include=&quot;Avalonia.Fonts.Inter&quot; Version=&quot;11.3.0&quot; /&gt;
    &lt;PackageReference Include=&quot;CommunityToolkit.Mvvm&quot; Version=&quot;8.4.0&quot; /&gt;

    &lt;!-- Condition below is used to add dependencies for previewer --&gt;
    &lt;PackageReference Include=&quot;Avalonia.Diagnostics&quot; Version=&quot;11.3.0&quot;
                      Condition=&quot;'$(Configuration)' == 'Debug'&quot; /&gt;
  &lt;/ItemGroup&gt;

&lt;/Project&gt;
</code></pre>
<p>The <code>AvaloniaUseCompiledBindingsByDefault</code> property is important — it tells the XAML compiler to use compiled bindings by default, which are faster than reflection-based bindings and catch errors at build time rather than runtime. In Avalonia 12, this becomes <code>true</code> by default even if you do not set it.</p>
<h3 id="program.cs-the-entry-point">Program.cs — The Entry Point</h3>
<pre><code class="language-csharp">using Avalonia;
using System;

namespace MyFirstAvaloniaApp;

sealed class Program
{
    // The entry point. Don't use any Avalonia, third-party APIs
    // or any SynchronizationContext-reliant code before AppMain
    // is called; things won't be initialized yet and stuff
    // might break.
    [STAThread]
    public static void Main(string[] args) =&gt;
        BuildAvaloniaApp()
            .StartWithClassicDesktopLifetime(args);

    // Avalonia configuration; also used by the visual designer.
    public static AppBuilder BuildAvaloniaApp() =&gt;
        AppBuilder.Configure&lt;App&gt;()
            .UsePlatformDetect()
            .WithInterFont()
            .LogToTrace();
}
</code></pre>
<p>This is conceptually similar to a web application's <code>Program.cs</code> where you configure services and middleware. Here you configure the Avalonia application builder. <code>UsePlatformDetect()</code> automatically selects the correct rendering backend for your operating system. <code>WithInterFont()</code> loads the Inter font family. <code>LogToTrace()</code> sends log output to <code>System.Diagnostics.Trace</code>.</p>
<h3 id="app.axaml-the-application-root">App.axaml — The Application Root</h3>
<pre><code class="language-xml">&lt;Application xmlns=&quot;https://github.com/avaloniaui&quot;
             xmlns:x=&quot;http://schemas.microsoft.com/winfx/2006/xaml&quot;
             x:Class=&quot;MyFirstAvaloniaApp.App&quot;
             RequestedThemeVariant=&quot;Default&quot;&gt;
    &lt;!-- &quot;Default&quot; follows system theme; use &quot;Dark&quot; or &quot;Light&quot; to force --&gt;

    &lt;Application.DataTemplates&gt;
        &lt;local:ViewLocator /&gt;
    &lt;/Application.DataTemplates&gt;

    &lt;Application.Styles&gt;
        &lt;FluentTheme /&gt;
    &lt;/Application.Styles&gt;
&lt;/Application&gt;
</code></pre>
<p>Two namespace declarations are required in every AXAML file:</p>
<ul>
<li><code>xmlns=&quot;https://github.com/avaloniaui&quot;</code> — the Avalonia UI namespace (equivalent to the default HTML namespace)</li>
<li><code>xmlns:x=&quot;http://schemas.microsoft.com/winfx/2006/xaml&quot;</code> — the XAML language namespace (for things like <code>x:Class</code>, <code>x:Name</code>, <code>x:Key</code>)</li>
</ul>
<p>The <code>&lt;FluentTheme /&gt;</code> element loads a modern Fluent Design theme that looks good on all platforms. Avalonia also ships with a &quot;Simple&quot; theme if you prefer a more minimal starting point.</p>
<h2 id="axaml-fundamentals-the-markup-language">AXAML Fundamentals: The Markup Language</h2>
<p>If you know HTML, AXAML will feel somewhat familiar. Both are XML-based markup languages for describing visual elements. But there are important conceptual differences.</p>
<h3 id="elements-are-controls">Elements Are Controls</h3>
<p>In HTML, a <code>&lt;div&gt;</code> is a generic container. In AXAML, every element maps to a specific .NET class. A <code>&lt;Button&gt;</code> is an instance of <code>Avalonia.Controls.Button</code>. A <code>&lt;TextBlock&gt;</code> is an instance of <code>Avalonia.Controls.TextBlock</code>. There is no generic &quot;div&quot; equivalent — instead, you use layout panels like <code>&lt;StackPanel&gt;</code>, <code>&lt;Grid&gt;</code>, <code>&lt;DockPanel&gt;</code>, and <code>&lt;WrapPanel&gt;</code>.</p>
<h3 id="a-simple-window">A Simple Window</h3>
<pre><code class="language-xml">&lt;Window xmlns=&quot;https://github.com/avaloniaui&quot;
        xmlns:x=&quot;http://schemas.microsoft.com/winfx/2006/xaml&quot;
        x:Class=&quot;MyFirstAvaloniaApp.Views.MainWindow&quot;
        Title=&quot;My First Avalonia App&quot;
        Width=&quot;600&quot; Height=&quot;400&quot;&gt;

    &lt;StackPanel Margin=&quot;20&quot; Spacing=&quot;10&quot;&gt;
        &lt;TextBlock Text=&quot;Hello, Avalonia!&quot;
                   FontSize=&quot;24&quot;
                   FontWeight=&quot;Bold&quot; /&gt;

        &lt;TextBlock Text=&quot;This is a cross-platform .NET application.&quot;
                   Foreground=&quot;Gray&quot; /&gt;

        &lt;Button Content=&quot;Click Me&quot;
                HorizontalAlignment=&quot;Left&quot; /&gt;
    &lt;/StackPanel&gt;

&lt;/Window&gt;
</code></pre>
<p>Compare this to equivalent HTML:</p>
<pre><code class="language-html">&lt;div style=&quot;margin: 20px; display: flex; flex-direction: column; gap: 10px;&quot;&gt;
    &lt;h1 style=&quot;font-size: 24px; font-weight: bold;&quot;&gt;Hello, Avalonia!&lt;/h1&gt;
    &lt;p style=&quot;color: gray;&quot;&gt;This is a cross-platform .NET application.&lt;/p&gt;
    &lt;button&gt;Click Me&lt;/button&gt;
&lt;/div&gt;
</code></pre>
<p>The structure is similar, but AXAML uses attributes for properties (<code>FontSize=&quot;24&quot;</code>) instead of CSS. We will see later how Avalonia has its own styling system that separates style from structure, similar to how CSS works.</p>
<h3 id="data-binding-connecting-ui-to-code">Data Binding — Connecting UI to Code</h3>
<p>Data binding is the mechanism that connects your AXAML markup to your C# code. If you have used JavaScript frameworks like React or Vue, data binding is conceptually similar to reactive state — when the underlying data changes, the UI automatically updates.</p>
<p>Here is a simple example. First, the ViewModel (the C# code):</p>
<pre><code class="language-csharp">using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;

namespace MyFirstAvaloniaApp.ViewModels;

public partial class MainWindowViewModel : ViewModelBase
{
    [ObservableProperty]
    private string _greeting = &quot;Hello, Avalonia!&quot;;

    [ObservableProperty]
    private int _clickCount;

    [RelayCommand]
    private void IncrementCount()
    {
        ClickCount++;
        Greeting = $&quot;You clicked {ClickCount} time(s)!&quot;;
    }
}
</code></pre>
<p>The <code>[ObservableProperty]</code> attribute (from CommunityToolkit.Mvvm) is a source generator that automatically creates a public property with change notification. When <code>ClickCount</code> changes, any UI element bound to it automatically updates. The <code>[RelayCommand]</code> attribute generates an <code>ICommand</code> property that can be bound to a button.</p>
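<p>It helps to see roughly what the generator produces. The hand-written equivalent for <code>Greeting</code> looks approximately like this (a simplified sketch — the real generated code also emits partial methods and other attribute plumbing):</p>
<pre><code class="language-csharp">using System.ComponentModel;

// Approximately what [ObservableProperty] generates for _greeting:
// a public property that raises PropertyChanged when the value changes,
// which is the signal that bound UI elements listen for.
public class MainWindowViewModelByHand : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler? PropertyChanged;

    private string _greeting = &quot;Hello, Avalonia!&quot;;

    public string Greeting
    {
        get { return _greeting; }
        set
        {
            if (_greeting == value) return;  // skip no-op assignments
            _greeting = value;
            PropertyChanged?.Invoke(this,
                new PropertyChangedEventArgs(nameof(Greeting)));
        }
    }
}
</code></pre>
<p>The source generator saves you from writing this boilerplate for every property while keeping the change-notification behavior identical.</p>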
<p>Now, the AXAML that binds to this ViewModel:</p>
<pre><code class="language-xml">&lt;Window xmlns=&quot;https://github.com/avaloniaui&quot;
        xmlns:x=&quot;http://schemas.microsoft.com/winfx/2006/xaml&quot;
        xmlns:vm=&quot;using:MyFirstAvaloniaApp.ViewModels&quot;
        x:Class=&quot;MyFirstAvaloniaApp.Views.MainWindow&quot;
        x:DataType=&quot;vm:MainWindowViewModel&quot;
        Title=&quot;My First Avalonia App&quot;
        Width=&quot;600&quot; Height=&quot;400&quot;&gt;

    &lt;Design.DataContext&gt;
        &lt;!-- Provides design-time data for the IDE previewer --&gt;
        &lt;vm:MainWindowViewModel /&gt;
    &lt;/Design.DataContext&gt;

    &lt;StackPanel Margin=&quot;20&quot; Spacing=&quot;10&quot;
                HorizontalAlignment=&quot;Center&quot;
                VerticalAlignment=&quot;Center&quot;&gt;

        &lt;TextBlock Text=&quot;{Binding Greeting}&quot;
                   FontSize=&quot;24&quot;
                   FontWeight=&quot;Bold&quot;
                   HorizontalAlignment=&quot;Center&quot; /&gt;

        &lt;TextBlock Text=&quot;{Binding ClickCount, StringFormat='Count: {0}'}&quot;
                   HorizontalAlignment=&quot;Center&quot;
                   Foreground=&quot;Gray&quot; /&gt;

        &lt;Button Content=&quot;Click Me&quot;
                Command=&quot;{Binding IncrementCountCommand}&quot;
                HorizontalAlignment=&quot;Center&quot; /&gt;
    &lt;/StackPanel&gt;

&lt;/Window&gt;
</code></pre>
<p>Key things to notice:</p>
<ul>
<li><code>xmlns:vm=&quot;using:MyFirstAvaloniaApp.ViewModels&quot;</code> declares a namespace prefix so we can reference our C# types in AXAML</li>
<li><code>x:DataType=&quot;vm:MainWindowViewModel&quot;</code> tells the compiled binding system what type to expect as the DataContext. This enables build-time validation of your bindings.</li>
<li><code>{Binding Greeting}</code> is a markup extension that binds the <code>Text</code> property to the <code>Greeting</code> property on the ViewModel</li>
<li><code>{Binding IncrementCountCommand}</code> binds the button's Command to the auto-generated command from <code>[RelayCommand]</code></li>
<li><code>&lt;Design.DataContext&gt;</code> provides a ViewModel instance for the IDE's live previewer — it does not affect runtime behavior</li>
</ul>
<h2 id="layout-system-panels-and-containers">Layout System: Panels and Containers</h2>
<p>Avalonia provides several layout panels, each with a different strategy for arranging child controls. If you are coming from CSS, think of these as pre-built <code>display</code> modes.</p>
<h3 id="stackpanel-flexbox-columnrow">StackPanel — Flexbox Column/Row</h3>
<p><code>StackPanel</code> arranges children in a single line, either vertically (default) or horizontally:</p>
<pre><code class="language-xml">&lt;!-- Vertical stack (like CSS flex-direction: column) --&gt;
&lt;StackPanel Spacing=&quot;10&quot;&gt;
    &lt;TextBlock Text=&quot;First&quot; /&gt;
    &lt;TextBlock Text=&quot;Second&quot; /&gt;
    &lt;TextBlock Text=&quot;Third&quot; /&gt;
&lt;/StackPanel&gt;

&lt;!-- Horizontal stack (like CSS flex-direction: row) --&gt;
&lt;StackPanel Orientation=&quot;Horizontal&quot; Spacing=&quot;10&quot;&gt;
    &lt;Button Content=&quot;One&quot; /&gt;
    &lt;Button Content=&quot;Two&quot; /&gt;
    &lt;Button Content=&quot;Three&quot; /&gt;
&lt;/StackPanel&gt;
</code></pre>
<h3 id="grid-css-grid-equivalent">Grid — CSS Grid Equivalent</h3>
<p><code>Grid</code> divides space into rows and columns. This is the most powerful and commonly used layout panel:</p>
<pre><code class="language-xml">&lt;Grid RowDefinitions=&quot;Auto,*,Auto&quot;
      ColumnDefinitions=&quot;200,*&quot;
      Margin=&quot;10&quot;&gt;

    &lt;!-- Header spanning both columns --&gt;
    &lt;TextBlock Grid.Row=&quot;0&quot; Grid.ColumnSpan=&quot;2&quot;
               Text=&quot;Application Header&quot;
               FontSize=&quot;20&quot; FontWeight=&quot;Bold&quot;
               Margin=&quot;0,0,0,10&quot; /&gt;

    &lt;!-- Sidebar --&gt;
    &lt;ListBox Grid.Row=&quot;1&quot; Grid.Column=&quot;0&quot;
             Margin=&quot;0,0,10,0&quot;&gt;
        &lt;ListBoxItem Content=&quot;Dashboard&quot; /&gt;
        &lt;ListBoxItem Content=&quot;Settings&quot; /&gt;
        &lt;ListBoxItem Content=&quot;Profile&quot; /&gt;
    &lt;/ListBox&gt;

    &lt;!-- Main content area --&gt;
    &lt;Border Grid.Row=&quot;1&quot; Grid.Column=&quot;1&quot;
            Background=&quot;#f0f0f0&quot;
            CornerRadius=&quot;8&quot;
            Padding=&quot;20&quot;&gt;
        &lt;TextBlock Text=&quot;Main content goes here&quot;
                   VerticalAlignment=&quot;Center&quot;
                   HorizontalAlignment=&quot;Center&quot; /&gt;
    &lt;/Border&gt;

    &lt;!-- Footer spanning both columns --&gt;
    &lt;TextBlock Grid.Row=&quot;2&quot; Grid.ColumnSpan=&quot;2&quot;
               Text=&quot;© 2026 My App&quot;
               HorizontalAlignment=&quot;Center&quot;
               Margin=&quot;0,10,0,0&quot;
               Foreground=&quot;Gray&quot; /&gt;
&lt;/Grid&gt;
</code></pre>
<p>Row and column definitions use a size syntax:</p>
<ul>
<li><code>Auto</code> — sizes to fit content (like CSS <code>auto</code>)</li>
<li><code>*</code> — takes remaining space proportionally (like CSS <code>1fr</code>)</li>
<li><code>2*</code> — takes twice the remaining space (like CSS <code>2fr</code>)</li>
<li><code>200</code> — fixed pixel size</li>
</ul>
<h3 id="dockpanel-edge-docking">DockPanel — Edge Docking</h3>
<p><code>DockPanel</code> docks children to the edges of the container. The last child fills the remaining space:</p>
<pre><code class="language-xml">&lt;DockPanel&gt;
    &lt;!-- Top toolbar --&gt;
    &lt;Menu DockPanel.Dock=&quot;Top&quot;&gt;
        &lt;MenuItem Header=&quot;File&quot;&gt;
            &lt;MenuItem Header=&quot;Open&quot; /&gt;
            &lt;MenuItem Header=&quot;Save&quot; /&gt;
            &lt;Separator /&gt;
            &lt;MenuItem Header=&quot;Exit&quot; /&gt;
        &lt;/MenuItem&gt;
        &lt;MenuItem Header=&quot;Edit&quot;&gt;
            &lt;MenuItem Header=&quot;Undo&quot; /&gt;
            &lt;MenuItem Header=&quot;Redo&quot; /&gt;
        &lt;/MenuItem&gt;
    &lt;/Menu&gt;

    &lt;!-- Bottom status bar --&gt;
    &lt;Border DockPanel.Dock=&quot;Bottom&quot;
            Background=&quot;#e0e0e0&quot; Padding=&quot;5&quot;&gt;
        &lt;TextBlock Text=&quot;Ready&quot; FontSize=&quot;12&quot; /&gt;
    &lt;/Border&gt;

    &lt;!-- Left sidebar --&gt;
    &lt;Border DockPanel.Dock=&quot;Left&quot;
            Width=&quot;200&quot; Background=&quot;#f5f5f5&quot;
            Padding=&quot;10&quot;&gt;
        &lt;TextBlock Text=&quot;Navigation&quot; /&gt;
    &lt;/Border&gt;

    &lt;!-- Remaining space = main content --&gt;
    &lt;Border Padding=&quot;20&quot;&gt;
        &lt;TextBlock Text=&quot;Main Content Area&quot; /&gt;
    &lt;/Border&gt;
&lt;/DockPanel&gt;
</code></pre>
<h3 id="wrappanel-flex-wrap">WrapPanel — Flex Wrap</h3>
<p><code>WrapPanel</code> arranges children left to right, wrapping to the next line when space runs out:</p>
<pre><code class="language-xml">&lt;WrapPanel Orientation=&quot;Horizontal&quot;&gt;
    &lt;Button Content=&quot;Tag 1&quot; Margin=&quot;4&quot; /&gt;
    &lt;Button Content=&quot;Tag 2&quot; Margin=&quot;4&quot; /&gt;
    &lt;Button Content=&quot;Tag 3&quot; Margin=&quot;4&quot; /&gt;
    &lt;Button Content=&quot;Long Tag Name&quot; Margin=&quot;4&quot; /&gt;
    &lt;Button Content=&quot;Another&quot; Margin=&quot;4&quot; /&gt;
    &lt;!-- These will wrap to the next line if the container is too narrow --&gt;
&lt;/WrapPanel&gt;
</code></pre>
<h3 id="uniformgrid-equal-size-grid">UniformGrid — Equal-Size Grid</h3>
<p><code>UniformGrid</code> creates a grid where every cell is the same size:</p>
<pre><code class="language-xml">&lt;UniformGrid Columns=&quot;3&quot; Rows=&quot;2&quot;&gt;
    &lt;Button Content=&quot;1&quot; /&gt;
    &lt;Button Content=&quot;2&quot; /&gt;
    &lt;Button Content=&quot;3&quot; /&gt;
    &lt;Button Content=&quot;4&quot; /&gt;
    &lt;Button Content=&quot;5&quot; /&gt;
    &lt;Button Content=&quot;6&quot; /&gt;
&lt;/UniformGrid&gt;
</code></pre>
<h2 id="styling-avalonias-css-like-system">Styling: Avalonia's CSS-Like System</h2>
<p>Avalonia has a styling system that is conceptually closer to CSS than WPF's styling. Styles use selectors (similar to CSS selectors) to target controls.</p>
<h3 id="basic-styles">Basic Styles</h3>
<pre><code class="language-xml">&lt;Window.Styles&gt;
    &lt;!-- Target all TextBlocks --&gt;
    &lt;Style Selector=&quot;TextBlock&quot;&gt;
        &lt;Setter Property=&quot;FontFamily&quot; Value=&quot;Inter&quot; /&gt;
        &lt;Setter Property=&quot;FontSize&quot; Value=&quot;14&quot; /&gt;
    &lt;/Style&gt;

    &lt;!-- Target buttons with the &quot;primary&quot; class --&gt;
    &lt;Style Selector=&quot;Button.primary&quot;&gt;
        &lt;Setter Property=&quot;Background&quot; Value=&quot;#0078d4&quot; /&gt;
        &lt;Setter Property=&quot;Foreground&quot; Value=&quot;White&quot; /&gt;
        &lt;Setter Property=&quot;CornerRadius&quot; Value=&quot;4&quot; /&gt;
        &lt;Setter Property=&quot;Padding&quot; Value=&quot;16,8&quot; /&gt;
    &lt;/Style&gt;

    &lt;!-- Hover state (like CSS :hover) --&gt;
    &lt;Style Selector=&quot;Button.primary:pointerover /template/ ContentPresenter&quot;&gt;
        &lt;Setter Property=&quot;Background&quot; Value=&quot;#106ebe&quot; /&gt;
    &lt;/Style&gt;

    &lt;!-- Target by name (like CSS #id) --&gt;
    &lt;Style Selector=&quot;TextBlock#PageTitle&quot;&gt;
        &lt;Setter Property=&quot;FontSize&quot; Value=&quot;28&quot; /&gt;
        &lt;Setter Property=&quot;FontWeight&quot; Value=&quot;Bold&quot; /&gt;
    &lt;/Style&gt;
&lt;/Window.Styles&gt;

&lt;!-- Usage --&gt;
&lt;StackPanel&gt;
    &lt;TextBlock x:Name=&quot;PageTitle&quot; Text=&quot;Dashboard&quot; /&gt;
    &lt;Button Classes=&quot;primary&quot; Content=&quot;Save Changes&quot; /&gt;
    &lt;Button Content=&quot;Cancel&quot; /&gt;
&lt;/StackPanel&gt;
</code></pre>
<p>Notice the CSS-like selector syntax:</p>
<ul>
<li><code>TextBlock</code> — targets all TextBlock controls (like CSS element selectors)</li>
<li><code>Button.primary</code> — targets Buttons with the &quot;primary&quot; class (like CSS <code>.primary</code>)</li>
<li><code>TextBlock#PageTitle</code> — targets by name (like CSS <code>#id</code>)</li>
<li><code>:pointerover</code> — pseudo-class for mouse hover (like CSS <code>:hover</code>)</li>
<li><code>/template/</code> — navigates into a control's template (unique to Avalonia)</li>
</ul>
<h3 id="styles-in-external-files">Styles in External Files</h3>
<p>Just like CSS can be in external files, Avalonia styles can live in separate <code>.axaml</code> files:</p>
<pre><code class="language-xml">&lt;!-- Styles/AppStyles.axaml --&gt;
&lt;Styles xmlns=&quot;https://github.com/avaloniaui&quot;
        xmlns:x=&quot;http://schemas.microsoft.com/winfx/2006/xaml&quot;&gt;

    &lt;Style Selector=&quot;Button.danger&quot;&gt;
        &lt;Setter Property=&quot;Background&quot; Value=&quot;#dc2626&quot; /&gt;
        &lt;Setter Property=&quot;Foreground&quot; Value=&quot;White&quot; /&gt;
    &lt;/Style&gt;

    &lt;Style Selector=&quot;Button.danger:pointerover /template/ ContentPresenter&quot;&gt;
        &lt;Setter Property=&quot;Background&quot; Value=&quot;#b91c1c&quot; /&gt;
    &lt;/Style&gt;

&lt;/Styles&gt;
</code></pre>
<p>Then include it in your <code>App.axaml</code>:</p>
<pre><code class="language-xml">&lt;Application.Styles&gt;
    &lt;FluentTheme /&gt;
    &lt;StyleInclude Source=&quot;/Styles/AppStyles.axaml&quot; /&gt;
&lt;/Application.Styles&gt;
</code></pre>
<h2 id="the-mvvm-pattern-separating-concerns">The MVVM Pattern: Separating Concerns</h2>
<p>MVVM (Model-View-ViewModel) is the standard architecture pattern for Avalonia applications. It is analogous to MVC in web development but tailored for data-binding UI frameworks.</p>
<ul>
<li><strong>Model</strong> — your domain objects and business logic (like your database entities and services in a web app)</li>
<li><strong>View</strong> — the AXAML markup and code-behind (like your Razor/HTML templates)</li>
<li><strong>ViewModel</strong> — the intermediary that exposes data and commands to the View (like a page model or controller)</li>
</ul>
<h3 id="a-complete-mvvm-example-todo-list">A Complete MVVM Example: Todo List</h3>
<p>Here is a full example of a todo list application demonstrating MVVM:</p>
<p><strong>Model:</strong></p>
<pre><code class="language-csharp">namespace MyApp.Models;

public class TodoItem
{
    public string Title { get; set; } = &quot;&quot;;
    public bool IsCompleted { get; set; }
}
</code></pre>
<p><strong>ViewModel:</strong></p>
<pre><code class="language-csharp">using System.Collections.ObjectModel;
using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;
using MyApp.Models;

namespace MyApp.ViewModels;

public partial class TodoViewModel : ViewModelBase
{
    [ObservableProperty]
    private string _newItemTitle = &quot;&quot;;

    public ObservableCollection&lt;TodoItem&gt; Items { get; } = new()
    {
        new TodoItem { Title = &quot;Learn Avalonia&quot;, IsCompleted = false },
        new TodoItem { Title = &quot;Build an app&quot;, IsCompleted = false },
        new TodoItem { Title = &quot;Deploy everywhere&quot;, IsCompleted = false }
    };

    [RelayCommand(CanExecute = nameof(CanAddItem))]
    private void AddItem()
    {
        Items.Add(new TodoItem { Title = NewItemTitle });
        NewItemTitle = &quot;&quot;;
    }

    private bool CanAddItem() =&gt;
        !string.IsNullOrWhiteSpace(NewItemTitle);

    // [ObservableProperty] generates this partial hook for NewItemTitle.
    // Calling NotifyCanExecuteChanged here makes the command re-evaluate
    // CanAddItem (and enable/disable the Add button) as the user types.
    partial void OnNewItemTitleChanged(string value) =&gt;
        AddItemCommand.NotifyCanExecuteChanged();

    [RelayCommand]
    private void RemoveItem(TodoItem item) =&gt;
        Items.Remove(item);

    [RelayCommand]
    private void ToggleItem(TodoItem item) =&gt;
        item.IsCompleted = !item.IsCompleted;
}
</code></pre>
<p><strong>View (AXAML):</strong></p>
<pre><code class="language-xml">&lt;UserControl xmlns=&quot;https://github.com/avaloniaui&quot;
             xmlns:x=&quot;http://schemas.microsoft.com/winfx/2006/xaml&quot;
             xmlns:vm=&quot;using:MyApp.ViewModels&quot;
             xmlns:m=&quot;using:MyApp.Models&quot;
             x:Class=&quot;MyApp.Views.TodoView&quot;
             x:DataType=&quot;vm:TodoViewModel&quot;&gt;

    &lt;DockPanel Margin=&quot;20&quot;&gt;
        &lt;!-- Header --&gt;
        &lt;TextBlock DockPanel.Dock=&quot;Top&quot;
                   Text=&quot;Todo List&quot;
                   FontSize=&quot;24&quot; FontWeight=&quot;Bold&quot;
                   Margin=&quot;0,0,0,16&quot; /&gt;

        &lt;!-- Input area --&gt;
        &lt;Grid DockPanel.Dock=&quot;Top&quot;
              ColumnDefinitions=&quot;*,Auto&quot;
              Margin=&quot;0,0,0,16&quot;&gt;
            &lt;TextBox Grid.Column=&quot;0&quot;
                     Text=&quot;{Binding NewItemTitle}&quot;
                     Watermark=&quot;What needs to be done?&quot;
                     Margin=&quot;0,0,8,0&quot; /&gt;
            &lt;Button Grid.Column=&quot;1&quot;
                    Content=&quot;Add&quot;
                    Command=&quot;{Binding AddItemCommand}&quot;
                    Classes=&quot;primary&quot; /&gt;
        &lt;/Grid&gt;

        &lt;!-- Todo list --&gt;
        &lt;ListBox ItemsSource=&quot;{Binding Items}&quot;
                 x:DataType=&quot;vm:TodoViewModel&quot;&gt;
            &lt;ListBox.ItemTemplate&gt;
                &lt;DataTemplate x:DataType=&quot;m:TodoItem&quot;&gt;
                    &lt;Grid ColumnDefinitions=&quot;Auto,*,Auto&quot;&gt;
                        &lt;CheckBox Grid.Column=&quot;0&quot;
                                  IsChecked=&quot;{Binding IsCompleted}&quot;
                                  Margin=&quot;0,0,8,0&quot; /&gt;
                        &lt;TextBlock Grid.Column=&quot;1&quot;
                                   Text=&quot;{Binding Title}&quot;
                                   VerticalAlignment=&quot;Center&quot; /&gt;
                        &lt;Button Grid.Column=&quot;2&quot;
                                Content=&quot;✕&quot;
                                Command=&quot;{Binding
                                    $parent[ListBox].((vm:TodoViewModel)DataContext).RemoveItemCommand}&quot;
                                CommandParameter=&quot;{Binding}&quot;
                                Classes=&quot;danger&quot;
                                Padding=&quot;4,2&quot; /&gt;
                    &lt;/Grid&gt;
                &lt;/DataTemplate&gt;
            &lt;/ListBox.ItemTemplate&gt;
        &lt;/ListBox&gt;
    &lt;/DockPanel&gt;

&lt;/UserControl&gt;
</code></pre>
<p>Notice the <code>$parent[ListBox]</code> syntax in the Remove button's command binding. It walks up the logical tree to the enclosing ListBox, casts its DataContext to <code>TodoViewModel</code>, and invokes <code>RemoveItemCommand</code> on it. This is how you reach the parent ViewModel from within an <code>ItemTemplate</code>, where the local DataContext is the individual <code>TodoItem</code>. In web terms, it is similar to a React child component calling a callback passed down from its parent.</p>
<h2 id="desktop-vs.mobile-why-you-need-different-layouts">Desktop vs. Mobile: Why You Need Different Layouts</h2>
<p>This is one of the most important sections of this article. If you are coming from web development, you are accustomed to responsive design — writing one set of HTML and CSS that adapts to different screen sizes using media queries. Avalonia can do something similar, but there are fundamental differences between desktop and mobile that go beyond screen size.</p>
<h3 id="the-core-differences">The Core Differences</h3>
<p><strong>Input model.</strong> Desktop users have a mouse with hover states, right-click context menus, precise cursor positioning, and keyboard shortcuts. Mobile users have touch with tap, swipe, pinch-to-zoom, and no hover state. A button that is 24 pixels wide works fine with a mouse cursor but is impossibly small for a human finger.</p>
<p><strong>Screen real estate.</strong> A desktop monitor might be 1920×1080 or larger. A phone screen is typically 360-430 points wide in portrait mode. You simply cannot show the same information density on both.</p>
<p><strong>Navigation paradigm.</strong> Desktop apps typically use menus, toolbars, and side panels that are always visible. Mobile apps use bottom navigation bars, hamburger menus, and full-screen page transitions where only one &quot;page&quot; is visible at a time.</p>
<p><strong>Safe areas.</strong> Mobile devices have notches, rounded corners, and system gesture zones that your content must avoid. Desktop windows do not have these constraints.</p>
<p><strong>Platform conventions.</strong> iOS users expect a bottom tab bar and back-swipe navigation. Android users expect a top app bar with a back button. Desktop users expect a menu bar and keyboard shortcuts. Violating these conventions makes your app feel foreign.</p>
<h3 id="strategy-1-platform-specific-styles-with-onplatform">Strategy 1: Platform-Specific Styles with OnPlatform</h3>
<p>Avalonia provides the <code>OnPlatform</code> markup extension that works like a compile-time switch statement. The compiler generates branches for all platforms, but only the matching branch executes at runtime:</p>
<pre><code class="language-xml">&lt;TextBlock Text=&quot;{OnPlatform Default='Hello!',
                              Android='Hello from Android!',
                              iOS='Hello from iPhone!'}&quot; /&gt;
</code></pre>
<p>You can use this for any property, not just strings:</p>
<pre><code class="language-xml">&lt;Button Padding=&quot;{OnPlatform '8,4', Android='16,12', iOS='16,12'}&quot;
        FontSize=&quot;{OnPlatform 14, Android=16, iOS=16}&quot;
        CornerRadius=&quot;{OnPlatform 4, iOS=20}&quot; /&gt;
</code></pre>
<p>More powerfully, you can load entirely different style sheets per platform:</p>
<pre><code class="language-xml">&lt;!-- In App.axaml --&gt;
&lt;Application.Styles&gt;
    &lt;FluentTheme /&gt;

    &lt;OnPlatform&gt;
        &lt;On Options=&quot;Android, iOS&quot;&gt;
            &lt;StyleInclude Source=&quot;/Styles/Mobile.axaml&quot; /&gt;
        &lt;/On&gt;
        &lt;On Options=&quot;Default&quot;&gt;
            &lt;StyleInclude Source=&quot;/Styles/Desktop.axaml&quot; /&gt;
        &lt;/On&gt;
    &lt;/OnPlatform&gt;
&lt;/Application.Styles&gt;
</code></pre>
<h3 id="strategy-2-form-factor-detection-with-onformfactor">Strategy 2: Form Factor Detection with OnFormFactor</h3>
<p><code>OnFormFactor</code> distinguishes between Desktop and Mobile form factors at runtime:</p>
<pre><code class="language-xml">&lt;TextBlock Text=&quot;{OnFormFactor 'Desktop mode', Mobile='Mobile mode'}&quot; /&gt;

&lt;!-- Different margins for different form factors --&gt;
&lt;StackPanel Margin=&quot;{OnFormFactor '20', Mobile='12'}&quot;&gt;
    &lt;!-- content --&gt;
&lt;/StackPanel&gt;
</code></pre>
<h3 id="strategy-3-container-queries-introduced-in-avalonia-11.3">Strategy 3: Container Queries (Introduced in Avalonia 11.3)</h3>
<p>This is the most exciting responsive design feature in Avalonia. Container Queries work similarly to CSS Container Queries — instead of checking the viewport size, you check the size of a specific container control. This lets you build truly reusable components that adapt to the space available to them, regardless of the overall screen size.</p>
<p>Here is a practical example — a product card that switches between horizontal and vertical layouts:</p>
<pre><code class="language-xml">&lt;Border x:Name=&quot;CardContainer&quot;
        Container.Name=&quot;card&quot;
        Container.Sizing=&quot;Width&quot;&gt;

    &lt;Border.Styles&gt;
        &lt;!-- Vertical (narrow) layout --&gt;
        &lt;ContainerQuery Name=&quot;card&quot; Query=&quot;max-width:400&quot;&gt;
            &lt;Style Selector=&quot;StackPanel#CardContent&quot;&gt;
                &lt;Setter Property=&quot;Orientation&quot; Value=&quot;Vertical&quot; /&gt;
            &lt;/Style&gt;
            &lt;Style Selector=&quot;Image#ProductImage&quot;&gt;
                &lt;Setter Property=&quot;Width&quot; Value=&quot;NaN&quot; /&gt;
                &lt;Setter Property=&quot;Height&quot; Value=&quot;200&quot; /&gt;
            &lt;/Style&gt;
        &lt;/ContainerQuery&gt;

        &lt;!-- Horizontal (wide) layout --&gt;
        &lt;ContainerQuery Name=&quot;card&quot; Query=&quot;min-width:400&quot;&gt;
            &lt;Style Selector=&quot;StackPanel#CardContent&quot;&gt;
                &lt;Setter Property=&quot;Orientation&quot; Value=&quot;Horizontal&quot; /&gt;
            &lt;/Style&gt;
            &lt;Style Selector=&quot;Image#ProductImage&quot;&gt;
                &lt;Setter Property=&quot;Width&quot; Value=&quot;200&quot; /&gt;
                &lt;Setter Property=&quot;Height&quot; Value=&quot;NaN&quot; /&gt;
            &lt;/Style&gt;
        &lt;/ContainerQuery&gt;
    &lt;/Border.Styles&gt;

    &lt;StackPanel x:Name=&quot;CardContent&quot; Spacing=&quot;12&quot;&gt;
        &lt;Image x:Name=&quot;ProductImage&quot;
               Source=&quot;/Assets/product.jpg&quot;
               Stretch=&quot;UniformToFill&quot; /&gt;
        &lt;StackPanel Spacing=&quot;4&quot; VerticalAlignment=&quot;Center&quot;&gt;
            &lt;TextBlock Text=&quot;Product Name&quot; FontWeight=&quot;Bold&quot; /&gt;
            &lt;TextBlock Text=&quot;$29.99&quot; Foreground=&quot;Green&quot; /&gt;
            &lt;TextBlock Text=&quot;A great product description...&quot;
                       TextWrapping=&quot;Wrap&quot; /&gt;
        &lt;/StackPanel&gt;
    &lt;/StackPanel&gt;
&lt;/Border&gt;
</code></pre>
<p>You can combine conditions within a query: <code>and</code> requires all conditions to match, while a comma (<code>,</code>) matches if any condition does:</p>
<pre><code class="language-xml">&lt;!-- Both width and height conditions must be met --&gt;
&lt;ContainerQuery Name=&quot;panel&quot; Query=&quot;min-width:600 and min-height:400&quot;&gt;
    &lt;Style Selector=&quot;UniformGrid#ContentGrid&quot;&gt;
        &lt;Setter Property=&quot;Columns&quot; Value=&quot;3&quot; /&gt;
    &lt;/Style&gt;
&lt;/ContainerQuery&gt;

&lt;!-- Either condition triggers the styles --&gt;
&lt;ContainerQuery Name=&quot;panel&quot; Query=&quot;max-width:300, max-height:200&quot;&gt;
    &lt;Style Selector=&quot;UniformGrid#ContentGrid&quot;&gt;
        &lt;Setter Property=&quot;Columns&quot; Value=&quot;1&quot; /&gt;
    &lt;/Style&gt;
&lt;/ContainerQuery&gt;
</code></pre>
<p>Important rules for Container Queries:</p>
<ol>
<li>You must declare a control as a container by setting <code>Container.Name</code> and <code>Container.Sizing</code> on it</li>
<li>Styles inside a ContainerQuery cannot affect the container itself or its ancestors (this prevents infinite layout loops)</li>
<li>ContainerQuery elements must be direct children of a control's <code>Styles</code> property — they cannot be nested inside other <code>Style</code> elements</li>
</ol>
<h3 id="strategy-4-completely-separate-views">Strategy 4: Completely Separate Views</h3>
<p>For maximum control, you can use entirely different AXAML files for desktop and mobile. This is the approach many production applications take:</p>
<pre><code>Views/
├── Desktop/
│   ├── MainView.axaml
│   ├── SettingsView.axaml
│   └── DetailView.axaml
├── Mobile/
│   ├── MainView.axaml
│   ├── SettingsView.axaml
│   └── DetailView.axaml
└── Shared/
    ├── ProductCard.axaml
    └── LoadingSpinner.axaml
</code></pre>
<p>You then use a view locator or conditional logic in your App to load the correct views:</p>
<pre><code class="language-csharp">// In your ViewLocator or App setup
public Control Build(object? data)
{
    if (data is null) return new TextBlock { Text = &quot;No data&quot; };

    var isMobile = OperatingSystem.IsAndroid() ||
                   OperatingSystem.IsIOS();

    var name = data.GetType().FullName!
        .Replace(&quot;ViewModel&quot;, &quot;View&quot;);

    // Insert platform folder
    var platformFolder = isMobile ? &quot;Mobile&quot; : &quot;Desktop&quot;;
    name = name.Replace(&quot;.Views.&quot;, $&quot;.Views.{platformFolder}.&quot;);

    var type = Type.GetType(name);

    if (type is not null)
        return (Control)Activator.CreateInstance(type)!;

    return new TextBlock { Text = $&quot;View not found: {name}&quot; };
}
</code></pre>
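<p>For context, this <code>Build</code> method conventionally lives in a class implementing Avalonia's <code>IDataTemplate</code>, registered as an application-wide data template so view resolution happens automatically. A sketch, assuming the template's usual <code>ViewModelBase</code> base class and a <code>local</code> xmlns prefix mapped to the app's root namespace:</p>
<pre><code class="language-csharp">using Avalonia.Controls;
using Avalonia.Controls.Templates;
using MyCrossApp.ViewModels;

namespace MyCrossApp;

public class ViewLocator : IDataTemplate
{
    public Control Build(object? data)
    {
        // ... the platform-aware lookup shown above ...
        return new TextBlock { Text = data?.ToString() };
    }

    // Only claim objects that are ViewModels; anything else falls
    // through to other registered data templates.
    public bool Match(object? data) =&gt; data is ViewModelBase;
}
</code></pre>
<p>Registered once in <code>App.axaml</code>:</p>
<pre><code class="language-xml">&lt;Application.DataTemplates&gt;
    &lt;local:ViewLocator /&gt;
&lt;/Application.DataTemplates&gt;
</code></pre>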
<h3 id="practical-example-master-detail-on-desktop-vs.mobile">Practical Example: Master-Detail on Desktop vs. Mobile</h3>
<p>Here is a concrete example showing how the same feature (a contacts list with detail view) needs fundamentally different UI on desktop versus mobile.</p>
<p><strong>Desktop Version</strong> — side-by-side layout with the list always visible:</p>
<pre><code class="language-xml">&lt;!-- Views/Desktop/ContactsView.axaml --&gt;
&lt;UserControl xmlns=&quot;https://github.com/avaloniaui&quot;
             xmlns:x=&quot;http://schemas.microsoft.com/winfx/2006/xaml&quot;
             xmlns:vm=&quot;using:MyApp.ViewModels&quot;
             x:Class=&quot;MyApp.Views.Desktop.ContactsView&quot;
             x:DataType=&quot;vm:ContactsViewModel&quot;&gt;

    &lt;Grid ColumnDefinitions=&quot;300,*&quot;&gt;
        &lt;!-- Left: always-visible contact list --&gt;
        &lt;Border Grid.Column=&quot;0&quot;
                BorderBrush=&quot;#e0e0e0&quot;
                BorderThickness=&quot;0,0,1,0&quot;&gt;
            &lt;DockPanel&gt;
                &lt;TextBox DockPanel.Dock=&quot;Top&quot;
                         Text=&quot;{Binding SearchText}&quot;
                         Watermark=&quot;Search contacts...&quot;
                         Margin=&quot;8&quot; /&gt;

                &lt;ListBox ItemsSource=&quot;{Binding FilteredContacts}&quot;
                         SelectedItem=&quot;{Binding SelectedContact}&quot;&gt;
                    &lt;ListBox.ItemTemplate&gt;
                        &lt;DataTemplate&gt;
                            &lt;StackPanel Orientation=&quot;Horizontal&quot;
                                        Spacing=&quot;8&quot; Margin=&quot;4&quot;&gt;
                                &lt;Ellipse Width=&quot;32&quot; Height=&quot;32&quot;
                                         Fill=&quot;#0078d4&quot; /&gt;
                                &lt;StackPanel VerticalAlignment=&quot;Center&quot;&gt;
                                    &lt;TextBlock Text=&quot;{Binding Name}&quot;
                                               FontWeight=&quot;SemiBold&quot; /&gt;
                                    &lt;TextBlock Text=&quot;{Binding Email}&quot;
                                               FontSize=&quot;12&quot;
                                               Foreground=&quot;Gray&quot; /&gt;
                                &lt;/StackPanel&gt;
                            &lt;/StackPanel&gt;
                        &lt;/DataTemplate&gt;
                    &lt;/ListBox.ItemTemplate&gt;
                &lt;/ListBox&gt;
            &lt;/DockPanel&gt;
        &lt;/Border&gt;

        &lt;!-- Right: detail panel --&gt;
        &lt;ScrollViewer Grid.Column=&quot;1&quot; Padding=&quot;20&quot;&gt;
            &lt;StackPanel Spacing=&quot;12&quot;
                        IsVisible=&quot;{Binding SelectedContact,
                            Converter={x:Static ObjectConverters.IsNotNull}}&quot;&gt;
                &lt;TextBlock Text=&quot;{Binding SelectedContact.Name}&quot;
                           FontSize=&quot;28&quot; FontWeight=&quot;Bold&quot; /&gt;
                &lt;TextBlock Text=&quot;{Binding SelectedContact.Email}&quot; /&gt;
                &lt;TextBlock Text=&quot;{Binding SelectedContact.Phone}&quot; /&gt;
                &lt;TextBlock Text=&quot;{Binding SelectedContact.Notes}&quot;
                           TextWrapping=&quot;Wrap&quot; /&gt;
            &lt;/StackPanel&gt;
        &lt;/ScrollViewer&gt;
    &lt;/Grid&gt;

&lt;/UserControl&gt;
</code></pre>
<p><strong>Mobile Version</strong> — full-screen list that pushes to a full-screen detail:</p>
<pre><code class="language-xml">&lt;!-- Views/Mobile/ContactsView.axaml --&gt;
&lt;UserControl xmlns=&quot;https://github.com/avaloniaui&quot;
             xmlns:x=&quot;http://schemas.microsoft.com/winfx/2006/xaml&quot;
             xmlns:vm=&quot;using:MyApp.ViewModels&quot;
             x:Class=&quot;MyApp.Views.Mobile.ContactsView&quot;
             x:DataType=&quot;vm:ContactsViewModel&quot;&gt;

    &lt;Panel&gt;
        &lt;!-- Contact list (full screen) --&gt;
        &lt;DockPanel IsVisible=&quot;{Binding !IsDetailVisible}&quot;&gt;
            &lt;TextBox DockPanel.Dock=&quot;Top&quot;
                     Text=&quot;{Binding SearchText}&quot;
                     Watermark=&quot;Search contacts...&quot;
                     Margin=&quot;12&quot;
                     Padding=&quot;16,12&quot;
                     FontSize=&quot;16&quot; /&gt;

            &lt;ListBox ItemsSource=&quot;{Binding FilteredContacts}&quot;
                     SelectedItem=&quot;{Binding SelectedContact}&quot;&gt;
                &lt;ListBox.ItemTemplate&gt;
                    &lt;DataTemplate&gt;
                        &lt;!-- Larger touch targets for mobile --&gt;
                        &lt;StackPanel Orientation=&quot;Horizontal&quot;
                                    Spacing=&quot;12&quot;
                                    Margin=&quot;12,8&quot;&gt;
                            &lt;Ellipse Width=&quot;48&quot; Height=&quot;48&quot;
                                     Fill=&quot;#0078d4&quot; /&gt;
                            &lt;StackPanel VerticalAlignment=&quot;Center&quot;&gt;
                                &lt;TextBlock Text=&quot;{Binding Name}&quot;
                                           FontSize=&quot;16&quot;
                                           FontWeight=&quot;SemiBold&quot; /&gt;
                                &lt;TextBlock Text=&quot;{Binding Email}&quot;
                                           FontSize=&quot;14&quot;
                                           Foreground=&quot;Gray&quot; /&gt;
                            &lt;/StackPanel&gt;
                        &lt;/StackPanel&gt;
                    &lt;/DataTemplate&gt;
                &lt;/ListBox.ItemTemplate&gt;
            &lt;/ListBox&gt;
        &lt;/DockPanel&gt;

        &lt;!-- Detail view (full screen, overlays list) --&gt;
        &lt;DockPanel IsVisible=&quot;{Binding IsDetailVisible}&quot;&gt;
            &lt;!-- Back button --&gt;
            &lt;Button DockPanel.Dock=&quot;Top&quot;
                    Content=&quot;← Back&quot;
                    Command=&quot;{Binding GoBackCommand}&quot;
                    Padding=&quot;16,12&quot;
                    FontSize=&quot;16&quot;
                    Background=&quot;Transparent&quot;
                    HorizontalAlignment=&quot;Left&quot; /&gt;

            &lt;ScrollViewer Padding=&quot;16&quot;&gt;
                &lt;StackPanel Spacing=&quot;16&quot;&gt;
                    &lt;TextBlock Text=&quot;{Binding SelectedContact.Name}&quot;
                               FontSize=&quot;24&quot; FontWeight=&quot;Bold&quot; /&gt;
                    &lt;TextBlock Text=&quot;{Binding SelectedContact.Email}&quot;
                               FontSize=&quot;16&quot; /&gt;
                    &lt;TextBlock Text=&quot;{Binding SelectedContact.Phone}&quot;
                               FontSize=&quot;16&quot; /&gt;
                    &lt;TextBlock Text=&quot;{Binding SelectedContact.Notes}&quot;
                               FontSize=&quot;16&quot;
                               TextWrapping=&quot;Wrap&quot; /&gt;
                &lt;/StackPanel&gt;
            &lt;/ScrollViewer&gt;
        &lt;/DockPanel&gt;
    &lt;/Panel&gt;

&lt;/UserControl&gt;
</code></pre>
<p>The key differences in the mobile version:</p>
<ul>
<li>Larger text (<code>FontSize=&quot;16&quot;</code> everywhere) for readability</li>
<li>Larger touch targets (48px avatars, 16px padding on buttons)</li>
<li>Full-screen navigation instead of side-by-side panels</li>
<li>An explicit &quot;Back&quot; button since there is no always-visible list</li>
<li><code>IsDetailVisible</code> boolean that toggles between list and detail views</li>
</ul>
<p>Both views share the exact same <code>ContactsViewModel</code> — the business logic does not change, only the presentation.</p>
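<p>For reference, here is a minimal sketch of what that shared ViewModel could look like. Only the property and command names (<code>SearchText</code>, <code>FilteredContacts</code>, <code>SelectedContact</code>, <code>IsDetailVisible</code>, <code>GoBackCommand</code>) come from the bindings above; the <code>Contact</code> model and the filtering logic are illustrative assumptions:</p>
<pre><code class="language-csharp">using System;
using System.Collections.ObjectModel;
using System.Linq;
using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;

namespace MyApp.ViewModels;

public partial class ContactsViewModel : ViewModelBase
{
    // Assumed backing store; in a real app this would come from a service.
    private readonly ObservableCollection&lt;Contact&gt; _allContacts = new();

    [ObservableProperty]
    private string _searchText = &quot;&quot;;

    [ObservableProperty]
    private Contact? _selectedContact;

    [ObservableProperty]
    private bool _isDetailVisible;

    public ObservableCollection&lt;Contact&gt; FilteredContacts { get; } = new();

    // Mobile navigates to the full-screen detail on selection; the
    // desktop view never binds IsDetailVisible, so it is unaffected.
    partial void OnSelectedContactChanged(Contact? value) =&gt;
        IsDetailVisible = value is not null;

    partial void OnSearchTextChanged(string value)
    {
        FilteredContacts.Clear();
        foreach (var c in _allContacts.Where(c =&gt;
                     c.Name.Contains(value, StringComparison.OrdinalIgnoreCase)))
            FilteredContacts.Add(c);
    }

    [RelayCommand]
    private void GoBack()
    {
        // Clearing the selection also hides the detail view
        // via the OnSelectedContactChanged hook above.
        SelectedContact = null;
    }
}
</code></pre>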
<h3 id="platform-specific-code-in-c">Platform-Specific Code in C#</h3>
<p>Sometimes you need to execute different code depending on the platform. The .NET <code>OperatingSystem</code> class provides static methods:</p>
<pre><code class="language-csharp">public void ConfigurePlatformFeatures()
{
    if (OperatingSystem.IsWindows())
    {
        // Set up Windows-specific features like jump lists
    }
    else if (OperatingSystem.IsMacOS())
    {
        // Configure macOS menu bar
    }
    else if (OperatingSystem.IsLinux())
    {
        // Linux-specific setup
    }
    else if (OperatingSystem.IsAndroid())
    {
        // Android permissions, status bar color, etc.
    }
    else if (OperatingSystem.IsIOS())
    {
        // iOS setup, safe areas, etc.
    }
    else if (OperatingSystem.IsBrowser())
    {
        // WebAssembly-specific setup
    }
}
</code></pre>
<h2 id="building-for-each-platform">Building for Each Platform</h2>
<h3 id="desktop-windows-macos-linux">Desktop (Windows, macOS, Linux)</h3>
<p>The default template targets desktop. Build and run with:</p>
<pre><code class="language-bash">dotnet run
</code></pre>
<p>To publish a self-contained binary:</p>
<pre><code class="language-bash"># Windows
dotnet publish -c Release -r win-x64 --self-contained

# macOS (Apple Silicon)
dotnet publish -c Release -r osx-arm64 --self-contained

# Linux
dotnet publish -c Release -r linux-x64 --self-contained
</code></pre>
<h3 id="android">Android</h3>
<p>Add the Android target to your project. The Avalonia templates include an Android head project:</p>
<pre><code class="language-bash">dotnet new avalonia.xplat -o MyCrossApp
</code></pre>
<p>This creates a solution with separate head projects for each platform:</p>
<pre><code>MyCrossApp/
├── MyCrossApp/                    # Shared code (ViewModels, Models)
├── MyCrossApp.Desktop/            # Desktop entry point
├── MyCrossApp.Android/            # Android entry point
├── MyCrossApp.iOS/                # iOS entry point
└── MyCrossApp.Browser/            # WebAssembly entry point
</code></pre>
<p>The Android project's <code>MainActivity.cs</code>:</p>
<pre><code class="language-csharp">using Android.App;
using Android.Content.PM;
using Avalonia;
using Avalonia.Android;

namespace MyCrossApp.Android;

[Activity(
    Label = &quot;MyCrossApp&quot;,
    Theme = &quot;@style/MyTheme.NoActionBar&quot;,
    Icon = &quot;@drawable/icon&quot;,
    MainLauncher = true,
    ConfigurationChanges = ConfigChanges.Orientation
                         | ConfigChanges.ScreenSize
                         | ConfigChanges.UiMode)]
public class MainActivity : AvaloniaMainActivity&lt;App&gt;
{
    protected override AppBuilder CustomizeAppBuilder(AppBuilder builder) =&gt;
        base.CustomizeAppBuilder(builder)
            .WithInterFont();
}
</code></pre>
<p>Build and deploy to an Android device:</p>
<pre><code class="language-bash">dotnet build -t:Run -f net10.0-android
</code></pre>
<h3 id="ios">iOS</h3>
<p>The iOS entry point is similar:</p>
<pre><code class="language-csharp">using Avalonia;
using Avalonia.iOS;
using Foundation;
using UIKit;

namespace MyCrossApp.iOS;

[Register(&quot;AppDelegate&quot;)]
public partial class AppDelegate : AvaloniaAppDelegate&lt;App&gt;
{
    protected override AppBuilder CustomizeAppBuilder(AppBuilder builder) =&gt;
        base.CustomizeAppBuilder(builder)
            .WithInterFont();
}
</code></pre>
<p>Build for iOS (requires macOS with Xcode):</p>
<pre><code class="language-bash">dotnet build -t:Run -f net10.0-ios
</code></pre>
<h3 id="webassembly">WebAssembly</h3>
<p>The Browser project uses Avalonia's WebAssembly support:</p>
<pre><code class="language-csharp">using Avalonia;
using Avalonia.Browser;
using MyCrossApp;

internal sealed partial class Program
{
    private static Task Main(string[] args) =&gt;
        BuildAvaloniaApp()
            .WithInterFont()
            .StartBrowserAppAsync(&quot;out&quot;);

    public static AppBuilder BuildAvaloniaApp() =&gt;
        AppBuilder.Configure&lt;App&gt;();
}
</code></pre>
<p>Build and serve:</p>
<pre><code class="language-bash">dotnet run --project MyCrossApp.Browser
</code></pre>
<h2 id="common-controls-reference">Common Controls Reference</h2>
<p>Here is a quick reference of the most commonly used controls, with AXAML examples:</p>
<h3 id="text-display-and-input">Text Display and Input</h3>
<pre><code class="language-xml">&lt;!-- Read-only text --&gt;
&lt;TextBlock Text=&quot;Static text&quot; FontSize=&quot;16&quot; /&gt;

&lt;!-- Selectable text --&gt;
&lt;SelectableTextBlock Text=&quot;You can select and copy this text&quot; /&gt;

&lt;!-- Single-line input --&gt;
&lt;TextBox Text=&quot;{Binding Name}&quot;
         Watermark=&quot;Enter your name&quot;
         MaxLength=&quot;100&quot; /&gt;

&lt;!-- Multi-line input --&gt;
&lt;TextBox Text=&quot;{Binding Notes}&quot;
         AcceptsReturn=&quot;True&quot;
         TextWrapping=&quot;Wrap&quot;
         Height=&quot;120&quot; /&gt;

&lt;!-- Password input --&gt;
&lt;TextBox Text=&quot;{Binding Password}&quot;
         PasswordChar=&quot;●&quot;
         RevealPassword=&quot;{Binding ShowPassword}&quot; /&gt;

&lt;!-- Numeric input --&gt;
&lt;NumericUpDown Value=&quot;{Binding Quantity}&quot;
               Minimum=&quot;0&quot; Maximum=&quot;100&quot;
               Increment=&quot;1&quot; /&gt;
</code></pre>
<h3 id="selection-controls">Selection Controls</h3>
<pre><code class="language-xml">&lt;!-- Checkbox --&gt;
&lt;CheckBox IsChecked=&quot;{Binding AgreeToTerms}&quot;
          Content=&quot;I agree to the terms and conditions&quot; /&gt;

&lt;!-- Radio buttons --&gt;
&lt;StackPanel Spacing=&quot;8&quot;&gt;
    &lt;RadioButton GroupName=&quot;Size&quot; Content=&quot;Small&quot;
                 IsChecked=&quot;{Binding IsSmall}&quot; /&gt;
    &lt;RadioButton GroupName=&quot;Size&quot; Content=&quot;Medium&quot;
                 IsChecked=&quot;{Binding IsMedium}&quot; /&gt;
    &lt;RadioButton GroupName=&quot;Size&quot; Content=&quot;Large&quot;
                 IsChecked=&quot;{Binding IsLarge}&quot; /&gt;
&lt;/StackPanel&gt;

&lt;!-- Dropdown (ComboBox) --&gt;
&lt;ComboBox ItemsSource=&quot;{Binding Countries}&quot;
          SelectedItem=&quot;{Binding SelectedCountry}&quot;
          PlaceholderText=&quot;Select a country&quot; /&gt;

&lt;!-- Slider --&gt;
&lt;Slider Value=&quot;{Binding Volume}&quot;
        Minimum=&quot;0&quot; Maximum=&quot;100&quot;
        TickFrequency=&quot;10&quot;
        IsSnapToTickEnabled=&quot;True&quot; /&gt;

&lt;!-- Toggle switch --&gt;
&lt;ToggleSwitch IsChecked=&quot;{Binding DarkMode}&quot;
              OnContent=&quot;Dark&quot;
              OffContent=&quot;Light&quot; /&gt;

&lt;!-- Date picker --&gt;
&lt;DatePicker SelectedDate=&quot;{Binding BirthDate}&quot; /&gt;
</code></pre>
<h3 id="data-display">Data Display</h3>
<pre><code class="language-xml">&lt;!-- List with data binding --&gt;
&lt;ListBox ItemsSource=&quot;{Binding Customers}&quot;
         SelectedItem=&quot;{Binding SelectedCustomer}&quot;&gt;
    &lt;ListBox.ItemTemplate&gt;
        &lt;DataTemplate&gt;
            &lt;TextBlock Text=&quot;{Binding Name}&quot; /&gt;
        &lt;/DataTemplate&gt;
    &lt;/ListBox.ItemTemplate&gt;
&lt;/ListBox&gt;

&lt;!-- Tree view --&gt;
&lt;TreeView ItemsSource=&quot;{Binding RootFolders}&quot;&gt;
    &lt;TreeView.ItemTemplate&gt;
        &lt;TreeDataTemplate ItemsSource=&quot;{Binding Children}&quot;&gt;
            &lt;TextBlock Text=&quot;{Binding Name}&quot; /&gt;
        &lt;/TreeDataTemplate&gt;
    &lt;/TreeView.ItemTemplate&gt;
&lt;/TreeView&gt;

&lt;!-- Tab control --&gt;
&lt;TabControl&gt;
    &lt;TabItem Header=&quot;General&quot;&gt;
        &lt;TextBlock Text=&quot;General settings here&quot; Margin=&quot;10&quot; /&gt;
    &lt;/TabItem&gt;
    &lt;TabItem Header=&quot;Advanced&quot;&gt;
        &lt;TextBlock Text=&quot;Advanced settings here&quot; Margin=&quot;10&quot; /&gt;
    &lt;/TabItem&gt;
    &lt;TabItem Header=&quot;About&quot;&gt;
        &lt;TextBlock Text=&quot;Version 1.0&quot; Margin=&quot;10&quot; /&gt;
    &lt;/TabItem&gt;
&lt;/TabControl&gt;
</code></pre>
<h3 id="progress-and-status">Progress and Status</h3>
<pre><code class="language-xml">&lt;!-- Determinate progress --&gt;
&lt;ProgressBar Value=&quot;{Binding DownloadProgress}&quot;
             Maximum=&quot;100&quot;
             ShowProgressText=&quot;True&quot; /&gt;

&lt;!-- Indeterminate (spinning) --&gt;
&lt;ProgressBar IsIndeterminate=&quot;True&quot; /&gt;

&lt;!-- Expander (collapsible section) --&gt;
&lt;Expander Header=&quot;Advanced Options&quot; IsExpanded=&quot;False&quot;&gt;
    &lt;StackPanel Spacing=&quot;8&quot; Margin=&quot;0,8,0,0&quot;&gt;
        &lt;CheckBox Content=&quot;Enable logging&quot; /&gt;
        &lt;CheckBox Content=&quot;Verbose output&quot; /&gt;
    &lt;/StackPanel&gt;
&lt;/Expander&gt;
</code></pre>
<h3 id="dialogs-and-overlays">Dialogs and Overlays</h3>
<p>Avalonia does not have a built-in modal dialog system like web browsers' <code>alert()</code> and <code>confirm()</code>. Instead, you typically use the window system:</p>
<pre><code class="language-csharp">// Show a message dialog
var dialog = new Window
{
    Title = &quot;Confirm Delete&quot;,
    Width = 400,
    Height = 200,
    WindowStartupLocation = WindowStartupLocation.CenterOwner,
    Content = new StackPanel
    {
        Margin = new Thickness(20),
        Spacing = 16,
        Children =
        {
            new TextBlock
            {
                Text = &quot;Are you sure you want to delete this item?&quot;,
                TextWrapping = TextWrapping.Wrap
            },
            new StackPanel
            {
                Orientation = Avalonia.Layout.Orientation.Horizontal,
                Spacing = 8,
                HorizontalAlignment = Avalonia.Layout.HorizontalAlignment.Right,
                Children =
                {
                    new Button { Content = &quot;Cancel&quot; },
                    new Button { Content = &quot;Delete&quot;, Classes = { &quot;danger&quot; } }
                }
            }
        }
    }
};

await dialog.ShowDialog(parentWindow);
</code></pre>
<p>Or you can use a community library like <code>DialogHost.Avalonia</code> for overlay-style dialogs.</p>
<h2 id="what-is-coming-in-avalonia-12">What Is Coming in Avalonia 12</h2>
<p>Avalonia 12 is currently in preview (Preview 1 was released in February 2026) and is expected to reach stable release in Q4 2026. The two guiding themes are <strong>Performance</strong> and <strong>Stability</strong>.</p>
<h3 id="performance-and-stability-focus">Performance and Stability Focus</h3>
<p>Unlike Avalonia 11, which was a massive release adding multiple new platforms and a completely new composition renderer, Avalonia 12 is deliberately conservative. The goal is a rock-solid foundation that the ecosystem can build on for years. Some of the largest enterprise users are already running nightly builds in production to access Android performance improvements.</p>
<p>On the Android platform specifically, Avalonia 12 includes a new dispatcher implementation based on Looper and MessageQueue that improves scheduling reliability. GPU and CPU underutilisation at high refresh rates has been addressed. Multiple activities with Avalonia content are now supported.</p>
<h3 id="breaking-changes-you-need-to-know">Breaking Changes You Need to Know</h3>
<p><strong>Minimum target is now .NET 8.</strong> Support for <code>netstandard2.0</code> and <code>.NET Framework 4.x</code> has been dropped. According to Avalonia's telemetry, these targets account for less than 4% of projects. The team has committed to supporting .NET 8 for the full lifecycle of Avalonia 12.</p>
<p><strong>SkiaSharp 3.0 is required.</strong> SkiaSharp 2.88 support has been removed.</p>
<p><strong>Compiled bindings are now the default.</strong> The <code>AvaloniaUseCompiledBindingsByDefault</code> property is now <code>true</code> by default. Any <code>{Binding}</code> usage in AXAML maps to <code>{CompiledBinding}</code>. This means your bindings are faster and errors are caught at build time, but it also means you must specify <code>x:DataType</code> on your views.</p>
<p><strong>Binding plugins removed.</strong> The binding plugin system (including the data annotations validation plugin) has been removed. It saw little real-world use and conflicted with popular frameworks such as CommunityToolkit.Mvvm.</p>
<p><strong>Window decorations overhaul.</strong> A new <code>WindowDrawnDecorations</code> class replaces the old <code>TitleBar</code>, <code>CaptionButtons</code>, and <code>ChromeOverlayLayer</code> types. The <code>SystemDecorations</code> property has been renamed to <code>WindowDecorations</code>. This enables themeable, fully-drawn window chrome.</p>
<p><strong>Selection behavior unified.</strong> Touch and pen input now triggers selection on pointer release (not press), matching native platform conventions.</p>
<p><strong>TopLevel changes.</strong> A <code>TopLevel</code> object is no longer necessarily at the root of the visual hierarchy. Code that casts the top Visual to <code>TopLevel</code> will break. Use <code>TopLevel.GetTopLevel(visual)</code> instead.</p>
<h3 id="migration-from-avalonia-11">Migration from Avalonia 11</h3>
<p>If you have been addressing deprecation warnings in Avalonia 11, migration should be straightforward. The team has published a complete breaking changes guide. Here is a practical migration checklist:</p>
<pre><code class="language-xml">&lt;!-- Before (Avalonia 11) --&gt;
&lt;Window SystemDecorations=&quot;Full&quot; ... &gt;

&lt;!-- After (Avalonia 12) --&gt;
&lt;Window WindowDecorations=&quot;Full&quot; ... &gt;
</code></pre>
<pre><code class="language-csharp">// Before (Avalonia 11)
var topLevel = (TopLevel)visual.GetVisualRoot()!;

// After (Avalonia 12)
var topLevel = TopLevel.GetTopLevel(visual)!;
</code></pre>
<pre><code class="language-xml">&lt;!-- Before (Avalonia 11) — might work without x:DataType --&gt;
&lt;TextBlock Text=&quot;{Binding Name}&quot; /&gt;

&lt;!-- After (Avalonia 12) — x:DataType required for compiled bindings --&gt;
&lt;UserControl x:DataType=&quot;vm:MyViewModel&quot; ...&gt;
    &lt;TextBlock Text=&quot;{Binding Name}&quot; /&gt;
&lt;/UserControl&gt;
</code></pre>
<h3 id="webview-going-open-source">WebView Going Open Source</h3>
<p>One of the most exciting announcements for Avalonia 12 is that the WebView control is going open source. Previously, WebView was a commercial-only feature in Avalonia's Accelerate product. The WebView uses native platform web rendering (Edge WebView2 on Windows, WebKit on macOS/iOS, WebView on Android) rather than bundling Chromium, keeping your application lean.</p>
<p>The Avalonia team acknowledged that embedding web content has become a baseline requirement for many applications — OAuth flows, documentation rendering, rich content display — and gating it behind a commercial licence was no longer the right decision. The open-source WebView will ship in an upcoming Avalonia 12 pre-release.</p>
<h3 id="new-table-control">New Table Control</h3>
<p>Avalonia 12 will include a new read-only <code>Table</code> control for displaying tabular data. This is entirely open-source and free. For complex data grids with editing, sorting, and advanced features, the existing open-source <code>TreeDataGrid</code> remains available (and can be forked), or commercial offerings provide additional capabilities.</p>
<h2 id="beyond-avalonia-12-the-rendering-revolution">Beyond Avalonia 12: The Rendering Revolution</h2>
<h3 id="the-vello-experiment">The Vello Experiment</h3>
<p>Avalonia's rendering has been built on SkiaSharp since the project's earliest days. SkiaSharp provides .NET bindings for Skia, Google's 2D graphics library that also powers Chrome and (formerly) Flutter. It is mature, stable, and well-understood.</p>
<p>But Avalonia is now exploring GPU-first rendering as a next step. Among several approaches being investigated, Vello — a modern graphics engine written in Rust — has shown particularly interesting early results.</p>
<p>Vello is &quot;GPU-first&quot; by design. Traditional rendering pipelines (including Skia) perform most work on the CPU and use the GPU primarily for final compositing. Vello inverts this model, pushing nearly all rendering computation to the GPU using compute shaders.</p>
<p>Early stress testing shows tens of thousands of animated vector paths running at a smooth 120 FPS. In certain workloads, the Avalonia team observed Vello performing up to 100x faster than SkiaSharp. Even when running through a Skia-compatibility shim built on top of Vello, they saw 8x speed improvements.</p>
<p>The community has already started building on this. Wiesław Šoltés has published VelloSharp, a .NET binding library for Vello with Avalonia integration packages, including chart controls and canvas controls powered by Vello rendering.</p>
<p>However, Vello is not a drop-in replacement. SkiaSharp will remain the default renderer for the foreseeable future. The Vello work will ship as experimental backends during the Avalonia 12 lifecycle.</p>
<h3 id="the-impeller-partnership-with-google">The Impeller Partnership with Google</h3>
<p>In a surprising move, the Avalonia team announced a partnership with Google's Flutter engineers to bring Impeller — Flutter's next-generation GPU-first renderer — to .NET.</p>
<p>Impeller was created to solve real-world performance challenges Flutter encountered with Skia, particularly shader compilation &quot;jank&quot; (visible stuttering the first time a shader is compiled on a device). It pre-compiles all shader pipelines at build time, eliminating runtime compilation entirely.</p>
<p>Why Impeller over Vello? Early testing revealed an important tradeoff: while Vello achieved identical frame rates to Impeller in benchmarks, it required roughly twelve times more power to do so. For battery-powered mobile devices, that difference is significant.</p>
<p>Flutter's production benchmarks with Impeller show impressive improvements: faster SVG and path rendering, improved Gaussian blur throughput, frame times for complex clipping reduced from 450ms with Skia to 11ms with Impeller, no shader compilation stutter, and around 100MB less memory usage.</p>
<p>The Impeller integration is experimental and all development is happening in public. The goal is to benefit not just Avalonia but the entire .NET ecosystem.</p>
<h3 id="avalonia-maui-bringing-linux-and-wasm-to.net-maui">Avalonia MAUI: Bringing Linux and WASM to .NET MAUI</h3>
<p>In another ambitious initiative, the Avalonia team is building handlers that let .NET MAUI applications run on Linux and WebAssembly — two platforms that Microsoft's MAUI does not support. The first preview was announced in March 2026, running on .NET 11 (itself in preview).</p>
<p>The approach works by building a single set of Avalonia-based handlers that map MAUI controls to Avalonia equivalents. Because Avalonia already includes a SkiaSharp-based renderer, it can leverage the existing <code>Microsoft.Maui.Graphics</code> and <code>SkiaSharp.Controls.Maui</code> libraries. This means many MAUI controls work with minimal changes.</p>
<p>This work has also been driving improvements back into Avalonia itself, with new controls like <code>SwipeView</code> and API enhancements like letter-spacing support propagated to every control.</p>
<h2 id="licensing-and-costs">Licensing and Costs</h2>
<p>This is an important topic for the Observer Magazine audience, since our philosophy is that everything should be free — no &quot;free for non-commercial&quot; caveats.</p>
<p><strong>Avalonia UI core framework: MIT license, free forever.</strong> You can build and ship commercial applications with it, no payment required, no restrictions. This is not changing.</p>
<p><strong>Avalonia Accelerate</strong> is the commercial tooling suite built around the framework. It includes a rewritten Visual Studio extension, Dev Tools (a runtime inspector), and Parcel (a packaging tool). Accelerate has a Community Edition that is free for individual developers, small organizations (fewer than 250 people / less than €1M revenue), and educational institutions. Enterprise organizations need a paid license only if they want to use these new Accelerate tools — they can always use the core framework and the legacy open-source tooling for free.</p>
<p><strong>JetBrains Rider and VS Code extensions remain free</strong> regardless of organization size.</p>
<p>For our project, we can use Avalonia without any cost, forever. The core framework, the community tooling, and the IDE extensions for Rider and VS Code are all free.</p>
<h2 id="setting-up-an-avalonia-project-with-modern.net-practices">Setting Up an Avalonia Project with Modern .NET Practices</h2>
<p>Here is how to set up an Avalonia project using the same modern .NET practices we use in Observer Magazine — <code>.slnx</code> solution format, <code>Directory.Build.props</code>, and central package management:</p>
<h3 id="global.json">global.json</h3>
<pre><code class="language-json">{
  &quot;sdk&quot;: {
    &quot;version&quot;: &quot;10.0.104&quot;,
    &quot;rollForward&quot;: &quot;latestFeature&quot;
  }
}
</code></pre>
<h3 id="directory.build.props">Directory.Build.props</h3>
<pre><code class="language-xml">&lt;Project&gt;
  &lt;PropertyGroup&gt;
    &lt;TargetFramework&gt;net10.0&lt;/TargetFramework&gt;
    &lt;Nullable&gt;enable&lt;/Nullable&gt;
    &lt;ImplicitUsings&gt;enable&lt;/ImplicitUsings&gt;
    &lt;TreatWarningsAsErrors&gt;true&lt;/TreatWarningsAsErrors&gt;
    &lt;AvaloniaUseCompiledBindingsByDefault&gt;true&lt;/AvaloniaUseCompiledBindingsByDefault&gt;
  &lt;/PropertyGroup&gt;
&lt;/Project&gt;
</code></pre>
<h3 id="directory.packages.props">Directory.Packages.props</h3>
<pre><code class="language-xml">&lt;Project&gt;
  &lt;PropertyGroup&gt;
    &lt;ManagePackageVersionsCentrally&gt;true&lt;/ManagePackageVersionsCentrally&gt;
    &lt;AvaloniaVersion&gt;11.3.0&lt;/AvaloniaVersion&gt;
    &lt;CommunityToolkitVersion&gt;8.4.0&lt;/CommunityToolkitVersion&gt;
  &lt;/PropertyGroup&gt;

  &lt;ItemGroup&gt;
    &lt;PackageVersion Include=&quot;Avalonia&quot; Version=&quot;$(AvaloniaVersion)&quot; /&gt;
    &lt;PackageVersion Include=&quot;Avalonia.Desktop&quot; Version=&quot;$(AvaloniaVersion)&quot; /&gt;
    &lt;PackageVersion Include=&quot;Avalonia.iOS&quot; Version=&quot;$(AvaloniaVersion)&quot; /&gt;
    &lt;PackageVersion Include=&quot;Avalonia.Android&quot; Version=&quot;$(AvaloniaVersion)&quot; /&gt;
    &lt;PackageVersion Include=&quot;Avalonia.Browser&quot; Version=&quot;$(AvaloniaVersion)&quot; /&gt;
    &lt;PackageVersion Include=&quot;Avalonia.Themes.Fluent&quot; Version=&quot;$(AvaloniaVersion)&quot; /&gt;
    &lt;PackageVersion Include=&quot;Avalonia.Fonts.Inter&quot; Version=&quot;$(AvaloniaVersion)&quot; /&gt;
    &lt;PackageVersion Include=&quot;Avalonia.Diagnostics&quot; Version=&quot;$(AvaloniaVersion)&quot; /&gt;
    &lt;PackageVersion Include=&quot;CommunityToolkit.Mvvm&quot;
                    Version=&quot;$(CommunityToolkitVersion)&quot; /&gt;

    &lt;!-- Testing --&gt;
    &lt;PackageVersion Include=&quot;Avalonia.Headless.XUnit&quot; Version=&quot;$(AvaloniaVersion)&quot; /&gt;
    &lt;PackageVersion Include=&quot;xunit.v3&quot; Version=&quot;3.2.2&quot; /&gt;
    &lt;PackageVersion Include=&quot;Microsoft.NET.Test.Sdk&quot; Version=&quot;18.3.0&quot; /&gt;
  &lt;/ItemGroup&gt;
&lt;/Project&gt;
</code></pre>
<h3 id="solution-file-myapp.slnx">Solution File (MyApp.slnx)</h3>
<pre><code class="language-xml">&lt;Solution&gt;
  &lt;Folder Name=&quot;/Solution Items/&quot;&gt;
    &lt;File Path=&quot;Directory.Build.props&quot; /&gt;
    &lt;File Path=&quot;Directory.Packages.props&quot; /&gt;
    &lt;File Path=&quot;global.json&quot; /&gt;
  &lt;/Folder&gt;
  &lt;Folder Name=&quot;/src/&quot;&gt;
    &lt;Project Path=&quot;src/MyApp/MyApp.csproj&quot; /&gt;
    &lt;Project Path=&quot;src/MyApp.Desktop/MyApp.Desktop.csproj&quot; /&gt;
    &lt;Project Path=&quot;src/MyApp.Android/MyApp.Android.csproj&quot; /&gt;
    &lt;Project Path=&quot;src/MyApp.iOS/MyApp.iOS.csproj&quot; /&gt;
    &lt;Project Path=&quot;src/MyApp.Browser/MyApp.Browser.csproj&quot; /&gt;
  &lt;/Folder&gt;
  &lt;Folder Name=&quot;/tests/&quot;&gt;
    &lt;Project Path=&quot;tests/MyApp.Tests/MyApp.Tests.csproj&quot; /&gt;
  &lt;/Folder&gt;
&lt;/Solution&gt;
</code></pre>
<h2 id="testing-avalonia-applications">Testing Avalonia Applications</h2>
<p>Avalonia supports headless testing — running your UI without a visible window. This is perfect for CI/CD pipelines:</p>
<pre><code class="language-csharp">using Avalonia.Controls;
using Avalonia.Headless.XUnit;
using MyApp.ViewModels;
using MyApp.Views;
using Xunit;

namespace MyApp.Tests;

public class MainWindowTests
{
    [AvaloniaFact]
    public void MainWindow_Should_Render_Title()
    {
        var window = new MainWindow
        {
            DataContext = new MainWindowViewModel()
        };

        window.Show();

        // Find the title TextBlock by name
        var title = window.FindControl&lt;TextBlock&gt;(&quot;PageTitle&quot;);
        Assert.NotNull(title);
        Assert.Equal(&quot;Dashboard&quot;, title.Text);
    }

    [AvaloniaFact]
    public void Button_Click_Should_Increment_Counter()
    {
        var vm = new MainWindowViewModel();
        var window = new MainWindow { DataContext = vm };

        window.Show();

        Assert.Equal(0, vm.ClickCount);

        vm.IncrementCountCommand.Execute(null);

        Assert.Equal(1, vm.ClickCount);
    }
}
</code></pre>
<p>The <code>[AvaloniaFact]</code> attribute (from <code>Avalonia.Headless.XUnit</code>) sets up the Avalonia runtime in headless mode before each test. Note that the test assembly also needs an assembly-level <code>[AvaloniaTestApplication]</code> attribute pointing at an <code>AppBuilder</code> configured with <code>UseHeadless</code>.</p>
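<p>For reference, the view model these tests exercise can be as small as the following sketch. To keep the example self-contained it uses only <code>INotifyPropertyChanged</code> and <code>ICommand</code> from the base class library — in a real project you would more likely use CommunityToolkit.Mvvm's <code>[ObservableProperty]</code> and <code>[RelayCommand]</code> generators. The names <code>MainWindowViewModel</code>, <code>ClickCount</code>, and <code>IncrementCountCommand</code> are the ones the tests reference; everything else is illustrative:</p>
<pre><code class="language-csharp">using System;
using System.ComponentModel;
using System.Windows.Input;

namespace MyApp.ViewModels;

// Minimal ICommand implementation so the sketch has no package dependencies.
public sealed class DelegateCommand(Action execute) : ICommand
{
    // Always executable in this sketch; empty accessors avoid an unused-event warning.
    public event EventHandler? CanExecuteChanged { add { } remove { } }
    public bool CanExecute(object? parameter) =&gt; true;
    public void Execute(object? parameter) =&gt; execute();
}

public sealed class MainWindowViewModel : INotifyPropertyChanged
{
    private int _clickCount;

    public event PropertyChangedEventHandler? PropertyChanged;

    public int ClickCount
    {
        get =&gt; _clickCount;
        private set
        {
            if (_clickCount == value) return;
            _clickCount = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(ClickCount)));
        }
    }

    public ICommand IncrementCountCommand { get; }

    public MainWindowViewModel() =&gt;
        IncrementCountCommand = new DelegateCommand(() =&gt; ClickCount++);
}
</code></pre>
<p>Because the counter logic lives entirely in the view model, the second test above never needs to locate the button in the visual tree — it drives the command directly.</p>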
<h2 id="putting-it-all-together-a-production-architecture">Putting It All Together: A Production Architecture</h2>
<p>Here is a summary architecture for a production cross-platform Avalonia application:</p>
<pre><code>MyProductionApp/
├── global.json
├── Directory.Build.props
├── Directory.Packages.props
├── MyApp.slnx
│
├── src/
│   ├── MyApp/                          # Shared library
│   │   ├── MyApp.csproj
│   │   ├── App.axaml                   # Application root
│   │   ├── App.axaml.cs
│   │   ├── ViewLocator.cs
│   │   ├── Models/                     # Domain objects
│   │   ├── ViewModels/                 # MVVM ViewModels
│   │   ├── Services/                   # Business logic
│   │   │   ├── IDataService.cs
│   │   │   ├── SqliteDataService.cs
│   │   │   └── ApiDataService.cs
│   │   ├── Views/
│   │   │   ├── Desktop/                # Desktop-specific views
│   │   │   ├── Mobile/                 # Mobile-specific views
│   │   │   └── Shared/                 # Shared components
│   │   └── Styles/
│   │       ├── Desktop.axaml
│   │       └── Mobile.axaml
│   │
│   ├── MyApp.Desktop/                  # Desktop entry point
│   │   ├── MyApp.Desktop.csproj
│   │   └── Program.cs
│   │
│   ├── MyApp.Android/                  # Android entry point
│   │   ├── MyApp.Android.csproj
│   │   └── MainActivity.cs
│   │
│   ├── MyApp.iOS/                      # iOS entry point
│   │   ├── MyApp.iOS.csproj
│   │   └── AppDelegate.cs
│   │
│   └── MyApp.Browser/                  # WebAssembly entry point
│       ├── MyApp.Browser.csproj
│       └── Program.cs
│
└── tests/
    └── MyApp.Tests/
        ├── MyApp.Tests.csproj
        ├── ViewModelTests/
        └── ViewTests/
</code></pre>
<p>The shared library (<code>MyApp</code>) contains all your views, view models, models, and services. The platform-specific projects (<code>MyApp.Desktop</code>, <code>MyApp.Android</code>, etc.) are thin wrappers that just configure the platform entry point and reference the shared library.</p>
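<p>The <code>Services</code> folder above follows the classic interface-plus-implementations pattern: view models depend only on <code>IDataService</code>, and each head project decides which implementation to supply — for example, SQLite on desktop and mobile, a remote API in the Browser head (which has no local file system). A hedged sketch follows; the type names come from the tree above, but the member shape and stub data are illustrative assumptions:</p>
<pre><code class="language-csharp">using System.Collections.Generic;
using System.Threading.Tasks;

namespace MyApp.Services;

// The abstraction view models depend on. The member is a placeholder;
// a real service would expose your actual domain operations.
public interface IDataService
{
    Task&lt;IReadOnlyList&lt;string&gt;&gt; GetCustomerNamesAsync();
}

// Local-storage implementation for the desktop and mobile heads (stubbed here).
public sealed class SqliteDataService : IDataService
{
    public Task&lt;IReadOnlyList&lt;string&gt;&gt; GetCustomerNamesAsync() =&gt;
        Task.FromResult&lt;IReadOnlyList&lt;string&gt;&gt;(new[] { &quot;Ada&quot;, &quot;Grace&quot; });
}

// Remote implementation for the Browser head.
public sealed class ApiDataService : IDataService
{
    public Task&lt;IReadOnlyList&lt;string&gt;&gt; GetCustomerNamesAsync() =&gt;
        Task.FromResult&lt;IReadOnlyList&lt;string&gt;&gt;(new[] { &quot;Ada&quot;, &quot;Grace&quot;, &quot;Linus&quot; });
}
</code></pre>
<p>Because a view model only ever sees <code>IDataService</code>, the headless tests from the previous section can substitute an in-memory fake without touching a database or the network.</p>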
<h2 id="conclusion">Conclusion</h2>
<p>Avalonia UI occupies a unique position in the .NET ecosystem. It is the only framework that gives you pixel-perfect consistency across Windows, macOS, Linux, iOS, Android, and WebAssembly from a single codebase, using familiar XAML-based tooling. The MIT license means you can use it for anything, forever, at no cost.</p>
<p>The current stable release (11.3) is production-ready and used by major companies. Container Queries bring modern responsive design patterns to native application development. The <code>OnPlatform</code> and <code>OnFormFactor</code> markup extensions make it straightforward to customize behavior per platform and device type.</p>
<p>Avalonia 12 (currently in preview, targeting Q4 2026 stable release) doubles down on performance and stability, with significant Android improvements, compiled bindings by default, a new open-source WebView, and a new Table control. The upcoming rendering revolution — with experimental Vello backends and the Impeller partnership with Google — points toward a future where Avalonia applications run faster than ever on modern GPU hardware.</p>
<p>If you are a web developer looking to build native cross-platform applications without leaving the .NET ecosystem, Avalonia is the most compelling option available today. The learning curve from web development is manageable — AXAML is conceptually similar to HTML, Avalonia's styling system borrows heavily from CSS concepts, and the MVVM pattern maps naturally to the component-based architecture you already know.</p>
<p>The best way to learn is to build something. Install the templates, create a project, and start experimenting. The community is active on GitHub and the Avalonia documentation continues to improve rapidly.</p>
<p>Welcome to the world of truly cross-platform native development.</p>
<h2 id="resources">Resources</h2>
<ul>
<li><strong>Official Documentation</strong>: <a href="https://docs.avaloniaui.net">docs.avaloniaui.net</a></li>
<li><strong>GitHub Repository</strong>: <a href="https://github.com/AvaloniaUI/Avalonia">github.com/AvaloniaUI/Avalonia</a> (30,000+ stars)</li>
<li><strong>Sample Projects</strong>: <a href="https://github.com/AvaloniaUI/Avalonia.Samples">github.com/AvaloniaUI/Avalonia.Samples</a></li>
<li><strong>Avalonia 12 Breaking Changes</strong>: <a href="https://docs.avaloniaui.net/docs/avalonia12-breaking-changes">docs.avaloniaui.net/docs/avalonia12-breaking-changes</a></li>
<li><strong>Container Queries Documentation</strong>: <a href="https://docs.avaloniaui.net/docs/basics/user-interface/styling/container-queries">docs.avaloniaui.net/docs/basics/user-interface/styling/container-queries</a></li>
<li><strong>Platform-Specific XAML</strong>: <a href="https://docs.avaloniaui.net/docs/guides/platforms/platform-specific-code/xaml">docs.avaloniaui.net/docs/guides/platforms/platform-specific-code/xaml</a></li>
</ul>
]]></content:encoded>
      <category>avalonia</category>
      <category>dotnet</category>
      <category>cross-platform</category>
      <category>desktop</category>
      <category>mobile</category>
      <category>xaml</category>
      <category>csharp</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The Green Light Doesn't Mean Go — It Means You May Go</title>
      <link>https://observermagazine.github.io/blog/honk-drive</link>
      <description>A lesson learned at a red light that applies to every decision you will ever make in life, work, and everything in between.</description>
      <pubDate>Mon, 23 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/honk-drive</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="the-truck-behind-you-is-not-your-boss">The Truck Behind You Is Not Your Boss</h2>
<p>Picture this. You are sitting at a red light. Maybe it is a Tuesday morning. Maybe you slept badly. Maybe you are running a little late. The light turns green. And before your foot has even moved toward the gas pedal — <em>honk</em>. The driver behind you, piloting a Ford F-150 the size of a small building, has decided that you are the problem.</p>
<p>What do you do?</p>
<p>Most people instinctively hit the gas. Not because it is safe. Not because they have checked the intersection. But because being honked at feels like being told off by a teacher, and the lizard part of our brain wants to comply and make the discomfort stop.</p>
<p>Here is the thing, though. <strong>That green light does not order you to go. It permits you to go.</strong></p>
<p>There is a meaningful difference between those two things — and understanding it might be one of the most useful mental models you ever pick up.</p>
<hr />
<h2 id="what-the-intersection-actually-looks-like">What the Intersection Actually Looks Like</h2>
<p>Let us slow the moment down.</p>
<p>The light turns green. The truck honks. You feel the pressure. But what is actually happening in that intersection?</p>
<ul>
<li>There may be a car that jumped its red light and is still clearing the box.</li>
<li>There may be a cyclist coming through on the left that you can see but the truck — sitting higher and further back — cannot.</li>
<li>There may be a driver doing 80 miles per hour who ran the light entirely and is about to enter the intersection in the next two seconds.</li>
</ul>
<p>You, in the driver's seat, have information the truck driver does not have. You have the view. You have the angle. You have the responsibility. And crucially, <strong>you are the one whose life is on the line if you get it wrong.</strong></p>
<p>The truck driver experiences zero consequences if you pull out and get T-boned. He will be inconvenienced. He might feel bad. But he goes home. You are the one gambling.</p>
<p>So when you take that extra second — or two, or three — to check that it is genuinely clear before you go, that is not timidity. That is not weakness. That is exactly what a careful, thinking person is supposed to do.</p>
<hr />
<h2 id="free-will-at-the-green-light">Free Will at the Green Light</h2>
<p>There is something quietly radical about that pause.</p>
<p>In that moment, you are exercising one of the most underrated things a human being has: the freedom to not be rushed into a decision by someone else's impatience.</p>
<p>You did not ask for that honk. You did not agree to be managed by a stranger. And yet social pressure — even the blunt, anonymous kind that comes from a car horn — is remarkably effective at overriding our own judgment.</p>
<p>Recognising that you have a choice, even when someone is pushing you, is a skill. It does not come naturally to most people. But once you feel it — once you sit in that driver's seat and consciously decide <em>I will go when I am ready and not a moment before</em> — it changes something.</p>
<hr />
<h2 id="now-apply-it-to-everything-else">Now Apply It to Everything Else</h2>
<p>You might be reading this thinking: fine, interesting driving tip, but what does this have to do with my life?</p>
<p>Everything.</p>
<p>Every day, in work and in life, you are sitting at green lights with someone behind you leaning on the horn. The situations change. The pressure does not.</p>
<h3 id="at-work">At Work</h3>
<p>Your manager sends a message at 4:58 PM asking for a report &quot;as soon as possible.&quot; Your gut says to fire off whatever you have and hit send before 5. But is the report actually ready? Is the data right? Will a rushed report serve you — or will it come back to bite you next week when someone finds the error you missed?</p>
<p>The truck is honking. The light is green. But is the intersection clear?</p>
<p>A better move: take a breath, reply to acknowledge the request, and send the report when it is accurate. A good manager would rather have a correct report at 9 AM than a wrong one at 5 PM. And if they would not — that tells you something important about them.</p>
<h3 id="in-a-negotiation">In a Negotiation</h3>
<p>You are buying a house, a car, or signing a contract. The other party says the offer expires tonight. <em>We have three other buyers. You need to decide now.</em></p>
<p>That is a horn honk. Sometimes it is even true. But more often it is a tactic — pressure designed to make you skip your own due diligence and commit before you have checked the intersection.</p>
<p>The move is the same: pause, look both ways, and proceed only when you are satisfied. Deals that evaporate the moment you ask for a day to think about them are often deals you are better off without.</p>
<h3 id="in-relationships">In Relationships</h3>
<p>A friend, a partner, or a family member wants an answer — <em>now</em>. Are you coming to the event? Do you forgive them? Are you in or out? The emotional equivalent of a horn honk is very real, and it works on us even more powerfully than the literal kind.</p>
<p>You are allowed to say: <em>I need a moment to think about this.</em> That is not cruelty. That is self-respect. Anyone who tells you that taking time to make a thoughtful decision is an act of disrespect is, in all likelihood, someone who benefits from your impulsiveness.</p>
<h3 id="in-your-career">In Your Career</h3>
<p>A recruiter calls with an offer. The role sounds exciting. The salary is good. They need an answer by end of day. What do you do?</p>
<p>Same as always: look left, look right. Do you know enough about the company culture? Have you actually read the contract? Is there something you cannot see from your position — something the person behind you definitely cannot see?</p>
<p>Taking 24 hours to think about a job offer is completely reasonable. If an employer rescinds an offer because you asked for a day to consider it properly, you just learned something invaluable about how they make decisions — before you ever started working for them.</p>
<hr />
<h2 id="the-principle-simply-stated">The Principle, Simply Stated</h2>
<p>You do not owe anyone a rushed decision.</p>
<p>You have the right — and often the responsibility — to take the time needed to make a safe and considered choice. The people pressuring you are not in your seat. They do not have your view. They do not bear your consequences.</p>
<p>This does not mean be paralysed. Green lights are not invitations to sit indefinitely. At some point, you do pull out into the intersection, because staying stopped forever is its own kind of failure. The goal is not to be frozen — the goal is to be <em>deliberate</em>.</p>
<p>Check. Think. Decide. Then go.</p>
<hr />
<h2 id="a-quick-reference-the-green-light-test">A Quick Reference: The Green Light Test</h2>
<p>When you feel pressured to make a fast decision, run through these before you act:</p>
<ol>
<li><strong>Do I have enough information?</strong> If not, what would it take to get it — and how long would that actually require?</li>
<li><strong>Who bears the consequences if this goes wrong?</strong> If the answer is <em>me</em>, then I should be the one setting the pace.</li>
<li><strong>Is this urgency real or manufactured?</strong> Real urgency exists. Artificial urgency is a tactic. Learn to tell the difference.</li>
<li><strong>What does my gut say — underneath the panic?</strong> The noise of someone honking tends to drown out the quieter, wiser voice. Try to hear it.</li>
<li><strong>Would I make this same decision if no one were watching or waiting?</strong> If the answer is no, you have your answer.</li>
</ol>
<hr />
<h2 id="final-thought">Final Thought</h2>
<p>The Ford F-150 driver will survive the extra three seconds it takes you to check the intersection. He will probably not even remember the moment by the time he reaches his destination.</p>
<p>But you will remember — and so will your passengers — whether you made it through safely.</p>
<p>Take the moment. Check the road. Go when you are ready.</p>
<p>That is not hesitation. That is wisdom.</p>
<hr />
<p><em>Published in Observer Magazine. We welcome your thoughts — reach out through the contact page or find us on GitHub at <a href="https://github.com/ObserverMagazine/observermagazine.github.io">ObserverMagazine</a>.</em></p>
]]></content:encoded>
      <category>life-lessons</category>
      <category>decision-making</category>
      <category>work</category>
      <category>mindset</category>
    </item>
    <item>
      <title>The Year 2025 in Review: A Comprehensive Retrospective</title>
      <link>https://observermagazine.github.io/blog/the-year-2025-in-review</link>
      <description>A thorough look back at the major political, economic, technological, scientific, and cultural events that defined the year 2025.</description>
      <pubDate>Sun, 22 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/the-year-2025-in-review</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>The year 2025 was one of the most consequential in recent memory. From a dramatic change in American leadership and its ripple effects across every domain of public life, to breakthroughs in artificial intelligence that rewrote the rules of entire industries, to geopolitical conflicts that continued to reshape the world order, 2025 demanded attention from start to finish. This article chronicles the major newsworthy events of the year, organized by topic.</p>
<h2 id="part-1-united-states-politics">Part 1: United States Politics</h2>
<h3 id="the-second-trump-administration-begins">The Second Trump Administration Begins</h3>
<p>On January 20, 2025, Donald J. Trump was inaugurated as the 47th President of the United States, beginning his second non-consecutive term. The inauguration itself was moved indoors to the Capitol Rotunda due to dangerously cold weather in Washington, D.C. The ceremony was attended by an unusual number of tech industry leaders, including Elon Musk, Jeff Bezos, Mark Zuckerberg, Tim Cook, and Sundar Pichai, reflecting the evolving relationship between Silicon Valley and the new administration.</p>
<h3 id="executive-orders-and-policy-changes">Executive Orders and Policy Changes</h3>
<p>The administration moved with extraordinary speed in its opening days. On the first day alone, President Trump signed dozens of executive orders covering immigration, energy policy, diversity programs, and federal workforce restructuring.</p>
<p>On immigration, the administration declared a national emergency at the southern border, deployed additional military personnel, and began implementing what it described as the largest deportation operation in American history. The &quot;Remain in Mexico&quot; policy was reinstated. Birthright citizenship was challenged through executive order, though this faced immediate legal challenges and was blocked by federal courts.</p>
<p>Federal diversity, equity, and inclusion (DEI) programs were terminated across all government agencies. Federal employees working in DEI roles were placed on administrative leave. Executive orders directed agencies to investigate and potentially penalize private companies and universities that maintained DEI programs, though enforcement proved complex.</p>
<p>The administration withdrew the United States from the Paris Climate Agreement for a second time. Drilling permits on federal lands were expedited. The Keystone XL pipeline permit was reinstated. Multiple environmental regulations from the previous administration were rescinded or paused.</p>
<h3 id="the-tiktok-ban-and-reprieve">The TikTok Ban and Reprieve</h3>
<p>One of the most closely watched policy dramas of early 2025 involved TikTok. A law passed during the Biden administration required ByteDance, TikTok's Chinese parent company, to divest its ownership of TikTok or face a ban in the United States. The deadline arrived on January 19, 2025, the day before inauguration. TikTok briefly went dark for American users. President Trump then signed an executive order granting a 75-day extension, and later additional extensions, to allow negotiations for a potential sale. Throughout 2025, various consortiums of American investors explored acquisition deals, but no final sale was completed by year's end.</p>
<h3 id="the-department-of-government-efficiency">The Department of Government Efficiency</h3>
<p>Elon Musk led what the administration called the Department of Government Efficiency (DOGE), a task force aimed at dramatically reducing federal spending and workforce. DOGE identified programs it considered wasteful and pushed for their elimination. The effort was controversial, with supporters praising the focus on fiscal responsibility and critics arguing that essential services were being gutted. Federal employee unions challenged many of the actions in court. By mid-2025, DOGE claimed billions in projected savings, though independent analyses disputed the methodology.</p>
<h3 id="pardons-and-legal-matters">Pardons and Legal Matters</h3>
<p>President Trump pardoned or commuted sentences for many individuals convicted in connection with the January 6, 2021 Capitol breach. This was one of the most debated actions of the early administration, with supporters characterizing the defendants as political prisoners and critics arguing that pardoning participants in a violent breach of the Capitol undermined rule of law.</p>
<h3 id="congressional-activity">Congressional Activity</h3>
<p>Republicans held majorities in both the House and Senate, though the margins were thin, particularly in the House. Major legislative efforts included tax reform extending and expanding the 2017 Tax Cuts and Jobs Act provisions, immigration enforcement funding, and defense spending increases. The legislative process was frequently complicated by intra-party disagreements among House Republicans.</p>
<h2 id="part-2-geopolitics-and-international-affairs">Part 2: Geopolitics and International Affairs</h2>
<h3 id="the-russia-ukraine-war">The Russia-Ukraine War</h3>
<p>The war in Ukraine, which began with Russia's full-scale invasion in February 2022, continued throughout 2025. The conflict had become largely a war of attrition along extensive front lines in eastern and southern Ukraine. Both sides conducted offensive operations with limited territorial gains.</p>
<p>President Trump, who had promised to end the war quickly, appointed a special envoy and engaged in diplomatic efforts with both Kyiv and Moscow. The negotiations were complex and produced no ceasefire by mid-2025. The United States adjusted its military aid packages to Ukraine, and there was significant debate about the appropriate level of continued support.</p>
<p>European allies, concerned about potential changes in American commitment, accelerated their own defense spending and military aid to Ukraine. NATO held emergency consultations, and several European nations significantly increased their defense budgets, with many meeting or exceeding the alliance's 2% of GDP target for the first time.</p>
<h3 id="the-middle-east">The Middle East</h3>
<p>The conflict in Gaza that erupted in October 2023 continued to dominate Middle East affairs in 2025. Multiple ceasefire negotiations took place. The humanitarian situation in Gaza was severe, with international organizations reporting widespread destruction and civilian suffering.</p>
<p>The Abraham Accords framework continued to evolve. Diplomatic discussions about Saudi Arabia normalizing relations with Israel proceeded, though the Gaza conflict complicated these efforts. Iran's nuclear program remained a major concern, with inspectors reporting advances in enrichment capabilities.</p>
<p>The Houthi attacks on Red Sea shipping, which had disrupted global trade routes since late 2023, continued into 2025. An international naval coalition attempted to protect shipping lanes, but the attacks persisted, forcing many cargo ships to take the longer route around the Cape of Good Hope.</p>
<h3 id="china-and-the-indo-pacific">China and the Indo-Pacific</h3>
<p>U.S.-China relations remained tense but managed. The Trump administration imposed additional tariffs on Chinese goods, expanded restrictions on technology exports to China (particularly in semiconductors and AI), and maintained a strong naval presence in the South China Sea. China responded with its own retaliatory tariffs and export controls on critical minerals.</p>
<p>Taiwan remained a flashpoint. China conducted military exercises near Taiwan, and the United States continued arms sales to the island. Cross-strait tensions were elevated but did not escalate to direct confrontation.</p>
<h3 id="other-international-events">Other International Events</h3>
<p>In South Korea, President Yoon Suk Yeol faced impeachment proceedings following his brief declaration of martial law in December 2024. The Constitutional Court upheld the impeachment in early 2025, making him the second South Korean president to be removed from office.</p>
<p>Canada held elections in 2025 following the resignation of Prime Minister Justin Trudeau in January, who stepped down amid declining poll numbers and intra-party pressure. Mark Carney became the new Liberal Party leader and then Prime Minister, though he faced a challenging political environment with tariff disputes with the United States dominating the agenda.</p>
<h2 id="part-3-economy-and-finance">Part 3: Economy and Finance</h2>
<h3 id="inflation-and-interest-rates">Inflation and Interest Rates</h3>
<p>The Federal Reserve navigated a complex economic environment in 2025. After cutting rates in the second half of 2024, the Fed paused further cuts in early 2025 as inflation proved persistent. Core inflation remained above the Fed's 2% target for most of the year, influenced by tariff-related price increases on imported goods.</p>
<p>The economy showed resilience in employment numbers, with unemployment remaining low by historical standards. However, consumers reported feeling squeezed by high housing costs, elevated food prices, and the cumulative impact of several years of above-target inflation.</p>
<h3 id="tariffs-and-trade">Tariffs and Trade</h3>
<p>The Trump administration's tariff policies were among the most consequential economic developments of 2025. Tariffs were imposed or increased on goods from China, Canada, Mexico, and the European Union. The stated goals were to protect American manufacturing, reduce trade deficits, and pressure trading partners on various policy issues including immigration and fentanyl trafficking.</p>
<p>The economic effects were debated intensely. Some domestic manufacturers reported benefits from reduced foreign competition. Importers, retailers, and consumers faced higher prices. Agricultural exporters were concerned about retaliatory tariffs affecting their overseas sales. Financial markets reacted with volatility to each tariff announcement and escalation.</p>
<h3 id="technology-sector">Technology Sector</h3>
<p>The technology sector experienced a mixed year. Companies heavily invested in artificial intelligence saw their valuations soar. Nvidia's stock continued its extraordinary run as demand for AI training and inference chips remained insatiable. Microsoft, Google, Amazon, and Meta all reported massive capital expenditure plans for AI infrastructure.</p>
<p>However, the broader tech sector also faced challenges. Layoffs continued at many companies as they restructured around AI capabilities. The advertising market was disrupted by AI-powered tools that changed how content was created and consumed. Regulatory scrutiny of big tech companies continued, with antitrust cases against Google and other companies progressing through the courts.</p>
<h3 id="cryptocurrency">Cryptocurrency</h3>
<p>Cryptocurrency markets rallied significantly in 2025. Bitcoin reached new all-time highs, buoyed by the spot Bitcoin ETFs approved in 2024, institutional adoption, and a generally favorable regulatory stance from the Trump administration. The administration appointed crypto-friendly regulators and signaled support for making the United States a hub for digital asset innovation.</p>
<h2 id="part-4-technology">Part 4: Technology</h2>
<h3 id="artificial-intelligence">Artificial Intelligence</h3>
<p>AI was unquestionably the dominant technology story of 2025, even more so than in the preceding two years.</p>
<p>OpenAI released new models throughout the year, including GPT-4.5 and eventually GPT-5, continuing to push the frontier of language model capabilities. The models demonstrated improved reasoning, reduced hallucination rates, and expanded multimodal capabilities.</p>
<p>Anthropic released Claude 3.7 Sonnet early in the year and later the Claude 4 family, which were noted for their improved instruction following, coding abilities, and safety properties. The company continued to emphasize responsible AI development.</p>
<p>Google DeepMind advanced Gemini with new versions that competed directly with the leading models from OpenAI and Anthropic. Google integrated Gemini deeply into its product suite including Search, Workspace, and Android.</p>
<p>Meta continued its open-source AI strategy, following the Llama 3 series with Llama 4, making powerful AI models freely available to researchers and developers worldwide.</p>
<p>Perhaps the biggest surprise came from DeepSeek, a Chinese AI lab that released models rivaling Western counterparts while reportedly using significantly fewer computational resources and at a fraction of the cost. DeepSeek's R1 reasoning model and its V3 language model demonstrated that the American lead in AI was not as insurmountable as many had assumed. The release sent shockwaves through the AI industry and temporarily rattled the stock prices of AI infrastructure companies.</p>
<p>AI coding assistants became standard developer tools. GitHub Copilot, Cursor, and other tools moved from novelty to essential infrastructure for software development. By mid-2025, surveys showed a majority of professional developers used AI assistance daily.</p>
<p>AI-generated content became ubiquitous. Image generation, video generation, and voice synthesis all improved dramatically. This created both exciting creative possibilities and serious concerns about misinformation, deepfakes, and the economic impact on creative professionals.</p>
<h3 id="space-exploration">Space Exploration</h3>
<p>SpaceX continued to push the boundaries of space technology. Starship, the largest and most powerful rocket ever built, flew multiple test flights in 2025 with mixed results, including in-flight failures as well as successful launches and booster catches. Even so, the rapid iteration pace was remarkable compared to traditional aerospace development timelines.</p>
<p>NASA's Artemis program progressed toward its goal of returning humans to the Moon. Artemis II, the crewed lunar flyby mission, was in advanced preparation.</p>
<p>Blue Origin's New Glenn rocket successfully reached orbit in 2025, giving SpaceX its first serious commercial competition in the heavy-lift launch market.</p>
<p>The commercial space station market grew as the International Space Station approached its planned retirement timeline. Multiple companies developed proposals for private orbital habitats.</p>
<h3 id="consumer-technology">Consumer Technology</h3>
<p>Apple released the iPhone 17 lineup in September 2025, featuring significant AI integration and camera improvements. The Apple Vision Pro, released in February 2024, received an updated model and expanded to more countries, though mass adoption remained limited by the high price point and limited app ecosystem.</p>
<p>The electric vehicle market continued to grow globally, though the pace of adoption varied by region. Tesla maintained its market leadership but faced increasing competition from Chinese manufacturers like BYD, which surpassed Tesla in total vehicle sales including hybrids.</p>
<p>The foldable phone market expanded with Samsung, Google, and other manufacturers releasing refined models. The form factor moved from novelty to a viable mainstream option.</p>
<h3 id="cybersecurity">Cybersecurity</h3>
<p>Major cybersecurity incidents continued to make headlines. Critical infrastructure attacks, ransomware campaigns against healthcare systems, and state-sponsored espionage operations all occurred. The increasing sophistication of AI-powered attacks raised alarms, as did the potential for AI to be used in creating more convincing phishing campaigns and social engineering attacks.</p>
<h2 id="part-5-science-and-health">Part 5: Science and Health</h2>
<h3 id="climate-and-environment">Climate and Environment</h3>
<p>2025 continued the trend of record-breaking global temperatures. Scientists reported that multiple climate indicators reached new extremes. Severe weather events including hurricanes, floods, droughts, and wildfires affected communities worldwide.</p>
<p>The California wildfires in January 2025, particularly the devastating Palisades and Eaton fires in the Los Angeles area, were among the most destructive in the state's history, destroying thousands of structures and causing billions of dollars in damage.</p>
<h3 id="medicine-and-public-health">Medicine and Public Health</h3>
<p>The post-pandemic era continued to evolve. COVID-19 remained endemic but was no longer a public health emergency. Updated vaccines were available but uptake varied widely. Long COVID continued to be studied, with researchers making progress in understanding its mechanisms.</p>
<p>GLP-1 receptor agonist medications, particularly Ozempic and related drugs originally developed for diabetes, continued their remarkable expansion. New studies throughout 2025 suggested benefits beyond weight loss, including potential cardiovascular benefits, and the drugs became some of the most prescribed medications in history.</p>
<p>Bird flu (H5N1) was a concern throughout 2025, with sporadic human cases reported, primarily among workers in close contact with infected poultry and dairy cattle. Public health agencies monitored the situation closely, concerned about the virus's pandemic potential if it gained efficient human-to-human transmission.</p>
<h3 id="physics-and-astronomy">Physics and Astronomy</h3>
<p>Researchers continued to refine quantum computing technology, though practical quantum advantage for real-world problems remained elusive for most applications. Several companies and universities reported advances in qubit counts and error correction.</p>
<p>The James Webb Space Telescope continued to produce extraordinary astronomical observations, revolutionizing understanding of early galaxy formation, exoplanet atmospheres, and stellar evolution.</p>
<h2 id="part-6-culture-and-society">Part 6: Culture and Society</h2>
<h3 id="entertainment">Entertainment</h3>
<p>The entertainment industry continued to adapt to streaming economics. The strikes that had shut down Hollywood in 2023 resulted in new contracts, but the industry faced ongoing structural changes as studios grappled with the economics of streaming versus theatrical releases.</p>
<p>Video gaming remained the largest entertainment industry by revenue, with continued growth in mobile gaming, live-service games, and the integration of AI into game development.</p>
<h3 id="sports">Sports</h3>
<p>Major sporting events in 2025 included preparation for the 2026 FIFA World Cup to be held across the United States, Canada, and Mexico. Qualification rounds and venue preparations were major stories throughout the year.</p>
<p>In American football, the NFL maintained its position as the most-watched sport in the country.</p>
<h3 id="social-and-cultural-shifts">Social and Cultural Shifts</h3>
<p>The debate over AI's impact on employment and creativity intensified. Artists, writers, musicians, and other creative professionals pushed back against AI systems trained on their work without permission or compensation. Several lawsuits progressing through courts in 2025 sought to define the legal boundaries of AI training data usage.</p>
<p>Social media continued to fragment, with users spread across more platforms than ever. X (formerly Twitter) continued to evolve under Elon Musk's ownership. Bluesky, Threads, and Mastodon attracted users looking for alternatives. TikTok's uncertain future in the United States added to the sense of instability.</p>
<h2 id="part-7-natural-disasters">Part 7: Natural Disasters</h2>
<h3 id="california-wildfires">California Wildfires</h3>
<p>As mentioned above, the January 2025 wildfires in the Los Angeles area were catastrophic. The Palisades Fire and Eaton Fire burned through densely populated areas, destroying entire neighborhoods. The fires were fueled by extreme Santa Ana winds and dry conditions. The recovery and rebuilding effort would take years.</p>
<h3 id="other-disasters">Other Disasters</h3>
<p>Severe weather events occurred worldwide throughout the year. Flooding, hurricanes, and heat waves affected millions of people across multiple continents, reinforcing the urgent need for climate adaptation infrastructure.</p>
<h2 id="conclusion">Conclusion</h2>
<p>The year 2025 was defined by change, upheaval, and acceleration. American politics shifted dramatically with the new administration. AI transformed from an impressive technology to an essential infrastructure layer. Geopolitical conflicts persisted without resolution. The economy navigated tariffs, persistent inflation, and technological disruption simultaneously.</p>
<p>As we look back from early 2026, the full consequences of many 2025 developments are still unfolding. The tariff regime's long-term economic effects, the AI revolution's impact on employment and creativity, and the geopolitical realignments set in motion by changing American foreign policy will all continue to shape the world for years to come.</p>
<p>What is clear is that 2025 was not a year of quiet incremental change. It was a year that bent the trajectory of history in multiple directions at once.</p>
]]></content:encoded>
      <category>retrospective</category>
      <category>politics</category>
      <category>technology</category>
      <category>economics</category>
      <category>science</category>
      <category>culture</category>
      <category>2025</category>
    </item>
    <item>
      <title>Good morning!</title>
      <link>https://observermagazine.github.io/blog/good-morning</link>
      <description>In which I say Good morning to you</description>
      <pubDate>Sun, 22 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/good-morning</guid>
      <author>kushaldeveloper@gmail.com (kushal)</author>
      <content:encoded><![CDATA[<h2 id="good-morning">Good morning</h2>
<p>It is almost eleven in the morning eastern time as I type this.
Hope you are doing well.</p>
]]></content:encoded>
      <category>introductions</category>
    </item>
    <item>
      <title>From .NET Framework 4.7 to .NET 10: A Practical Guide for Enterprise Developers</title>
      <link>https://observermagazine.github.io/blog/modernizing-to-dotnet-10</link>
      <description>A comprehensive guide for enterprise .NET developers who have been working with .NET Framework 4.7 and want to understand what has changed, why it matters, and how to modernize — written for people who code at work and do not tinker with software at home.</description>
      <pubDate>Sun, 22 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/modernizing-to-dotnet-10</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>This article is written specifically for you: the professional .NET developer who works with enterprise software built on .NET Framework 4.7 (or thereabouts), goes home at the end of the day, and does not spend evenings experimenting with the latest frameworks. You have a life. You have responsibilities. Your relationship with software is professional, not recreational. And now someone at your company is talking about migrating to .NET 10, and you want to understand what that actually means without wading through years of release notes.</p>
<p>Let me be direct: the .NET ecosystem has changed more between .NET Framework 4.7 and .NET 10 than it changed in the entire decade before that. But the changes are overwhelmingly positive, and this guide will walk you through every major shift in plain, practical language.</p>
<h2 id="part-1-what-even-is.net-10">Part 1: What Even Is .NET 10?</h2>
<h3 id="the-great-rename">The Great Rename</h3>
<p>The single most confusing thing that happened while you were building enterprise software is that Microsoft renamed everything.</p>
<p>Here is the timeline: .NET Framework 1.0 through 4.8 was the original runtime you know and love. It runs on Windows only. It is in maintenance mode — Microsoft still patches security issues, but no new features are being developed for it. Period.</p>
<p>Starting in 2016, Microsoft built a completely new, cross-platform, open-source runtime called .NET Core. It started at version 1.0 and went up to 3.1. Then, to reduce confusion (which, ironically, increased confusion), they dropped the &quot;Core&quot; suffix and jumped the version number to 5, calling it simply &quot;.NET 5.&quot; This was followed by .NET 6, 7, 8, 9, and now .NET 10.</p>
<p>So when someone says &quot;.NET 10,&quot; they mean the direct successor to .NET Core, not a new version of .NET Framework. It runs on Windows, macOS, and Linux. It is completely open-source. And it is the future of the platform.</p>
<p>.NET 10 is a Long-Term Support (LTS) release, meaning Microsoft will support it with patches and security updates for three years. This matters in enterprise contexts where you need stability guarantees.</p>
<h3 id="what-happened-to.net-framework-4.7">What Happened to .NET Framework 4.7?</h3>
<p>Your existing .NET Framework 4.7 applications will continue to run on Windows. Microsoft has not removed .NET Framework from Windows and has committed to including it in Windows for the foreseeable future. But it will never get new features. No performance improvements. No new language features. No new APIs. It is done.</p>
<p>This does not mean you need to panic. It means you need a plan.</p>
<h2 id="part-2-what-changed-and-why-you-should-care">Part 2: What Changed and Why You Should Care</h2>
<h3 id="c-has-evolved-enormously">C# Has Evolved Enormously</h3>
<p>If your last experience with C# was version 7 (which shipped with .NET Framework 4.7), you have missed C# versions 8, 9, 10, 11, 12, 13, and 14. Each added features that make code shorter, safer, and more readable.</p>
<p>A few highlights that matter most in enterprise code:</p>
<p><strong>Nullable reference types</strong> (C# 8): The compiler now tracks whether a reference variable can be null and warns you about potential null dereference bugs at compile time. This alone prevents an enormous category of runtime NullReferenceException crashes. Enabling this feature in your project is one of the highest-value changes you can make.</p>
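<p>To make this concrete, here is a small sketch (the <code>Customer</code> type and its properties are invented for illustration) of what the compiler tracks once the feature is enabled:</p>
<pre><code class="language-csharp">#nullable enable

public class Customer
{
    public string Name { get; set; } = string.Empty; // non-nullable: must hold a value
    public string? Nickname { get; set; }            // nullable: null is expected and allowed
}

public static class Greeter
{
    public static string Greet(Customer customer)
    {
        // Dereferencing Nickname unconditionally would produce warning CS8602.
        // The compiler is satisfied once the access is guarded:
        return customer.Nickname is not null
            ? customer.Nickname.ToUpper()
            : customer.Name;
    }
}
</code></pre>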
<p><strong>Records</strong> (C# 9): Immutable data classes can now be declared in a single line. Instead of writing a class with properties, a constructor, Equals, GetHashCode, and ToString overrides (which you probably were not writing correctly anyway), you write <code>public record Person(string Name, int Age);</code> and the compiler generates all of that for you. This is transformative for DTOs and value objects in enterprise code.</p>
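<p>A quick sketch (the <code>Person</code> record is hypothetical) of how much behavior that one line buys you:</p>
<pre><code class="language-csharp">var a = new Person(&quot;Ada&quot;, 36);
var b = new Person(&quot;Ada&quot;, 36);

Console.WriteLine(a == b); // True: records compare by value, not by reference
Console.WriteLine(a);      // Person { Name = Ada, Age = 36 }

// &quot;with&quot; expressions produce a modified copy; the original is untouched.
var older = a with { Age = 37 };

// One line replaces a class, a constructor, Equals, GetHashCode, and ToString.
public record Person(string Name, int Age);
</code></pre>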
<p><strong>Pattern matching</strong> (C# 8-14): Switch expressions and <code>is</code> patterns can now match on types, property values, relational conditions, and combinations thereof. This makes complex business rule evaluation far more readable than chains of if/else statements.</p>
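<p>For example, a hypothetical shipping rule written as a switch expression (the <code>Order</code> record here is invented for the example):</p>
<pre><code class="language-csharp">public record Order(decimal Total, string Destination, double WeightKg);

public static class Shipping
{
    public static decimal Cost(Order order) =&gt; order switch
    {
        { Total: &gt;= 100m }                      =&gt; 0m,     // free over 100
        { Destination: &quot;US&quot;, WeightKg: &lt; 1.0 }  =&gt; 4.99m,
        { Destination: &quot;US&quot; }                   =&gt; 9.99m,
        _                                       =&gt; 19.99m
    };
}
</code></pre>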
<p><strong>Top-level statements</strong> (C# 9): A console application no longer needs a class with a <code>static void Main</code> method. The entry point is simply code at the top of a file. This is what you see in modern project templates and tutorials. It looks strange at first but is perfectly normal and fully supported.</p>
<p><strong>Raw string literals</strong> (C# 11): No more escaping quotes in SQL queries and JSON templates. Triple-quoted strings handle multi-line text and embedded quotes without escape characters.</p>
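<p>For example, a template that would previously require heavy escaping (the content is illustrative):</p>
<pre><code class="language-csharp">string template = &quot;&quot;&quot;
    {
      &quot;query&quot;: &quot;SELECT * FROM Orders WHERE Status = 'Pending'&quot;,
      &quot;logPath&quot;: &quot;C:\Logs\orders.log&quot;
    }
    &quot;&quot;&quot;;
</code></pre>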
<p><strong>Primary constructors</strong> (C# 12): Classes can now declare constructor parameters directly in the class declaration, eliminating boilerplate field assignments.</p>
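<p>A before-and-after sketch (the <code>IOrderRepository</code> interface and <code>Order</code> record are hypothetical):</p>
<pre><code class="language-csharp">public record Order(int Id);
public interface IOrderRepository { Task SaveAsync(Order order); }

// C# 11 and earlier: field plus constructor boilerplate
public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) =&gt; _repository = repository;
}

// C# 12: the parameter is captured directly by the class
public class OrderServiceV2(IOrderRepository repository)
{
    public Task SaveAsync(Order order) =&gt; repository.SaveAsync(order);
}
</code></pre>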
<h3 id="asp.net-has-been-rewritten">ASP.NET Has Been Rewritten</h3>
<p>ASP.NET in .NET 10 is not an update of the ASP.NET you know. It was rewritten from scratch as ASP.NET Core. The web server is no longer IIS (though IIS can act as a reverse proxy). The default web server is Kestrel, a lightweight, high-performance, cross-platform HTTP server.</p>
<p>The programming model has changed significantly. There is no more <code>Global.asax</code>. There is no more <code>Web.config</code> for application settings (you use <code>appsettings.json</code>). The request pipeline is built with middleware rather than HTTP modules and handlers. Dependency injection is built into the framework rather than bolted on with third-party containers.</p>
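<p>As a small sketch of the middleware model, each component wraps the next one in the chain (assuming <code>app</code> is a built <code>WebApplication</code>):</p>
<pre><code class="language-csharp">app.Use(async (context, next) =&gt;
{
    // runs before the rest of the pipeline
    var started = Stopwatch.GetTimestamp();

    await next(context);

    // runs after: the response is on its way back out
    var elapsed = Stopwatch.GetElapsedTime(started);
    Console.WriteLine($&quot;{context.Request.Path} took {elapsed.TotalMilliseconds:F1} ms&quot;);
});
</code></pre>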
<p>The performance difference is staggering. Benchmarks consistently show ASP.NET Core handling 5 to 10 times more requests per second than classic ASP.NET on the same hardware, while using less memory. For enterprise applications processing thousands of concurrent requests, this translates directly to lower infrastructure costs.</p>
<h3 id="blazor-c-in-the-browser">Blazor: C# in the Browser</h3>
<p>One of the most significant new capabilities in modern .NET is Blazor, which lets you build interactive web UIs using C# instead of JavaScript. There are multiple hosting models:</p>
<p><strong>Blazor WebAssembly</strong> compiles your .NET code to WebAssembly and runs it entirely in the browser. No server needed at runtime. The compiled output is static files (HTML, CSS, JS, WASM) that can be hosted anywhere, including free hosting like GitHub Pages. This is what Observer Magazine itself is built with.</p>
<p><strong>Blazor Server</strong> keeps your .NET code on the server and uses SignalR (WebSockets) to maintain a real-time connection with the browser. Every UI interaction sends a message to the server, which processes it and sends back DOM updates. This means faster initial load times (no WASM download) but requires a persistent server connection.</p>
<p><strong>Blazor United</strong> (also called Blazor Web App) in .NET 8 and later combines both models. Pages can start with server-side rendering for instant load times and then switch to WebAssembly for offline capability. In .NET 10, this hybrid model is mature and well-tooled.</p>
<p>For enterprise developers, Blazor means your existing C# skills transfer directly to web development. Your business logic, validation rules, and data models can be shared between server and client. Your team does not need to hire JavaScript specialists or maintain a separate frontend codebase.</p>
<h3 id="entity-framework-core">Entity Framework Core</h3>
<p>Entity Framework has also been rewritten as Entity Framework Core (EF Core). It is faster, supports more databases (SQL Server, PostgreSQL, SQLite, MySQL, and more), and has a cleaner API. However, it is not a drop-in replacement for EF6. The API surface is different enough that migration requires code changes.</p>
<p>EF Core 10 includes features like compiled models for faster startup, improved query translation, bulk operations, and excellent support for JSON columns. For enterprise applications with complex data access patterns, EF Core represents a significant improvement in both performance and developer experience.</p>
<h3 id="native-aot-compilation">Native AOT Compilation</h3>
<p>Perhaps the most revolutionary technical advancement in .NET 10 is Native Ahead-of-Time (AOT) compilation. Traditional .NET applications ship as Intermediate Language (IL) and are compiled to machine code at runtime by the Just-In-Time (JIT) compiler. Native AOT compiles your entire application to a native binary at publish time. The result is an executable that starts in milliseconds instead of seconds, uses significantly less memory, and does not require the .NET runtime to be installed.</p>
<p>For enterprise scenarios, Native AOT is particularly valuable for microservices and serverless functions where cold start time directly affects user experience and cost.</p>
<h2 id="part-3-the-modern.net-ecosystem">Part 3: The Modern .NET Ecosystem</h2>
<h3 id="modern-project-files">Modern Project Files</h3>
<p>If you open a modern .NET project file, you might not recognize it. The old verbose .csproj format with hundreds of lines of XML has been replaced by the SDK-style project format, which typically has fewer than 20 lines. The build system is smarter about discovering source files, so you no longer need to list every .cs file in the project file.</p>
<p>The solution file format has also been modernized. The new SLNX format uses clean XML instead of the old proprietary binary format, making it friendly to Git merges and human reading.</p>
<p>Central Package Management (Directory.Packages.props) lets you define NuGet package versions in a single file at the root of your repository, eliminating version drift across projects in a large solution.</p>
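<p>A minimal sketch (the package name and version are placeholders):</p>
<pre><code class="language-xml">&lt;!-- Directory.Packages.props at the repository root --&gt;
&lt;Project&gt;
  &lt;PropertyGroup&gt;
    &lt;ManagePackageVersionsCentrally&gt;true&lt;/ManagePackageVersionsCentrally&gt;
  &lt;/PropertyGroup&gt;
  &lt;ItemGroup&gt;
    &lt;PackageVersion Include=&quot;Serilog&quot; Version=&quot;4.0.0&quot; /&gt;
  &lt;/ItemGroup&gt;
&lt;/Project&gt;

&lt;!-- Individual .csproj files then reference without a version --&gt;
&lt;ItemGroup&gt;
  &lt;PackageReference Include=&quot;Serilog&quot; /&gt;
&lt;/ItemGroup&gt;
</code></pre>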
<p>Directory.Build.props lets you set common build properties (target framework, nullable reference types, warning levels) for all projects in a repository from one file.</p>
<h3 id="modern-tooling">Modern Tooling</h3>
<p>The <code>dotnet</code> CLI is now the primary way to create, build, test, and publish .NET applications. You can do everything from the command line: <code>dotnet new</code>, <code>dotnet build</code>, <code>dotnet test</code>, <code>dotnet publish</code>. Visual Studio remains fully supported and is still the preferred IDE for many enterprise developers, but you are no longer tied to it.</p>
<p>JetBrains Rider has become a popular cross-platform alternative to Visual Studio. VS Code with the C# Dev Kit extension is viable for lighter-weight development.</p>
<p>Hot Reload lets you modify code while the application is running and see changes immediately without restarting. This dramatically improves the inner development loop for UI work.</p>
<h3 id="testing-in-modern.net">Testing in Modern .NET</h3>
<p>The testing ecosystem has matured significantly. xUnit (now at version 3) is the most popular testing framework. bUnit enables unit testing of Blazor components without a browser. The dotnet test runner integrates cleanly with CI/CD pipelines.</p>
<p>In the enterprise context, the built-in dependency injection and interface-based design of ASP.NET Core make applications far more testable than classic ASP.NET applications. You can write integration tests that spin up an in-memory web server and send real HTTP requests to your API without deploying anything.</p>
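<p>A sketch of such a test using xUnit and the <code>Microsoft.AspNetCore.Mvc.Testing</code> package; <code>Program</code> is the application entry point and <code>/api/orders</code> is a hypothetical endpoint:</p>
<pre><code class="language-csharp">public class OrdersApiTests : IClassFixture&lt;WebApplicationFactory&lt;Program&gt;&gt;
{
    private readonly HttpClient _client;

    public OrdersApiTests(WebApplicationFactory&lt;Program&gt; factory)
        =&gt; _client = factory.CreateClient();

    [Fact]
    public async Task GetOrders_ReturnsSuccess()
    {
        var response = await _client.GetAsync(&quot;/api/orders&quot;);
        response.EnsureSuccessStatusCode();
    }
}
</code></pre>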
<h2 id="part-4-the-broader-technology-landscape-in-2025-2026">Part 4: The Broader Technology Landscape in 2025-2026</h2>
<h3 id="ai-is-everywhere">AI Is Everywhere</h3>
<p>You cannot discuss the current technology landscape without addressing AI. Large language models like GPT-4, Claude, and Gemini have transformed software development workflows. AI coding assistants are now standard tooling, not novelties. In your daily work, this means you will increasingly use AI to help write code, debug issues, write documentation, and review pull requests.</p>
<p>For .NET developers specifically, AI integration is straightforward. The Microsoft.Extensions.AI libraries provide standardized interfaces for connecting to AI services from .NET code. Whether you are building an internal tool that uses AI to summarize documents, a customer-facing chatbot, or an application that uses AI for data analysis, the .NET ecosystem has mature support.</p>
<h3 id="cloud-native-is-the-default">Cloud-Native Is the Default</h3>
<p>Modern enterprise software is increasingly designed to run in containers on Kubernetes or similar orchestrators. .NET 10 has excellent container support, with tiny container images (especially with Native AOT) and built-in health check endpoints that integrate with Kubernetes liveness and readiness probes.</p>
<p>Even if your current applications run on dedicated servers or VMs, understanding containers is important because it is where the industry is heading. The good news is that containerizing a .NET application is straightforward and often requires only adding a Dockerfile.</p>
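<p>A minimal multi-stage Dockerfile sketch for an ASP.NET Core app (image tags assume .NET 10; <code>MyApp.dll</code> is a placeholder):</p>
<pre><code class="language-dockerfile">FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:10.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT [&quot;dotnet&quot;, &quot;MyApp.dll&quot;]
</code></pre>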
<h3 id="open-source-is-the-norm">Open Source Is the Norm</h3>
<p>.NET itself is fully open-source under the MIT license. The entire runtime, compiler, libraries, and most of the ASP.NET framework are developed in the open on GitHub. This is a dramatic shift from the proprietary, Windows-only .NET Framework era.</p>
<p>For enterprise developers, this means you can read the source code of the framework itself when debugging issues. You can file issues and even contribute fixes. And you can be confident that the platform will not be abandoned because the community can maintain it independently if necessary.</p>
<h2 id="part-5-how-to-approach-migration">Part 5: How to Approach Migration</h2>
<h3 id="do-not-boil-the-ocean">Do Not Boil the Ocean</h3>
<p>The most important advice for migrating from .NET Framework 4.7 to .NET 10 is: do not try to migrate everything at once. Start with a new microservice or a smaller, less critical application. Build your team's familiarity with the new platform on a project where the stakes are lower.</p>
<h3 id="use-the.net-upgrade-assistant">Use the .NET Upgrade Assistant</h3>
<p>Microsoft provides a tool called the .NET Upgrade Assistant that automates much of the mechanical migration work. It can update project files, convert Web.config settings to appsettings.json, update NuGet package references, and flag code that uses APIs not available in modern .NET. It is not perfect, but it handles the tedious parts so your team can focus on the genuinely complex migration decisions.</p>
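<p>The Upgrade Assistant ships as a <code>dotnet</code> global tool; a typical session might look like this (the project path is illustrative):</p>
<pre><code class="language-bash">dotnet tool install -g upgrade-assistant
upgrade-assistant analyze ./LegacyApp/LegacyApp.csproj
upgrade-assistant upgrade ./LegacyApp/LegacyApp.csproj
</code></pre>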
<h3 id="identify-breaking-changes-early">Identify Breaking Changes Early</h3>
<p>Some .NET Framework APIs do not exist in modern .NET. The most common pain points are Windows-specific APIs (like System.Drawing on Linux), some WCF service features (replaced by gRPC or REST), and certain AppDomain behaviors. The .NET Portability Analyzer tool can scan your existing code and generate a report of compatibility issues.</p>
<h3 id="plan-for-nuget-package-updates">Plan for NuGet Package Updates</h3>
<p>Many NuGet packages have different versions for .NET Framework and modern .NET. Some packages you depend on may not have been updated at all. Audit your dependencies early and identify any that need replacements.</p>
<h3 id="embrace-the-new-patterns-gradually">Embrace the New Patterns Gradually</h3>
<p>You do not need to rewrite your application to use minimal APIs, top-level statements, and every new C# feature on day one. Modern .NET supports the controller-based MVC pattern you are familiar with. Start with a project structure that feels comfortable, then adopt new patterns as your team gains confidence.</p>
<h2 id="part-6-why-this-is-worth-doing">Part 6: Why This Is Worth Doing</h2>
<p>If you have read this far, you might be wondering whether this migration is worth the effort and risk. Here is the honest answer: yes, unequivocally.</p>
<p><strong>Performance</strong>: Your applications will run faster and use less memory. In enterprise contexts with thousands of users, this translates to real cost savings on infrastructure.</p>
<p><strong>Security</strong>: .NET Framework 4.7 receives only critical security patches. Modern .NET receives active security development with new features like built-in rate limiting, improved cryptography, and regularly updated TLS support.</p>
<p><strong>Developer productivity</strong>: Modern C# features, better tooling, and built-in dependency injection make developers measurably more productive. Code reviews go faster because the code is more readable. Bugs are caught earlier because the compiler is smarter.</p>
<p><strong>Hiring</strong>: New .NET developers coming out of bootcamps and university programs learn modern .NET. Requiring .NET Framework experience narrows your hiring pool to increasingly senior developers.</p>
<p><strong>Cross-platform</strong>: Your applications can run on Linux servers (which are cheaper to operate than Windows Server) and in lightweight containers. You are no longer locked into Windows Server licensing.</p>
<p><strong>Ecosystem momentum</strong>: All new .NET libraries, frameworks, and tools target modern .NET. Staying on .NET Framework means an increasingly stale dependency graph.</p>
<h2 id="conclusion">Conclusion</h2>
<p>The jump from .NET Framework 4.7 to .NET 10 is large. There is no sugarcoating that. But every piece of the puzzle — the language improvements, the performance gains, the cross-platform support, the modern tooling, the open-source ecosystem — represents a genuine improvement in your ability to build and maintain quality enterprise software.</p>
<p>You do not need to make this jump in a weekend. You do not need to rewrite everything. But you do need to start. Pick a small project. Install the .NET 10 SDK. Create a new application with <code>dotnet new webapi</code>. Run it. Explore. And when you are ready, use the Upgrade Assistant on something real.</p>
<p>The .NET platform has never been in a better position than it is today. The same C# skills that have served you well for years still apply — they just apply to a faster, more capable, more modern foundation.</p>
<p>Welcome to the future of .NET. It has been waiting for you.</p>
]]></content:encoded>
      <category>dotnet</category>
      <category>blazor</category>
      <category>aspnet</category>
      <category>enterprise</category>
      <category>migration</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The ASP.NET Request Lifecycle: Why Cold Starts Are Slow and How .NET 10 Changes Everything</title>
      <link>https://observermagazine.github.io/blog/aspnet-lifecycle-deep-dive</link>
      <description>A deep dive into the ASP.NET request lifecycle across both .NET Framework and modern .NET 10, explaining why cold starts have historically been slow, what you can do about it, and how Native AOT and other advances have fundamentally changed the equation.</description>
      <pubDate>Sat, 21 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/aspnet-lifecycle-deep-dive</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>If you have ever deployed an ASP.NET application and noticed that the very first request takes seconds — sometimes tens of seconds — while subsequent requests are blazing fast, you have experienced the infamous &quot;cold start&quot; problem. This post breaks down the entire ASP.NET request lifecycle, explains where that cold start time goes, and shows how modern .NET (up through .NET 10) has systematically attacked this problem from every angle.</p>
<h2 id="part-1-the-classic-asp.net-framework-request-lifecycle">Part 1: The Classic ASP.NET Framework Request Lifecycle</h2>
<p>To understand why cold starts are slow, you first need to understand what happens when a request arrives at an ASP.NET Framework application running on IIS.</p>
<h3 id="the-iis-pipeline">The IIS Pipeline</h3>
<p>When IIS receives an HTTP request, it goes through a series of stages before your code ever runs. In Integrated Pipeline Mode (the default since IIS 7), the request flows through a unified pipeline of native IIS modules and managed ASP.NET modules. The key stages are:</p>
<p><strong>BeginRequest</strong> is where the pipeline starts. IIS determines which application pool should handle the request and routes it accordingly. If the application pool's worker process (w3wp.exe) is not running — because the pool was recycled or the app was idle — IIS must spin up an entirely new process. This is the first major source of cold start latency.</p>
<p><strong>AuthenticateRequest and AuthorizeRequest</strong> handle identity and permissions. These stages load authentication modules (Windows Auth, Forms Auth, etc.) and can involve talking to Active Directory or a database.</p>
<p><strong>ResolveRequestCache</strong> checks whether a cached response exists. On a cold start the cache is empty, so this stage completes without contributing anything.</p>
<p><strong>MapRequestHandler</strong> determines which handler processes the request. For MVC, this involves the routing engine matching a URL pattern to a controller and action. For Web Forms, this maps to a .aspx page handler.</p>
<p><strong>ExecuteRequestHandler</strong> is where your actual application code runs — your controller action, your page lifecycle, your business logic. On a cold start, this is where the bulk of the delay happens because of JIT compilation and dependency initialization (more on this below).</p>
<p><strong>UpdateRequestCache</strong> stores the response for future cache hits.</p>
<p><strong>EndRequest</strong> performs cleanup and sends the response.</p>
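<p>For orientation, classic ASP.NET code hooks into these stages through <code>IHttpModule</code>, subscribing to events on <code>HttpApplication</code>. A sketch (the header name is invented; the module would be registered in Web.config):</p>
<pre><code class="language-csharp">public class TimingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (s, e) =&gt;
            app.Context.Items[&quot;sw&quot;] = Stopwatch.StartNew();

        app.EndRequest += (s, e) =&gt;
        {
            var sw = (Stopwatch)app.Context.Items[&quot;sw&quot;];
            app.Context.Response.AppendHeader(
                &quot;X-Elapsed-Ms&quot;, sw.ElapsedMilliseconds.ToString());
        };
    }

    public void Dispose() { }
}
</code></pre>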
<h3 id="the-asp.net-page-lifecycle-web-forms">The ASP.NET Page Lifecycle (Web Forms)</h3>
<p>If your application uses Web Forms, the ExecuteRequestHandler stage triggers a complex page lifecycle of its own: Init, LoadViewState, Load, PostBack event handling, PreRender, SaveViewState, Render, and Unload. Each of these stages can involve control tree construction, viewstate deserialization, and dynamic compilation of .aspx and .ascx files. On the first request, every page and user control must be compiled from markup into a .NET class, compiled to IL, and then JIT-compiled to native code. This is why a complex Web Forms application can take minutes on its very first request.</p>
<h3 id="the-asp.net-mvc-lifecycle">The ASP.NET MVC Lifecycle</h3>
<p>MVC applications are leaner but still go through significant work on cold start. Routing tables must be built from your RouteConfig (or attribute routes). Controller factories and dependency injection containers must be constructed. The Razor view engine must locate, parse, compile, and JIT-compile every .cshtml file the first time it is accessed. Area registrations, filter providers, model binders, and value providers all need initialization.</p>
<h2 id="part-2-why-is-the-cold-start-so-slow-in.net-framework">Part 2: Why Is the Cold Start So Slow in .NET Framework?</h2>
<p>The cold start slowness in classic .NET Framework comes from several compounding factors.</p>
<h3 id="jit-compilation">1. JIT Compilation</h3>
<p>.NET Framework applications ship as Intermediate Language (IL) bytecode. When a method is called for the first time, the CLR's Just-In-Time compiler translates it to native machine code. This happens method-by-method, on demand. On a cold start, virtually every method in your application's startup path must be JIT-compiled: your Global.asax, your DI container setup, your routing configuration, your first controller, your first Razor view, and every framework method those call into. For a large application with hundreds of types, this can take seconds of raw CPU time.</p>
<h3 id="assembly-loading">2. Assembly Loading</h3>
<p>The CLR must locate and load assemblies from disk. .NET Framework applications often have dozens of DLLs in their bin folder — your code, NuGet packages, framework libraries. Each DLL must be found on disk, read into memory, and have its metadata parsed. On a traditional spinning hard drive (still common in older server environments), this I/O alone can add hundreds of milliseconds. Even on SSDs, loading 50-100 assemblies sequentially adds up.</p>
<h3 id="iis-application-pool-recycling">3. IIS Application Pool Recycling</h3>
<p>By default, IIS recycles application pools every 1740 minutes (29 hours) and shuts them down after 20 minutes of inactivity. When a pool recycles, the next request must go through the entire cold start sequence again: process creation, CLR initialization, assembly loading, JIT compilation, and application initialization. This means users regularly experience cold starts, not just after deployments.</p>
<h3 id="dynamic-compilation-of-views">4. Dynamic Compilation of Views</h3>
<p>In ASP.NET MVC on .NET Framework, Razor views (.cshtml files) are compiled at runtime by default. The Razor engine reads the file from disk, parses it into C# code, compiles the generated C# to IL, and then the CLR JIT-compiles it to native code. For an application with hundreds of views, this cascade of disk reads, parsing, and compilation is brutally slow on first access.</p>
<h3 id="heavy-initialization-in-global.asax">5. Heavy Initialization in Global.asax</h3>
<p>Classic ASP.NET applications perform massive amounts of work in Application_Start: registering routes, configuring dependency injection, setting up Entity Framework models, loading configuration, initializing logging frameworks, building AutoMapper profiles, and more. All of this runs synchronously before the first request can be served. A complex enterprise application might spend 5-30 seconds in Application_Start alone.</p>
<h3 id="entity-framework-model-compilation">6. Entity Framework Model Compilation</h3>
<p>Entity Framework (especially versions 4 through 6) must build an in-memory model of your entire database schema the first time a DbContext is used. For large schemas with hundreds of tables and complex relationships, this model compilation can take several seconds. Combined with JIT compilation of EF's own code, the first database query often takes 10-50x longer than subsequent queries.</p>
<h2 id="part-3-mitigations-for.net-framework-cold-starts">Part 3: Mitigations for .NET Framework Cold Starts</h2>
<p>Developers have historically used several strategies to reduce cold start pain on .NET Framework.</p>
<h3 id="pre-compilation">Pre-compilation</h3>
<p>The <code>aspnet_compiler.exe</code> tool can pre-compile all views and pages at build time rather than at runtime. Combined with <code>aspnet_merge.exe</code> (which merges the resulting assemblies into a smaller number of DLLs), this eliminates runtime view compilation entirely. You can enable this in MSBuild with <code>/p:PrecompileBeforePublish=true /p:UseMerge=true</code>.</p>
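<p>Invoked directly, the precompiler takes a virtual path, the source directory, and a target directory (paths here are illustrative):</p>
<pre><code class="language-shell">aspnet_compiler -v / -p C:\Sites\MyApp C:\Sites\MyApp_Precompiled
</code></pre>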
<h3 id="ngen-native-image-generator">NGen (Native Image Generator)</h3>
<p>Running <code>ngen install</code> on your assemblies produces native images that bypass JIT compilation. The CLR loads the pre-compiled native code directly instead of JIT-compiling IL. However, NGen images are machine-specific, fragile (they're invalidated when dependencies change), and don't benefit from runtime profile-guided optimization. Still, for cold starts, NGen can reduce startup time by 30-60%.</p>
<h3 id="iis-application-initialization-module">IIS Application Initialization Module</h3>
<p>The IIS Application Initialization module (available since IIS 8) sends a synthetic request to your application immediately when the app pool starts, rather than waiting for the first real user request. Combined with the &quot;AlwaysRunning&quot; start mode for the application pool, this ensures the cold start happens in the background before any user is affected.</p>
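<p>The relevant applicationHost.config fragments look roughly like this (site and pool names are placeholders):</p>
<pre><code class="language-xml">&lt;applicationPools&gt;
  &lt;add name=&quot;MyAppPool&quot; startMode=&quot;AlwaysRunning&quot; /&gt;
&lt;/applicationPools&gt;

&lt;sites&gt;
  &lt;site name=&quot;MyApp&quot;&gt;
    &lt;application path=&quot;/&quot; preloadEnabled=&quot;true&quot; applicationPool=&quot;MyAppPool&quot; /&gt;
  &lt;/site&gt;
&lt;/sites&gt;
</code></pre>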
<h3 id="reducing-idle-timeout-and-recycling-frequency">Reducing Idle Timeout and Recycling Frequency</h3>
<p>Setting the IIS idle timeout to 0 (never timeout) and extending or disabling periodic recycling prevents the application from shutting down between requests. This trades memory for availability.</p>
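<p>With appcmd, for example (the pool name is a placeholder):</p>
<pre><code class="language-shell">%windir%\system32\inetsrv\appcmd set apppool &quot;MyAppPool&quot; /processModel.idleTimeout:00:00:00
%windir%\system32\inetsrv\appcmd set apppool &quot;MyAppPool&quot; /recycling.periodicRestart.time:00:00:00
</code></pre>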
<h3 id="warm-up-scripts">Warm-up Scripts</h3>
<p>Many teams write HTTP health-check scripts that hit key endpoints after deployment, forcing JIT compilation and cache population before real traffic arrives. This is a brute-force approach but effective.</p>
<h3 id="pre-building-singletons">Pre-building Singletons</h3>
<p>Instead of lazily constructing singletons during the first request, you can eagerly resolve all registered singleton services during startup. This front-loads the DI container work so the first real request does not pay the price.</p>
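<p>A sketch of the eager-resolution approach, assuming <code>services</code> is your <code>IServiceCollection</code> and you capture the registered types before building the provider:</p>
<pre><code class="language-csharp">var singletonTypes = services
    .Where(d =&gt; d.Lifetime == ServiceLifetime.Singleton
                &amp;&amp; !d.ServiceType.IsGenericTypeDefinition)
    .Select(d =&gt; d.ServiceType)
    .Distinct()
    .ToList();

var provider = services.BuildServiceProvider();
foreach (var type in singletonTypes)
    provider.GetService(type);  // constructed now, not during the first request
</code></pre>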
<h2 id="part-4-the-modern.net-lifecycle.net-6-through.net-10">Part 4: The Modern .NET Lifecycle (.NET 6 through .NET 10)</h2>
<p>Modern .NET (the cross-platform runtime, not .NET Framework) has fundamentally restructured the application lifecycle. Understanding the differences helps explain why cold starts are dramatically better.</p>
<h3 id="the-minimal-hosting-model">The Minimal Hosting Model</h3>
<p>Starting with .NET 6 and refined through .NET 10, the application entry point is a simple <code>Program.cs</code> with a <code>WebApplicationBuilder</code>. There is no more Global.asax, no Startup class split into ConfigureServices and Configure, no complex lifecycle of OWIN middleware registration. The pipeline is built declaratively:</p>
<pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();

var app = builder.Build();
app.UseRouting();
app.MapRazorPages();
app.Run();
</code></pre>
<p>This minimal model does less work at startup because the framework itself is more modular. You only pay for what you use.</p>
<h3 id="kestrel-instead-of-iis">Kestrel Instead of IIS</h3>
<p>Modern ASP.NET Core applications run on Kestrel, a lightweight, cross-platform HTTP server written from scratch for performance. Kestrel does not have IIS's application pool recycling behavior, idle timeouts, or heavy process management overhead. When deployed behind a reverse proxy (NGINX, YARP, or even IIS as a reverse proxy via ANCM), the application process stays alive continuously.</p>
<h3 id="razor-view-compilation-at-build-time">Razor View Compilation at Build Time</h3>
<p>Since .NET Core 3.0, Razor views and pages are compiled at build time by default. The <code>Microsoft.NET.Sdk.Razor</code> SDK compiles .cshtml files into C# classes and then into IL during <code>dotnet build</code>, not at runtime. This completely eliminates the runtime view compilation that plagued .NET Framework.</p>
<h3 id="tiered-compilation">Tiered Compilation</h3>
<p>Introduced in .NET Core 2.1 and enabled by default since .NET Core 3.0, Tiered Compilation replaces the single-pass JIT with a two-tier approach. Tier 0 (&quot;Quick JIT&quot;) compiles methods very fast but produces lower-quality code. After a method has been called enough times, the runtime recompiles it at Tier 1 with full optimizations. The result: methods are available almost instantly on first call (much faster than the old full-optimization JIT), and hot methods eventually reach peak performance. For cold starts, Tiered Compilation dramatically reduces the time spent in JIT.</p>
<h3 id="readytorun-r2r">ReadyToRun (R2R)</h3>
<p>ReadyToRun is a form of ahead-of-time compilation available since .NET Core 3.0. When you publish with <code>&lt;PublishReadyToRun&gt;true&lt;/PublishReadyToRun&gt;</code>, the compiler pre-compiles IL to native code for the target platform. Unlike NGen, R2R images are portable across machines with the same OS and architecture. The CLR can load R2R code directly, bypassing Tier 0 JIT entirely. In serverless and containerized environments, R2R typically reduces cold start time by 30-80%.</p>
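<p>Enabling it is a publish-time switch; a runtime identifier is required (this example targets Linux x64):</p>
<pre><code class="language-shell">dotnet publish -c Release -r linux-x64 -p:PublishReadyToRun=true
</code></pre>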
<h3 id="trimming">Trimming</h3>
<p>IL trimming (enabled with <code>&lt;PublishTrimmed&gt;true&lt;/PublishTrimmed&gt;</code>) removes unused code from your application and its dependencies at publish time. A smaller application means fewer assemblies to load and less code to JIT-compile (if any). This is particularly impactful in Blazor WebAssembly, where the trimmed application must be downloaded to the browser.</p>
<h2 id="part-5.net-10-and-native-aot-the-cold-start-killer">Part 5: .NET 10 and Native AOT — The Cold Start Killer</h2>
<p>.NET 10, released as an LTS release in late 2025, represents the most significant advancement in cold start performance since .NET's creation.</p>
<h3 id="native-aot-compilation">Native AOT Compilation</h3>
<p>Native Ahead-of-Time compilation (<code>&lt;PublishAot&gt;true&lt;/PublishAot&gt;</code>) compiles your entire application to a native binary at publish time. There is no IL, no JIT compiler, no CLR runtime to initialize. The resulting binary is a self-contained native executable that starts like a C program.</p>
<p>The performance difference is staggering. Benchmarks show startup times dropping from hundreds of milliseconds to single-digit milliseconds for minimal APIs. One production report documented startup dropping from 70ms to 14ms — an 80% reduction — with memory usage cut by more than 50%. In serverless environments like AWS Lambda, cold start improvements of up to 86% have been measured.</p>
<p>Native AOT achieves this by eliminating several entire categories of cold start work: there is no JIT compilation (code is already native), no IL metadata loading, no tiered compilation infrastructure, and the binary includes only the code your application actually uses (aggressive tree shaking). The resulting binary for a minimal API console app is around 1 MB in .NET 10, down from several MB in .NET 7.</p>
<h3 id="the-trade-offs">The Trade-offs</h3>
<p>Native AOT is not free. It imposes constraints that you must design around:</p>
<p><strong>No runtime reflection</strong> — You cannot use <code>Type.GetType()</code>, <code>Activator.CreateInstance()</code>, or other reflection APIs that depend on metadata that has been stripped away. This means libraries like traditional Entity Framework (which relies heavily on reflection), many DI containers, and AutoMapper in its default configuration do not work with Native AOT.</p>
<p><strong>Source generators required</strong> — Instead of reflection, .NET 10 uses compile-time source generators. <code>System.Text.Json</code> requires <code>[JsonSerializable]</code> attributes to generate serialization code at compile time. DI containers must use compile-time registration.</p>
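<p>A sketch of the System.Text.Json source-generation pattern in a minimal API (the <code>Order</code> type is hypothetical):</p>
<pre><code class="language-csharp">[JsonSerializable(typeof(Order))]
internal partial class AppJsonContext : JsonSerializerContext { }

// At startup, point the framework at the generated context:
builder.Services.ConfigureHttpJsonOptions(options =&gt;
    options.SerializerOptions.TypeInfoResolverChain.Insert(0, AppJsonContext.Default));
</code></pre>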
<p><strong>Platform-specific binaries</strong> — A Native AOT binary compiled on Linux x64 runs only on Linux x64. You need separate publish steps for each target platform.</p>
<p><strong>Longer publish times</strong> — The native compiler takes significantly longer than <code>dotnet publish</code> without AOT, because it must compile and optimize the entire application.</p>
<p><strong>Potentially lower peak throughput</strong> — The JIT compiler can use runtime profiling data to optimize hot paths in ways the AOT compiler cannot. For long-running server applications, JIT-compiled code may achieve higher steady-state requests per second than AOT-compiled code. You trade peak throughput for instant startup.</p>
<h3 id="selective-aot-in.net-10">Selective AOT in .NET 10</h3>
<p>.NET 10 introduces the ability to AOT-compile specific performance-critical assemblies while keeping the rest JIT-compiled. This hybrid approach lets you optimize startup-critical paths with AOT while retaining the flexibility and peak performance of JIT for the rest of your application.</p>
<h3 id="createslimbuilder">CreateSlimBuilder</h3>
<p>For Native AOT scenarios, .NET 10 provides <code>WebApplication.CreateSlimBuilder()</code>, a minimal builder that excludes services not compatible with AOT (like the full MVC framework). This produces even smaller, faster binaries for API-only workloads.</p>
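<p>A minimal AOT-friendly API using the slim builder might look like this:</p>
<pre><code class="language-csharp">var builder = WebApplication.CreateSlimBuilder(args);
var app = builder.Build();

app.MapGet(&quot;/health&quot;, () =&gt; &quot;healthy&quot;);

app.Run();
</code></pre>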
<h3 id="blazor-webassembly-and-aot">Blazor WebAssembly and AOT</h3>
<p>Blazor WebAssembly benefits from AOT as well. The <code>&lt;WasmStripILAfterAOT&gt;true&lt;/WasmStripILAfterAOT&gt;</code> property in .NET 10 removes IL from the WASM bundle after AOT compilation, producing significantly smaller downloads. Combined with Blazor's 76% smaller JavaScript bundles in .NET 10, the initial load time for Blazor WASM applications has improved dramatically.</p>
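<p>In the Blazor WASM project file, the two properties combine like this (a sketch; note that enabling AOT lengthens publish times considerably):</p>
<pre><code class="language-xml">&lt;PropertyGroup&gt;
  &lt;!-- AOT-compile the app to WebAssembly at publish time --&gt;
  &lt;RunAOTCompilation&gt;true&lt;/RunAOTCompilation&gt;
  &lt;!-- Strip the now-redundant IL from the bundle after AOT --&gt;
  &lt;WasmStripILAfterAOT&gt;true&lt;/WasmStripILAfterAOT&gt;
&lt;/PropertyGroup&gt;
</code></pre>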
<h3 id="maui-and-mobile-native-aot">MAUI and Mobile Native AOT</h3>
<p>.NET 10 extends Native AOT support to Android (with measured startup improvements from over a second under Mono AOT down to roughly 271–331 ms) and continues the existing iOS/Mac Catalyst AOT support. Windows App SDK is expected to gain Native AOT support shortly after the .NET 10 release.</p>
<h2 id="part-6-the-modern-asp.net-core-request-pipeline-in.net-10">Part 6: The Modern ASP.NET Core Request Pipeline in .NET 10</h2>
<p>With all these compilation advances in mind, here is what the modern .NET 10 request lifecycle looks like:</p>
<h3 id="application-startup">Application Startup</h3>
<ol>
<li><p><strong>Process start</strong> — The native binary (with Native AOT) or the .NET runtime loads the application. With Native AOT this is nearly instant; with JIT + R2R, Tiered Compilation ensures Quick JIT compiles initial methods in microseconds.</p>
</li>
<li><p><strong>Host configuration</strong> — <code>WebApplicationBuilder</code> reads configuration from appsettings.json, environment variables, and other providers. The DI container is built with all registered services.</p>
</li>
<li><p><strong>Middleware pipeline construction</strong> — The middleware pipeline is built in the order you specified. Each <code>Use*</code> call adds a delegate to a chain. The pipeline is constructed once and reused for all requests.</p>
</li>
<li><p><strong>Server start</strong> — Kestrel begins listening on configured ports.</p>
</li>
</ol>
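<p>The four steps above map almost one-to-one onto a minimal <code>Program.cs</code> (step 1 is the process launch itself; the <code>IGreetingService</code> registration is a hypothetical example):</p>
<pre><code class="language-csharp">using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);           // 2. configuration providers read
builder.Services.AddScoped&lt;IGreetingService, GreetingService&gt;();

var app = builder.Build();                                  // 2. DI container built once

app.UseHttpsRedirection();                                  // 3. pipeline built in declaration order
app.UseRouting();

app.MapGet(&quot;/greet&quot;, (IGreetingService svc) =&gt; svc.Greet());

app.Run();                                                  // 4. Kestrel starts listening

public interface IGreetingService { string Greet(); }
public class GreetingService : IGreetingService
{
    public string Greet() =&gt; &quot;Hello!&quot;;
}
</code></pre>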
<h3 id="per-request-flow">Per-Request Flow</h3>
<p>Once the application is running, each request flows through the middleware pipeline:</p>
<ol>
<li><p><strong>Kestrel receives the connection</strong> — HTTP parsing happens in optimized, allocation-free code using <code>System.IO.Pipelines</code> and <code>Span&lt;T&gt;</code>.</p>
</li>
<li><p><strong>Middleware pipeline executes</strong> — Each middleware gets a chance to handle the request or pass it to the next middleware. Common middleware includes exception handling, HTTPS redirection, static files, routing, authentication, authorization, and CORS.</p>
</li>
<li><p><strong>Routing</strong> — The routing middleware matches the request URL to an endpoint. In .NET 10, the routing system uses a highly optimized trie-based data structure that matches endpoints in near-constant time regardless of how many routes are registered.</p>
</li>
<li><p><strong>Endpoint execution</strong> — The matched endpoint runs. For minimal APIs, this is a simple delegate. For MVC controllers, this involves model binding, action filters, action execution, result filters, and result execution. For Razor Pages, the page handler executes.</p>
</li>
<li><p><strong>Response writing</strong> — The response flows back through the middleware pipeline in reverse order, allowing each middleware to modify headers or the response body.</p>
</li>
</ol>
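<p>Step 5 is easiest to see with an inline middleware: code before <code>await next(...)</code> runs on the way in, and code after it runs on the way back out (the <code>X-Elapsed-Ms</code> header name is made up for illustration):</p>
<pre><code class="language-csharp">using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.Create(args);

app.Use(async (context, next) =&gt;
{
    var sw = Stopwatch.StartNew();

    // Headers cannot change after the first body bytes are written,
    // so register the mutation to run just before the response starts.
    context.Response.OnStarting(() =&gt;
    {
        context.Response.Headers[&quot;X-Elapsed-Ms&quot;] = sw.ElapsedMilliseconds.ToString();
        return Task.CompletedTask;
    });

    await next(context);   // hand off to the rest of the pipeline
    // Anything here runs after the endpoint has produced its response.
});

app.MapGet(&quot;/&quot;, () =&gt; &quot;Hello&quot;);

app.Run();
</code></pre>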
<h2 id="part-7-practical-recommendations">Part 7: Practical Recommendations</h2>
<p>Based on everything above, here is what you should do depending on your situation.</p>
<h3 id="if-you-are-still-on.net-framework">If you are still on .NET Framework</h3>
<p>Migrate. Seriously. The performance, security, and ecosystem benefits of modern .NET are enormous, and .NET Framework 4.8 is in maintenance mode with no new features. If migration is not immediately possible, use pre-compilation, NGen, and IIS Application Initialization, and disable idle timeouts.</p>
<h3 id="if-you-are-on.net-68-and-cold-starts-matter">If you are on .NET 6/8 and cold starts matter</h3>
<p>Publish with ReadyToRun (<code>&lt;PublishReadyToRun&gt;true&lt;/PublishReadyToRun&gt;</code>). Enable trimming if your dependency graph supports it. Consider Native AOT if your application uses minimal APIs and avoids heavy reflection. Evaluate your startup code for unnecessary synchronous work that can be deferred or made asynchronous.</p>
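<p>A sketch of the relevant project-file settings (enable trimming only after verifying that your dependencies survive it):</p>
<pre><code class="language-xml">&lt;PropertyGroup&gt;
  &lt;PublishReadyToRun&gt;true&lt;/PublishReadyToRun&gt;
  &lt;!-- Opt-in: trims unused code, but can break reflection-heavy packages --&gt;
  &lt;PublishTrimmed&gt;true&lt;/PublishTrimmed&gt;
  &lt;TrimMode&gt;partial&lt;/TrimMode&gt;
&lt;/PropertyGroup&gt;
</code></pre>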
<h3 id="if-you-are-starting-a-new-project-on.net-10">If you are starting a new project on .NET 10</h3>
<p>Design for Native AOT from day one. Use <code>[JsonSerializable]</code> for all JSON types. Avoid reflection-based libraries. Use source generators wherever possible. Test AOT compatibility early with <code>&lt;IsAotCompatible&gt;true&lt;/IsAotCompatible&gt;</code>. Use <code>dotnet publish</code> with AOT regularly during development to catch compatibility issues before they accumulate. Take advantage of the new SLNX solution format, Directory.Build.props for shared configuration, and central package management for clean project organization.</p>
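<p>In the project file, that amounts to something like the following (a sketch):</p>
<pre><code class="language-xml">&lt;PropertyGroup&gt;
  &lt;PublishAot&gt;true&lt;/PublishAot&gt;
  &lt;!-- Turns on the AOT/trim analyzers so warnings appear at build time --&gt;
  &lt;IsAotCompatible&gt;true&lt;/IsAotCompatible&gt;
&lt;/PropertyGroup&gt;
</code></pre>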
<h3 id="for-blazor-webassembly-specifically">For Blazor WebAssembly specifically</h3>
<p>Enable AOT compilation and IL stripping. Use lazy loading for assemblies not needed on the initial page. Keep your dependency graph lean — every NuGet package adds to the download size. Pre-render on the server if possible (Blazor Server or Blazor United) to give users an instant first paint while the WASM runtime downloads in the background.</p>
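<p>Lazy loading is declared in the project file; the assembly name below is hypothetical:</p>
<pre><code class="language-xml">&lt;ItemGroup&gt;
  &lt;!-- Not downloaded until LazyAssemblyLoader requests it at runtime --&gt;
  &lt;BlazorWebAssemblyLazyLoad Include=&quot;Reporting.Charts.wasm&quot; /&gt;
&lt;/ItemGroup&gt;
</code></pre>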
<h2 id="conclusion">Conclusion</h2>
<p>The ASP.NET cold start problem was real and painful for over a decade. It was caused by a perfect storm of just-in-time compilation, dynamic view compilation, heavy framework initialization, and IIS process management. Modern .NET has attacked each of these causes systematically: Tiered Compilation and ReadyToRun reduce JIT overhead, build-time view compilation eliminates runtime Razor compilation, the minimal hosting model reduces initialization work, and Kestrel eliminates IIS recycling. Native AOT in .NET 10 goes even further by eliminating JIT entirely, producing native binaries with startup times measured in milliseconds rather than seconds.</p>
<p>The result is that a well-optimized .NET 10 application can cold-start faster than most Node.js or Python applications — a dramatic reversal from the .NET Framework era. The ecosystem has matured, the tooling is excellent, and the migration path from .NET 8 LTS to .NET 10 LTS is smooth. If cold starts have been holding you back from .NET, it is time to take another look.</p>
]]></content:encoded>
      <category>dotnet</category>
      <category>aspnet</category>
      <category>performance</category>
      <category>lifecycle</category>
      <category>aot</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Hello, world!</title>
      <link>https://observermagazine.github.io/blog/hello-world</link>
      <description>In which I say Hello to you</description>
      <pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/hello-world</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="hello-and-welcome">Hello, and welcome</h2>
<p>Welcome to Observer Magazine.
It's great to have you here.
I hope you enjoy this website.</p>
<p>I have updated the NuGet packages.
I would love to hear your thoughts about this magazine.</p>
]]></content:encoded>
      <category>introductions</category>
    </item>
    <item>
      <title>Responsive Design Patterns in Blazor</title>
      <link>https://observermagazine.github.io/blog/responsive-design-patterns</link>
      <description>How we built mobile-friendly data tables and master-detail layouts in pure Blazor.</description>
      <pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/responsive-design-patterns</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="the-challenge">The Challenge</h2>
<p>Data-heavy UIs are notoriously hard to make responsive. Wide tables overflow on small screens, and complex layouts need fundamentally different structures on mobile vs. desktop.</p>
<h2 id="responsive-tables">Responsive Tables</h2>
<p>Our approach uses CSS to transform table rows into stacked cards on small screens:</p>
<ul>
<li>On desktop: a traditional <code>&lt;table&gt;</code> with sortable column headers</li>
<li>On mobile: each row becomes a card with label-value pairs</li>
</ul>
<p>The key CSS trick is using <code>data-label</code> attributes on <code>&lt;td&gt;</code> elements and displaying them via <code>::before</code> pseudo-elements when the table header is hidden.</p>
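<p>A minimal sketch of that trick (the class name and breakpoint are illustrative, not our exact stylesheet):</p>
<pre><code class="language-css">@media (max-width: 600px) {
  .responsive-table thead { display: none; }             /* hide the header row */
  .responsive-table tr {                                 /* each row becomes a card */
    display: block;
    margin-bottom: 1rem;
  }
  .responsive-table td { display: block; text-align: right; }
  .responsive-table td::before {
    content: attr(data-label);                           /* label from the td's data-label */
    float: left;
    font-weight: 600;
  }
}
</code></pre>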
<h2 id="master-detail-flow">Master-Detail Flow</h2>
<p>The master-detail pattern uses CSS Grid:</p>
<ul>
<li>On desktop: a two-column layout (list on left, details on right)</li>
<li>On mobile: the columns stack vertically, with the list on top</li>
</ul>
<p>No JavaScript media queries needed — it's all pure CSS with Blazor handling the state.</p>
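<p>The grid itself can be sketched in a few lines (again, illustrative class names):</p>
<pre><code class="language-css">.master-detail {
  display: grid;
  grid-template-columns: 1fr 2fr;   /* list | details */
  gap: 1rem;
}

@media (max-width: 600px) {
  .master-detail {
    grid-template-columns: 1fr;     /* columns stack, list first in source order */
  }
}
</code></pre>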
<h2 id="key-takeaways">Key Takeaways</h2>
<ol>
<li><strong>Use semantic HTML</strong> — <code>&lt;table&gt;</code> for tabular data, not divs pretending to be tables.</li>
<li><strong>CSS does the heavy lifting</strong> — Blazor components stay clean; responsiveness lives in the stylesheet.</li>
<li><strong>Test on real devices</strong> — Emulators are fine for development, but nothing beats a real phone.</li>
</ol>
<p>See all these patterns live on the <a href="/showcase">Showcase page</a>.</p>
]]></content:encoded>
      <category>blazor</category>
      <category>css</category>
      <category>responsive</category>
      <category>ui</category>
    </item>
    <item>
      <title>Getting Started with Blazor WebAssembly</title>
      <link>https://observermagazine.github.io/blog/getting-started-with-blazor-wasm</link>
      <description>A quick tour of how Blazor WASM works and why it's a great choice for static sites.</description>
      <pubDate>Fri, 20 Feb 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/getting-started-with-blazor-wasm</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="what-is-blazor-webassembly">What is Blazor WebAssembly?</h2>
<p>Blazor WebAssembly (WASM) lets you build interactive web UIs using C# instead of JavaScript. Your .NET code runs directly in the browser via WebAssembly — no plugins, no server needed at runtime.</p>
<h2 id="why-we-chose-it">Why We Chose It</h2>
<p>For Observer Magazine, Blazor WASM is ideal because:</p>
<ul>
<li><strong>Static hosting</strong> — The compiled output is plain HTML, CSS, JS, and WASM files. Perfect for GitHub Pages.</li>
<li><strong>Full .NET ecosystem</strong> — We use the same language, tooling, and libraries as backend .NET developers.</li>
<li><strong>Performance</strong> — After the initial download, navigation is instant. The runtime is ahead-of-time compiled for speed.</li>
<li><strong>Testability</strong> — With bUnit, we can unit-test every component without a browser.</li>
</ul>
<h2 id="project-structure">Project Structure</h2>
<p>Our project follows a clean layout:</p>
<pre><code>src/ObserverMagazine.Web/     — The Blazor WASM app
tools/ContentProcessor/        — Build-time markdown processor
tests/                         — xUnit + bUnit tests
content/blog/                  — Markdown blog posts
</code></pre>
<p>The <code>ContentProcessor</code> runs at build time (in CI) to convert Markdown files into JSON and HTML that the Blazor app fetches at runtime.</p>
<h2 id="next-steps">Next Steps</h2>
<p>Check out the <a href="/showcase">Showcase</a> to see responsive tables and master-detail flows in action, or browse the <a href="https://github.com/ObserverMagazine/observermagazine.github.io">source code</a> to see how everything fits together.</p>
]]></content:encoded>
      <category>blazor</category>
      <category>dotnet</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Welcome to Observer Magazine</title>
      <link>https://observermagazine.github.io/blog/welcome-to-observer-magazine</link>
      <description>Our first post — introducing Observer Magazine and what we're building.</description>
      <pubDate>Thu, 15 Jan 2026 00:00:00 GMT</pubDate>
      <guid>https://observermagazine.github.io/blog/welcome-to-observer-magazine</guid>
      <author>hello@observermagazine.example (Observer Team)</author>
      <content:encoded><![CDATA[<h2 id="hello-world">Hello, World!</h2>
<p>Welcome to <strong>Observer Magazine</strong>, a free and open-source web application built with Blazor WebAssembly on .NET 10.</p>
<p>This project serves two purposes:</p>
<ol>
<li><strong>A learning resource</strong> for developers exploring Blazor WASM, modern .NET tooling (slnx, Directory.Build.props, central package management), and static site deployment on GitHub Pages.</li>
<li><strong>A starting point</strong> you can fork and adapt for your own projects — whether that's a personal blog, a product showcase, or a full SaaS application.</li>
</ol>
<h2 id="whats-inside">What's Inside</h2>
<ul>
<li>A responsive, accessible UI built entirely in C# and Razor</li>
<li>A blog engine powered by Markdown files with YAML front matter</li>
<li>An auto-generated RSS feed</li>
<li>Showcases of common web patterns: responsive tables, master-detail flows</li>
<li>Structured logging ready for OpenTelemetry</li>
<li>A full test suite using xUnit v3 and bUnit</li>
</ul>
<h2 id="philosophy">Philosophy</h2>
<p>Every dependency we use is truly free — no &quot;free for non-commercial&quot; restrictions. We will never charge money for this software. The code is AGPLv3-licensed and always will be.</p>
<p>Stay tuned for more posts!</p>
]]></content:encoded>
      <category>announcement</category>
      <category>introduction</category>
    </item>
  </channel>
</rss>