25 Essential C# Tips

Whether you’re just starting with C# or are an experienced engineer, following best practices can dramatically improve your code’s clarity, safety, and performance. In this guide, we expand on 25 essential C# tips with detailed explanations, real-world examples, and references to modern tools (like Visual Studio 2022) and .NET 8+ features. Each tip not only covers the “how,” but also the “why,” ensuring you understand how these practices fit into maintainable and scalable software development.

1. Use var Wisely for Clear, Type-Safe Code

C# allows local type inference using the var keyword. This can reduce verbosity, but it should be used only when the type is obvious from context. Overusing var in situations where the type isn’t clear can harm readability. Striking the right balance improves code maintainability:

  • When to use var: If the right-hand side makes the type evident. For example, var stream = new FileStream(path, FileMode.Open) is clear that stream is a FileStream. Using var here avoids redundancy (the type is stated twice otherwise) and keeps code flexible if the type changes (refactoring becomes easier).
  • When to avoid var: If the initialization expression doesn’t reveal the type. For instance, var data = FetchData(); might be unclear if one doesn’t know what FetchData() returns. In such cases, explicitly declaring the type (e.g., List<string> data = FetchData();) makes code more readable.

Example – Good vs Bad Usage:

// Clear use of var - type is obvious (List<string>)
var names = new List<string> { "Alice", "Bob", "Charlie" };

// Unclear use of var - what type is result?
var result = processor.Process(input);  // Avoid if the return type isn't obvious

// Better: explicitly state the type if not apparent
ProcessResult result = processor.Process(input);

In Visual Studio 2022, code analyzers (configurable via .editorconfig) can enforce your team’s preference for var usage (rules IDE0007 and IDE0008). This ensures consistency: for example, analyzers can warn you if you use var when the type isn’t apparent. By using var judiciously, you keep code concise but still immediately understandable.

2. Favor String Interpolation Over Concatenation or string.Format

Building strings is a common task, and C# offers multiple ways to do it. String interpolation (introduced in C# 6) provides a readable and efficient syntax for composing strings. It is generally preferred over older approaches like string.Format or manual concatenation:

  • Readability: Interpolation allows you to embed variables directly in a string literal using the $"..." syntax. This often makes the code resemble natural language, improving clarity. For example:

    string name = "Alice";
    int count = 5;
    string message = $"{name} has {count} new messages.";

    This is easier to read than string.Format("{0} has {1} new messages.", name, count) or the concatenation name + " has " + count + " new messages.".
  • Safety: Because the compiler checks the expressions embedded in an interpolated string, you avoid runtime errors from mismatched format placeholders. Also, refactorings (like renaming a variable) update references inside an interpolated string, whereas with string.Format you'd risk mistyped placeholder indices.
  • Performance: Modern C# compilers optimize interpolated strings. In fact, C# 10+ introduced the DefaultInterpolatedStringHandler which can minimize allocations. Interpolation typically allocates less extra memory than string.Format. For instance, $"{firstName} {lastName}" is handled efficiently under the hood.

Example – Using Interpolation:

string user = "Bob";
int tasks = 3;
Console.WriteLine($"{user} has {tasks} pending tasks."); 
// Outputs: "Bob has 3 pending tasks."

One caveat: .NET's logging APIs (such as ILogger) use message templates rather than interpolation, so that structured log data is preserved – prefer templates there. Elsewhere, Visual Studio 2022 will even suggest using interpolation if you concatenate string literals and variables, as it’s now considered best style. In summary, use $"{}" interpolation for cleaner code and fewer mistakes.
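
To make the distinction concrete, here is a minimal sketch (assuming an ILogger logger from Microsoft.Extensions.Logging; the variable names are illustrative):

string user = "Alice";
Console.WriteLine($"{user} logged in");                // interpolation: fine for plain output
logger.LogInformation("User {User} logged in", user);  // message template: keeps User as structured log data
// logger.LogInformation($"User {user} logged in");    // avoid in logging: the value is baked into the string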

3. Use StringBuilder for Extensive or Looping String Concatenation

While string interpolation and simple concatenation are fine for small or infrequent operations, avoid using the + operator repeatedly in loops or in large concatenations. In C#, strings are immutable, so each concatenation creates a new string object, leading to significant memory allocations and potential performance issues in hot code paths.

Use System.Text.StringBuilder when you need to construct strings iteratively (e.g., inside a loop or when assembling a large text). StringBuilder is designed for this scenario: it maintains a mutable buffer, reducing the number of allocations. This makes it much more efficient for concatenating many pieces.

Example – Inefficient vs Efficient String Construction:

// Inefficient: concatenation in a loop (creates many intermediate strings)
string csv = "";
foreach (var item in items)
{
    csv += item + ",";  // Avoid: each += creates a new string
}

// Efficient: using StringBuilder in a loop
var sb = new System.Text.StringBuilder();
foreach (var item in items)
{
    sb.Append(item).Append(',');
}
string csvResult = sb.ToString();

In the bad example above, if items has 1000 elements, that loop will create 1000 interim strings. The StringBuilder version builds the result in a single buffer and then produces one string at the end – a drastic performance improvement (often tens of times faster for large loops).

As a rule of thumb, if you’re concatenating in a loop or constructing a very large string, reach for StringBuilder. This aligns with .NET best practices (and tools like Roslyn analyzers or SonarQube will flag excessive string concatenation as a performance smell). Modern .NET’s DefaultInterpolatedStringHandler makes building a single interpolated string efficient, but it doesn’t help with repeated concatenation across loop iterations – using StringBuilder explicitly gives you control and clarity for those scenarios.

4. Properly Rethrow Exceptions to Preserve Stack Traces

Exception handling is crucial for robust applications. When you need to catch and rethrow an exception, do not use the throw ex; syntax (where ex is the caught exception variable). Instead, use throw; by itself to rethrow the current exception. The reason is that throw ex; resets the exception’s stack trace, making it much harder to diagnose the root cause of an error.

Bad Practice – resetting stack trace:

try 
{
    // ... code that throws
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    throw ex;  // ❌ This resets stack trace
}

In the above, the new stack trace would start at the throw statement, hiding where the exception originally occurred.

Good Practice – preserving stack trace:

try 
{
    // ... code that throws
}
catch (Exception ex)
{
    Log.Error(ex);
    throw;  // ✔ Rethrow without altering the original stack trace
}

Using throw; preserves all the original stack information. The only time you wouldn’t use a bare throw is if you need to throw a different exception type or add context. In those cases, consider wrapping the exception (e.g., throw a custom exception and pass the original as an InnerException). For example:

catch (SqlException ex)
{
    throw new DataAccessException("Failed to fetch records", ex);  // preserve original as InnerException
}

This way you add context but still keep the original exception details for debugging.

Visual Studio 2022’s code analysis will warn if you use throw ex (since it’s a well-known .NET antipattern). Following this tip ensures that when things go wrong, you have the full diagnostics to quickly identify and fix issues.

5. Catch Specific Exceptions and Avoid Swallowing Errors

When handling exceptions, be as specific as possible with what you catch. Catching general exceptions (like catch (Exception) or, worse, catch { } with no type) is usually discouraged. Broad catches can mask errors you didn’t intend to handle, making debugging and error handling harder.

Best Practices:

  • Catch specific exceptions: If you expect a FileNotFoundException or a FormatException, catch those types explicitly. This way, you only handle the errors you know how to handle, and unexpected issues bubble up (possibly to a higher-level handler or global exception middleware) where they can be logged or handled gracefully. For example:

    try
    {
        var text = File.ReadAllText(path);
        data = JsonSerializer.Deserialize<Data>(text);
    }
    catch (FileNotFoundException ex)
    {
        Console.Error.WriteLine($"File not found: {ex.FileName}");
        // handle missing file (perhaps prompt user or default)
    }
    catch (JsonException ex)
    {
        Console.Error.WriteLine("Invalid file format. Please check the contents.");
        // handle bad JSON format
    }
  • Avoid empty catches: Swallowing exceptions (catch { }) without at least logging them is dangerous. It suppresses errors entirely. If you truly want to ignore a specific benign exception, at minimum comment why, and consider logging it for diagnostics. It’s often better to handle it (e.g., return a default value, retry the operation, etc.) rather than ignore it.
  • Use when filters or rethrow if not handled: In some cases, you might catch Exception at a high level (say to log and wrap in a user-friendly message), but even then, prefer filtering if possible. For instance:

    catch (Exception ex) when (ex is not OutOfMemoryException)
    {
        Log.Error(ex);
        throw;  // log, then rethrow so the error still propagates
    }

    Here we let certain critical exceptions (like out-of-memory or thread aborts) bypass the catch entirely, since attempting to handle them might be futile or even unsafe.

By catching only what you can handle, you make your code’s error-handling intentional and transparent. Modern .NET libraries and frameworks (ASP.NET Core, for example) are designed with this in mind: you typically let unhandled exceptions propagate to a global handler or middleware, which will log them and return an appropriate error response. Visual Studio will also flag overly broad catches as potential issues, and tools like SonarQube flag empty catch blocks and overly generic catches as code smells.

6. Use the using Statement (or Declaration) to Manage IDisposable Resources

Whenever you work with resources like files, streams, database connections, or any object that implements IDisposable, ensure proper cleanup. The using statement is the idiomatic way in C# to automatically dispose of such resources, even if exceptions occur. This prevents resource leaks (like open file handles or memory not freed promptly).

Using Statement Block:

using (StreamReader reader = File.OpenText("data.txt"))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        ProcessLine(line);
    }
}  // reader.Dispose() is called automatically here, even if an exception was thrown inside the block

As shown above, once the code exits the using block, the reader is disposed (closed). This guarantees that the file handle is released even in case of errors. Without using, you’d have to remember to call reader.Dispose() or reader.Close() in a finally block manually.

Using Declaration (C# 8+): Modern C# (8.0 and later) introduced using declarations as a more concise form. You can simply write using var reader = File.OpenText("data.txt"); at the start of a scope. The object will be disposed at the end of the scope automatically, without an extra indentation level. For example:

public void ReadFile(string path)
{
    using var reader = File.OpenText(path);
    string content = reader.ReadToEnd();
    // ... reader will be disposed here at end of method
}

Using declarations reduce nesting and can make code cleaner, especially when multiple disposables are involved. Be aware that the disposal happens at the end of the scope, which might be later than in a traditional using block – usually this is fine, but in some cases you might need more control over disposal timing (in which case stick to explicit blocks).

Pitfall: Do not return disposable objects created in a using block. For instance, avoid:

public FileStream CreateFile(string path)
{
    using var fs = File.Create(path);
    // write to fs...
    return fs; // ❌ fs is disposed at end of method, so this return is a dead object
}

If you need to return a disposable, don’t dispose it in the method (transfer ownership to the caller). Alternatively, return a byte[] or another representation instead of the open stream. Visual Studio’s analyzers will catch this mistake (it’s a common gotcha).

By consistently using using, you ensure timely release of resources, which is crucial for scalable and robust .NET applications. In .NET 8’s ASP.NET Core, for example, the framework itself uses IAsyncDisposable and scoped lifetimes for many services – using await using (for asynchronous disposables) or normal using for others is essential to avoid file locks or exhausted DB connections in high-load scenarios.
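
For asynchronous disposables, here is a minimal sketch of await using with file I/O (FileStream and StreamWriter implement IAsyncDisposable in modern .NET; the method and parameter names are illustrative):

public async Task AppendLogAsync(string path, string message)
{
    await using var stream = new FileStream(path, FileMode.Append, FileAccess.Write);
    await using var writer = new StreamWriter(stream);
    await writer.WriteLineAsync(message);
}   // writer and stream are disposed asynchronously here, flushing pending writes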

7. Prefer Generic Collections (List, Dictionary, etc.) Over Arrays for Flexibility

In modern C#, the use of generic collections from System.Collections.Generic is often more convenient and safer than using raw arrays, especially for dynamic data sets. List<T> is generally preferred over an array when you need a collection that can change size or offer higher-level operations.

Why List (and others) over arrays?

  • Resizing and Dynamics: Arrays have a fixed size once created. If you need to add/remove elements frequently or don’t know the required size in advance, a List<T> is more suitable (it internally manages a resizable array).
  • Built-in Methods: Generic collections provide many useful methods out of the box. For example, List<T> has Add, Remove, Find, Sort, etc. Dictionary<TKey,TValue> provides fast lookups by key. These features can save time and reduce errors compared to manual array manipulation.
  • Type Safety: Both arrays and generic collections are type-safe (unlike old non-generic collections), but generic collections avoid some pitfalls of arrays (e.g., covariance issues with array of reference types).
  • LINQ and Integration: Generic collections implement interfaces like IEnumerable<T> and ICollection<T>, making them work seamlessly with LINQ queries and foreach loops. Arrays also do, but often you end up converting to a list for certain operations, so starting with a list can be simpler.

Example:

// Using a List instead of an array for a growing collection
var students = new List<string>();
students.Add("Alice");
students.Add("Bob");
// No need to manage capacity manually – List grows as needed

// We can easily insert or remove
students.Insert(1, "Charlie");  // Insert at index 1
students.Remove("Alice");      // Remove by value

// We also get useful properties
int count = students.Count;
bool hasBob = students.Contains("Bob");

If we used an array for the above, inserting or removing would involve manual array resizing or lots of Array.Copy calls, which is error-prone. Lists handle that internally.

There are cases where arrays are appropriate, such as low-level performance-sensitive loops or interop with APIs that specifically require arrays. But for everyday programming, favor List<T>, Dictionary<TKey,TValue>, HashSet<T>, etc. for their flexibility and rich APIs. .NET 8 and C#’s standard patterns (from LINQ to async streams) are designed to work smoothly with IEnumerable<T> and collection interfaces, so using those will make your code more idiomatic.

Finally, if you find yourself needing to return “no elements,” prefer returning an empty collection (e.g., new List<T>(), Array.Empty<T>(), or Enumerable.Empty<T>() for IEnumerable<T>) rather than null. This way the calling code can iterate safely without null-checking.
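
A minimal sketch of that pattern (the Customer and Order types and the GetOrders method are hypothetical):

public IReadOnlyList<Order> GetOrders(Customer customer)
{
    if (customer.Orders is null || customer.Orders.Count == 0)
        return Array.Empty<Order>();   // callers can foreach safely; no null check needed
    return customer.Orders;
}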

8. Use LINQ for Cleaner and Declarative Data Processing

Language Integrated Query (LINQ) is one of C#’s powerful features for working with collections and data in a concise, declarative manner. Instead of writing complex foreach loops to filter, transform, or aggregate data, LINQ lets you express the intent more clearly. Embracing LINQ can lead to code that is both more readable and often less error-prone, as it abstracts the iteration boilerplate.

Benefits of using LINQ:

  • Conciseness: Common operations like filtering (Where), projection (Select), finding elements (FirstOrDefault, Any), aggregations (Sum, Max, etc.), and joining sequences can be done in one line, clearly indicating what is being done rather than how.
  • Readability: LINQ expressions read almost like natural language or SQL. For example: var highScores = scores.Where(s => s > 90).OrderByDescending(s => s); is easy to understand at a glance.
  • Consistency: LINQ works over any IEnumerable<T> (and IQueryable<T> for databases). You can use the same patterns for in-memory collections, XML documents, or database queries (via ORMs), making your skills transferable.
  • Immutability and Clarity: LINQ encourages a more functional style (producing new sequences rather than mutating existing collections), which can reduce bugs and make the flow of data clearer.

Example – Without LINQ vs With LINQ:

Suppose we have a list of employees and we want to get the names of developers older than 30, sorted by name:

// Without LINQ:
List<string> devNames = new List<string>();
foreach(var emp in employees)
{
    if(emp.Role == "Developer" && emp.Age > 30)
        devNames.Add(emp.Name);
}
devNames.Sort();

// With LINQ:
var devNames = employees
                .Where(e => e.Role == "Developer" && e.Age > 30)
                .Select(e => e.Name)
                .OrderBy(name => name)
                .ToList();

The LINQ query clearly states the intent: filter, project, order. It’s less code and avoids manual indexing or sorting errors. LINQ queries and methods improve code readability greatly. In modern .NET, LINQ is heavily used in conjunction with asynchronous streams (e.g., await foreach with IAsyncEnumerable<T>) and parallel processing (PLINQ) for scalability, so understanding LINQ opens doors to these advanced scenarios as well.
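
As a taste of how this iteration style extends to async streams, here is a minimal await foreach sketch (C# 8+; ReadLinesAsync is a hypothetical helper):

public async IAsyncEnumerable<string> ReadLinesAsync(string path)
{
    using var reader = File.OpenText(path);
    string? line;
    while ((line = await reader.ReadLineAsync()) != null)
        yield return line;   // each line is yielded as it arrives, without buffering the whole file
}

// Consuming the stream:
await foreach (var line in ReadLinesAsync("data.txt"))
    Console.WriteLine(line);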

Note: LINQ’s declarative style can sometimes hide performance costs (deferred execution, multiple enumeration, etc.). Use it wisely – for large data or performance-critical inner loops, measure if the LINQ is efficient or if a hand-tuned loop might be needed. Also, be cautious chaining too many LINQ operations that enumerate large collections repeatedly (consider using .ToList() at appropriate points or using more efficient LINQ operators). Visual Studio’s diagnostics and Roslyn analyzers (as well as ReSharper) can warn about common pitfalls like multiple enumeration of IEnumerable sequences.

Overall, for most everyday tasks, LINQ will result in cleaner and often safer code, fitting well with the expressive style of modern C#.

9. Mark Fields as readonly to Prevent Unintended Modifications

If a class field should not change after it’s initialized (either at declaration or in the constructor), mark it as readonly. The readonly keyword in C# enforces that the field can only be assigned during initialization (in-line or in the constructor). Once constructed, the field becomes effectively immutable for the lifetime of the object.

Why use readonly:

  • Intent and Safety: It clearly signals to other developers (and the compiler) that this piece of state is not supposed to change. This can prevent accidental bugs where someone later writes to the field, not realizing it should be constant after init.
  • Thread Safety: Immutable data is much safer in multithreaded scenarios. If an object’s fields never change, you don’t need locks to read them concurrently. Marking fields readonly is a simple way to get some of these benefits for your class’s state (though full immutability also requires making any referenced objects immutable).
  • Optimization: The JIT compiler may make optimizations knowing that a readonly field won’t change (similar to how it treats constants), although the main benefit is design clarity rather than performance.

Example:

public class Player
{
    public readonly string Name;
    public readonly DateTime CreatedOn = DateTime.Now;

    public Player(string name)
    {
        this.Name = name;
        // CreatedOn is set at declaration above
    }
}

// Usage:
var p = new Player("Alice");
// p.Name = "Bob";  // Compile error: Name is readonly

In this example, Name and CreatedOn can only be set when the Player is constructed. The class design guarantees that these fields won’t be changed afterward, which is useful for values like an object’s identity or creation timestamp.

This practice ties into the broader concept of immutability in C#. .NET has embraced immutability with features like records (which are immutable by default) and with init-only setters (the init accessor in properties) introduced in C# 9. Marking fields readonly is the manual way to achieve similar results in classes. In fact, it’s considered a best practice to use readonly whenever possible for fields that do not need to change – it’s a low-effort way to prevent many classes of bugs.
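
For comparison, a minimal sketch of the record and init-only equivalents (the type names are illustrative):

// A record: immutable by default, with value-based equality generated for you
public record PlayerInfo(string Name, DateTime CreatedOn);

// An init-only property: assignable only during object initialization
public class Profile
{
    public string DisplayName { get; init; }
}

var profile = new Profile { DisplayName = "Alice" };
// profile.DisplayName = "Bob";   // compile error: init-only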

Seasoned developers often make all fields private and readonly by default, relaxing these constraints only if necessary. Visual Studio can help by suggesting the readonly modifier (rule IDE0044) for fields that are never reassigned after construction, and tools like Roslynator or ReSharper make the same suggestion. Embracing readonly leads to code that’s easier to reason about, particularly in large codebases.

10. Leverage Dependency Injection for Modular, Testable Architecture

Dependency Injection (DI) is a design pattern and key feature in modern .NET (especially .NET Core and beyond) that facilitates loose coupling between classes. Instead of a class instantiating its dependencies directly (using new), those dependencies are provided to the class, typically via constructor parameters. In a DI environment (like an ASP.NET Core app or any project using a DI container), the container is responsible for creating and supplying these dependencies.

Why use DI:

  • Maintainability: Classes are less tied to specific implementations. For example, if your service uses IDataRepository interface, you can inject different implementations (a database repo, a mock repo for testing, etc.) without changing the service code.
  • Testability: It’s much easier to unit test classes that use DI because you can provide fake or mock dependencies. There’s no hidden internal instantiation of concrete classes; everything can be substituted. This means you can test components in isolation by injecting stubs or mocks.
  • Flexibility and Configurability: With DI, adding new implementations or changing behaviors (like switching from a local service to a remote service, or changing logging providers) often requires only configuration changes, not code changes.
  • Inversion of Control: DI frameworks (built-in or third-party like Autofac, Ninject, etc.) manage object lifetimes and creation. .NET’s built-in DI (in .NET 6/7/8) can handle singleton vs scoped vs transient lifetimes of services, disposing them appropriately. This reduces boilerplate and potential memory leaks (because the container will dispose services for you).

Example – Using DI in practice (simplified):

Suppose we have a service that sends notifications. Without DI:

// Without DI (tightly coupled)
public class NotificationService 
{
    private EmailSender _emailSender = new EmailSender();  // directly new-ing a dependency

    public void SendWelcomeEmail(User user) 
    {
        _emailSender.Send(user.Email, "Welcome", "Hello and welcome!");
    }
}

With DI:

// Define an interface for email sending
public interface IEmailSender 
{
    void Send(string to, string subject, string body);
}

// Implement the interface
public class SmtpEmailSender : IEmailSender 
{
    public void Send(string to, string subject, string body) 
    { 
        // logic to send email via SMTP
    }
}

// NotificationService uses the interface, not a concrete class
public class NotificationService 
{
    private readonly IEmailSender _emailSender;
    public NotificationService(IEmailSender emailSender) // dependency injected
    {
        _emailSender = emailSender;
    }
    public void SendWelcomeEmail(User user)
    {
        _emailSender.Send(user.Email, "Welcome", "Hello and welcome!");
    }
}

Now, in your application startup (Program.cs in .NET 6/7/8 minimal API style or Startup.cs in older ASP.NET Core):

// Register services with the DI container:
builder.Services.AddTransient<IEmailSender, SmtpEmailSender>();
builder.Services.AddTransient<NotificationService>();

The DI container (built into .NET) will automatically inject an SmtpEmailSender whenever IEmailSender is needed, and it will inject NotificationService (and supply an IEmailSender to its constructor) wherever needed. In Visual Studio 2022, templates for ASP.NET Core use this pattern by default, configuring services at startup and using constructor injection in controllers and other classes.

Architecture snippet – how DI fits:

Imagine a larger system: your Web API controller might depend on NotificationService, which depends on IEmailSender. Thanks to DI, you simply declare those in constructors, and the framework wires everything up. For testing, you can provide a FakeEmailSender (implementing IEmailSender) to test NotificationService without actually sending emails. This modular approach follows the Dependency Inversion Principle (DIP) of SOLID – high-level modules (NotificationService) do not depend on low-level modules (SmtpEmailSender) but on abstractions (IEmailSender).
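
A minimal sketch of such a test (assuming xUnit and a User type with an Email property; FakeEmailSender is hypothetical):

class FakeEmailSender : IEmailSender
{
    public List<(string To, string Subject, string Body)> Sent { get; } = new();
    public void Send(string to, string subject, string body) => Sent.Add((to, subject, body));
}

[Fact]
public void SendWelcomeEmail_SendsExactlyOneEmail()
{
    var fake = new FakeEmailSender();
    var service = new NotificationService(fake);

    service.SendWelcomeEmail(new User { Email = "alice@example.com" });

    Assert.Single(fake.Sent);   // verified without any SMTP server involved
}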

By leveraging DI, your codebase becomes more flexible and adheres to clean architecture principles. .NET 8’s emphasis on minimal APIs and cloud-native apps relies heavily on DI for configuring cross-cutting concerns (like logging, options, etc.), so mastering DI is essential for modern C# developers. Visual Studio’s IntelliSense and tooling also work well with DI patterns, making navigation (Go To Implementation) and refactoring easier when you use interfaces and DI.

11. Follow Consistent Naming Conventions (PascalCase, camelCase, etc.)

Consistent naming conventions make your code more readable and maintainable. In C#, the widely used standard (mirroring Microsoft’s .NET guidelines) for the different elements (classes, methods, variables, etc.) is:

  • PascalCase for public members: Classes, properties, methods, and constants (sometimes constants are ALL_CAPS, but many teams stick to PascalCase constants as well). For example, CustomerRepository, CalculateTotal(), OrderId.
  • camelCase for local variables, private fields, and method parameters. For private fields, many codebases use an optional underscore prefix _camelCase to differentiate fields from locals. For example: _itemCount as a private field, and int itemCount as a method parameter.
  • Interfaces typically have an I prefix followed by PascalCase (e.g., IServiceProvider, ILogger).
  • Enums use PascalCase for values (e.g., LogLevel.Error, ConsoleColor.Red).
  • Namespaces and folders are also PascalCase (often matching project or product names).

For instance, a class definition might look like:

namespace MyApp.Services
{
    public class OrderProcessor    // PascalCase class
    {
        private readonly IOrderRepository _orderRepository;  // _camelCase field with underscore
        private readonly string _processorName;              // private field

        public OrderProcessor(IOrderRepository orderRepository, string processorName) // camelCase parameters
        {
            _orderRepository = orderRepository;
            _processorName = processorName;
        }

        public void ProcessOrder(int orderId)  // PascalCase method
        {
            int itemCount = 0;                // camelCase local
            // ...
        }
    }
}

Following these conventions means anyone familiar with C# can read your code without confusion about what’s a type vs. a variable, etc. In the example above, OrderProcessor being PascalCase tells us it’s a type; _orderRepository being prefixed with _ and camelCase signals it’s a private field; the method ProcessOrder is clearly a method (PascalCase); and orderId being camelCase implies it’s a variable/parameter.

Visual Studio 2022, by default, will enforce some of these conventions (for example, it grays out variables that don’t follow typical naming if you have certain analyzers on). Tools like StyleCop.Analyzers or Roslyn’s built-in rules can flag deviations. The community consensus is strong: “Private or scoped variables are camelCase, public ones are PascalCase, and methods are always PascalCase.” Consistent naming leads to code that “feels” coherent and is easier to navigate using features like IntelliSense, which groups and orders suggestions partly based on naming.

Adhering to naming conventions also helps avoid naming conflicts and merge issues in large teams. If everyone names things uniformly, you won’t have one developer writing get_user_info() method (Python style) and another writing GetUserInfo – which could lead to two differently named methods doing the same thing. Consistency is key in collaborative environments.

In summary, treat the official C# naming conventions as a rule, unless your project has a clearly documented variation. It’s a quick win for code quality.

12. Use async/await for Asynchronous Operations (Don’t Block the Thread)

Asynchronous programming is essential for creating responsive UIs and scalable server applications in .NET. The async/await pattern in C# (introduced in C# 5) makes writing asynchronous code much easier and more readable than older patterns (like callbacks or the Begin/End APM methods). Key tips for using async/await effectively:

  • Use Task (or ValueTask) return types for async methods (and Task<T> for async methods that return a result). This allows the caller to await the result. Avoid returning void from an async method except in specific scenarios (see Tip #13). For example:

    public async Task<string> FetchDataAsync(string url)
    {
        using HttpClient client = new HttpClient();
        HttpResponseMessage response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        string content = await response.Content.ReadAsStringAsync();
        return content;
    }

    The method above can be awaited by callers, allowing the thread to be freed while the I/O is in progress.
  • Never block on async code (e.g., avoid task.Wait() or task.Result on incomplete tasks). Blocking negates the benefits of async and can cause deadlocks, particularly in GUI or server synchronization contexts. For instance, in a UI app, calling .Result on a task will block the UI thread and possibly deadlock if the task is trying to resume on the UI context. Always use await to get the result asynchronously instead. If you absolutely must convert to sync (for legacy code), consider task.GetAwaiter().GetResult(), but be aware this can still deadlock in certain environments.
  • Use async all the way: Once an operation is asynchronous (say, reading from a file, database, or web API), propagate that asynchrony up through your call stack. This is often phrased as “async all the way down.” It means if function A calls function B which is async, then A should itself be async and awaited by its caller, and so on. This avoids blocking threads and fully leverages .NET’s async capabilities (like freeing the thread to do other work while awaiting I/O). Modern ASP.NET Core and .NET GUI frameworks are async-friendly – for example, ASP.NET Core will not tie up a thread when you use await in a controller action.
  • Be mindful of context capture: By default, await will capture the current synchronization context and attempt to resume on it. In UI apps, this means code after await resumes on the UI thread (which is usually what you want). In library or background code (like ASP.NET Core, which doesn’t have a synchronization context by default), it resumes on a thread pool thread. If you don’t need to capture context (common in library code or server-side code), you might use ConfigureAwait(false) on awaits to avoid any overhead of context capture (though in .NET Core/5/6+ the impact is minimal since there often is no context). For example:

    string data = await FetchDataAsync(url).ConfigureAwait(false);

    In ASP.NET Core, you typically do not need ConfigureAwait(false) because there’s no context to resume to (it’s already context-free). But in general library code, it’s a good practice for maximum performance and to avoid deadlock risks in legacy contexts.
  • Prefer Task-based APIs: .NET’s libraries have async versions for most I/O-bound operations (e.g., Stream.ReadAsync, DbContext.SaveChangesAsync, etc.). Use them instead of the synchronous versions to keep your application scalable. In .NET 8, this is more important than ever, as server apps are expected to handle thousands of concurrent requests efficiently by not blocking threads on I/O.

Example – Synchronous vs Asynchronous:

// Synchronous HTTP call (not recommended in modern code, will block a thread)
public string GetSiteContent(string url)
{
    var client = new HttpClient();
    // .Result will block until complete
    return client.GetStringAsync(url).Result;  // ⚠️ Potentially blocks calling thread
}

// Asynchronous version
public async Task<string> GetSiteContentAsync(string url)
{
    var client = new HttpClient();
    return await client.GetStringAsync(url);   // ✔ Releases thread while waiting
}

Using async/await, the second version frees up the thread during the web request, allowing other work (or other requests on the server) to proceed. This is crucial in web servers where a blocked thread means fewer requests can be handled.

Visual Studio 2022 provides great debugging support for async code (like viewing task states) and will even warn you if you call an async method without awaiting it. Embrace async/await for any I/O-bound or long-running operations to keep your app responsive and efficient. Remember, the goal of async is not to make things faster per se, but to enable concurrency and throughput by not wasting threads sitting idle. In modern .NET apps, using synchronous calls where async alternatives exist is almost always considered a bad practice.

13. Use async void Only for Event Handlers

In C#, an async method that doesn’t return a Task (or Task<T>) returns void. These async void methods are special: they are not awaitable and their exceptions propagate differently (they’ll bubble up to the synchronization context unobserved by the caller, often crashing the application if not handled). The only valid use-case for async void methods is event handlers (or other delegate-based scenarios that require a void return). In all other cases, your asynchronous methods should return Task/Task<T>.

Why avoid async void (except in events):

  • No way to await/combine: Callers cannot await an async void method, meaning they have no way to know when it’s completed or to catch exceptions from it.
  • Exception handling: Exceptions thrown in an async void method will go directly to the thread’s synchronization context exception handler (often terminating the process if not handled). With Task-returning methods, exceptions are stored in the returned Task and can be observed (the Task will be faulted). This difference is critical for reliability.
  • Composability: Task-returning methods can be easily composed with Task.WhenAll, Task.WhenAny, LINQ’s Select with async lambdas, etc.; async void cannot (see the sketch below).
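
Here is a minimal sketch of that composability (FetchNewsAsync, FetchWeatherAsync, and Display are hypothetical Task-returning methods):

public async Task RefreshDashboardAsync()
{
    Task<string> newsTask = FetchNewsAsync();
    Task<string> weatherTask = FetchWeatherAsync();

    await Task.WhenAll(newsTask, weatherTask);    // both run concurrently; failures are observable

    Display(newsTask.Result, weatherTask.Result); // safe here: both tasks have completed
}

Had these been async void methods, there would be nothing to pass to Task.WhenAll and no way to observe their exceptions.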

Proper usage – GUI event example:

In a Windows Forms or WPF application, event handlers must match a delegate signature that returns void (e.g., void Button_Click(object sender, EventArgs e)). In such cases, using async void is acceptable:

private async void RefreshButton_Click(object sender, EventArgs e)
{
    try 
    {
        RefreshButton.Enabled = false;
        string data = await FetchDataAsync();
        Display(data);
    }
    catch (Exception ex)
    {
        MessageBox.Show($"Error: {ex.Message}");
    }
    finally 
    {
        RefreshButton.Enabled = true;
    }
}

Here, we need async void because the event signature demands void. We at least wrap the contents in try/catch to handle exceptions (since we can’t await this method from elsewhere, we handle internally). This pattern is acceptable for UI events.

Anti-pattern:

public async void ProcessDataAsync() 
{
    // ...
}

And then calling it like ProcessDataAsync(); from code, expecting it to run asynchronously. If the caller doesn’t await it (they can’t, since it’s void), any exceptions inside could crash the app. If the caller needs to wait for it, they have no mechanism to do so. Instead, make it public async Task ProcessDataAsync() so callers can await it.

In libraries and web apps, you should rarely if ever see async void. In ASP.NET Core, controller actions can be async Task<IActionResult> and the framework will await them. In worker services or middleware, everything is Task-based. Analyzers such as Microsoft’s threading analyzers (rule VSTHRD100, “Avoid async void methods”) will warn if they see an async void that isn’t an event handler. As a rule: if you have an async void outside an event, reconsider your design.

14. Never Return null for an Async Task – Use Task.FromResult or Task.CompletedTask

When writing asynchronous methods that return Task or Task<T>, you might encounter scenarios where you want to return a completed result immediately (perhaps because no asynchronous work is actually needed in some branch of logic). In such cases, do not return null for a Task. A null Task will cause an immediate NullReferenceException when awaited by the caller. Instead, return a completed Task using the static helpers:

  • For non-generic Task: return Task.CompletedTask (which gives a pre-completed Task instance).
  • For Task<T>: use Task.FromResult<T>(result) to supply a result (which can be null if T is a reference type and null is a valid result).

Example:

public Task<int> GetNumberAsync(bool getRealData)
{
    if (!getRealData)
    {
        // Return a completed task with a default value
        return Task.FromResult(0);
    }
    // Otherwise, simulate an async operation
    return GetNumberFromDatabaseAsync();
}

If we mistakenly did return null; for the false branch, any await GetNumberAsync(false) in the caller would throw, because you can’t await a null Task. By using Task.FromResult(0), the caller gets a properly completed Task with result 0, and await will work as expected.

For a Task (non-generic) method, e.g.:

public Task SaveDataAsync(Data data, bool actuallySave)
{
    if (!actuallySave)
    {
        return Task.CompletedTask; // nothing to do, return already completed task
    }
    return SaveToDatabaseAsync(data);
}

Task.CompletedTask is a cached, already-completed Task with no result – the non-generic counterpart of what Task.FromResult produces. It is succinct, allocation-free, and signals that there is nothing left to do.

Rationale: When you mark a method async, the C# compiler will always ensure it returns a Task, even if your method returns immediately. If you omit async and want to manually return a Task (sometimes you do this for methods that might be synchronous in some cases and asynchronous in others), you must create a Task. The Task.FromResult or Task.CompletedTask methods give you that. They are very lightweight – they don’t start a new thread or anything; they just give a Task in a completed state. This avoids the cost of actually running an async state machine when you have nothing asynchronous to do.

Advanced: If you have many such synchronous completion paths and performance matters, consider making the method non-async (no state machine) and returning Task directly as in the examples above. If the logic is primarily asynchronous, just keep it async and return the appropriate completed Task when needed. Also, if you have an interface method that returns Task but you implement it synchronously, you can use Task.FromResult in the implementation.

By following this tip, you avoid a class of bugs that can be surprising (null Tasks causing NREs). As one source puts it: “Returning null from non-async Task-returning methods invites NREs. Instead, ensure all Task-returning methods return a Task; use Task.FromResult(null) in place of null.” It’s a simple rule: Task-returning methods should never return null.

15. Don’t Force Garbage Collection (GC.Collect() is Rarely Necessary)

.NET’s garbage collector (GC) is very efficient at managing memory. In almost all cases, you should trust the GC to do its job and avoid manually invoking garbage collection via GC.Collect(). Forcing a collection can severely hurt performance and usually doesn’t solve the underlying memory issues (it may even hide them temporarily).

Why avoid GC.Collect():

  • Performance Impact: GC.Collect() triggers a full collection (you can specify generations, but commonly it’s misused to force full collection). This is a blocking operation that suspends all managed threads while it examines objects in memory. If you call it frequently, your application will stall frequently, negating any benefit.
  • Usually Redundant: The GC is optimized to determine the best time to collect. For example, in Gen0 collections are very fast and happen often; Gen2 are expensive and done infrequently. Forcing collections disrupts these heuristics.
  • Heisenbug potential: Forcing collection might make a memory issue (like a leak) appear to go away in testing, but it doesn’t fix the root cause (unreferenced objects should be collected without explicit calls; if memory is growing, you should identify what’s holding references).
  • Server apps throughput: In ASP.NET or services, a forced collection affects all threads and all requests. It’s almost never desirable in a high-throughput server to do so.

Legitimate use-cases: There are few. Possibly after a known large deallocation phase (e.g., you just unloaded a huge data structure and you know a ton of objects are now garbage and you’re at an app quiescent point – some apps might do a collect in this scenario). Or in some gaming scenarios to control pauses (but even then, it’s tricky). The .NET documentation itself notes that the consequences usually outweigh the benefits, except maybe after a one-time operation that allocated a lot of ephemeral objects.
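
If you ever are in one of those rare situations, a hedged sketch of the accepted pattern looks like this (compacting the large object heap once after discarding a huge data structure; cache is hypothetical):

// After releasing a very large object graph, at an application quiescent point:
cache.Clear();
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();   // one deliberate, documented collection – not a habit

Even here, measure before and after; in most applications this block should simply not exist.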

Example – Bad usage:

void LoadBigData()
{
    LargeObject obj = LoadAllData();
    Process(obj);
    obj = null;
    GC.Collect();    // ❌ Don't do this
    GC.WaitForPendingFinalizers();
}

Some might think the above frees memory right after usage. But if this method is called often, you’ve now introduced major pauses every time. The GC would have likely collected LargeObject on its own when needed.

Better approach: Just remove the GC.Collect(). If memory is a concern, consider:

  • Redesigning to stream data instead of loading all at once.
  • Ensuring disposal of large unmanaged resources (Bitmaps, file handles) promptly (which isn’t exactly GC but resource cleanup).
  • Profiling memory to see if there’s a leak or unnecessary retention.

In .NET 8, the garbage collector has only gotten better (with improvements to large object heap handling, Gen0/Gen1 performance, and more). It’s designed to handle even high-memory scenarios gracefully. By manually intervening, you often work against the runtime’s optimizations. Thus, as a rule, leave GC.Collect() out of your code unless you have a very specific, proven reason.

16. Avoid the goto Statement – Use Structured Control Flow

The goto statement in C# provides an unconditional jump to another point in code. While it is supported (e.g., for breaking out of deeply nested loops or jumping within a switch), it is almost never needed and often makes code harder to understand and maintain. Using goto is widely regarded as a bad practice in high-level languages because it leads to unstructured control flow.

Problems with goto:

  • Readability: Code with goto jumps is harder to follow. It breaks the structured programming paradigm, which relies on loops, conditionals, and functions for flow control. A goto can make the execution order non-linear and create “spaghetti code.”
  • Maintainability: With goto, introducing new code or modifying logic can have unintended consequences, since the jump targets might bypass initialization or cleanup code. It’s easy to accidentally create bugs or infinite loops.
  • Alternatives exist: C# has rich control flow constructs (if/else, for, while, break, continue, switch, exceptions for error handling, etc.) that cover virtually all scenarios where one might be tempted to use goto. Even breaking out of nested loops can be done by refactoring into functions or using flags/conditions.

Example – replacing goto:

// Using goto (not recommended)
start:
int choice = GetChoice();
if(choice == 0) goto end;
ProcessChoice(choice);
goto start;
end:
Console.WriteLine("Exited loop.");

The above simulates a loop using goto. This can be rewritten with a simple while:

while(true)
{
    int choice = GetChoice();
    if(choice == 0) break;
    ProcessChoice(choice);
}
Console.WriteLine("Exited loop.");

This structured loop is immediately clear in intent (loop until choice is 0). It also ensures any loop-scoped variables are properly handled each iteration. The goto version had manual labels and jumps that are more error-prone (imagine if there was code between ProcessChoice and goto start – it would execute every iteration unexpectedly, whereas in the while loop you’d naturally include it inside or outside the loop as appropriate).

When is goto acceptable? Almost never, but one known case is within a switch when you want to do something like fall through to another case label intentionally (though C# now has better ways like pattern matching, goto case or using multiple labels on one block). Even then, such designs can often be refactored for clarity. Another case is breaking out of multiple nested loops: one could use a labeled break in some languages, but C# doesn’t have labeled breaks, so some resort to goto to break out multiple levels. A cleaner approach is to use a boolean flag or extract the inner loop logic to a function that you can return out of.
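
For completeness, a minimal sketch of the switch case that is sometimes cited as acceptable (goto case jumps to another case label; command, Cleanup, and Run are hypothetical):

switch (command)
{
    case "restart":
        Cleanup();
        goto case "start";   // deliberately re-enter the "start" case
    case "start":
        Run();
        break;
}

Even this is often better expressed by extracting the shared logic into a method called from both cases.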

Because goto is so rarely needed, its presence in code is often a “code smell” that signals a possible design issue. Many coding standards outright ban it. Visual Studio won’t stop you from using it, but you’ll seldom find it in well-reviewed code on GitHub or in production frameworks. So, prefer structured control flow and leave goto behind.

17. Remove Dead or Commented-Out Code – Rely on Source Control for History

As projects evolve, it’s common to have bits of code that become unused or outdated. You might be tempted to just comment out those sections “just in case.” However, littering your codebase with commented-out code is bad practice. Remove unused code rather than commenting it out. Modern source control (Git, etc.) preserves history, so you can always retrieve old code if needed.

Why remove instead of comment:

  • Clarity: Commented-out code is noise. It can confuse readers (Is this code still relevant? Was it left by accident? Do I need to consider it?). Clean code means having only what’s in use or intended to be in use.
  • Maintenance: If requirements change and that old code needs to be revisited, it’s better to bring it back intentionally from version control, rather than dragging along stale code that might not even compile anymore.
  • Dead code risk: Sometimes, large commented sections might accidentally be left in, and someone might uncomment without full context or it might stay forever without anyone verifying if it’s needed. It’s safer to remove it decisively.
  • Tooling: Tools like analyzers and linters can detect unused code (like private methods never called, etc.). It’s often best to delete such code. If you think it might be needed in the future, it can live in source control history or a future feature branch.
  • Team Understanding: Other developers might not know why something is commented out. A clean deletion (with a good commit message referencing why) is more explicit.

Example:

public void ProcessData(Data input)
{
    // Old approach:
    // for(int i=0; i<input.Items.Count; i++) {
    //    LegacyProcess(input.Items[i]);
    // }
    // We now use the new pipeline:
    foreach(var item in input.Items)
    {
        NewProcess(item);
    }
}

Instead of keeping the old loop in comments, remove it once the new pipeline is confirmed working. If later you realize something from the old approach is needed, you can retrieve that from source history (e.g., using git blame or viewing the file’s history in Azure DevOps/GitHub).

Best Practice: Use descriptive commit messages when removing code (e.g., “Remove legacy processing loop, replaced by NewProcess pipeline”). This makes it easier to find in history. If you’re not entirely sure removal is correct, feature-toggle the new code or use version control branching, rather than leaving commented code in main.

Additionally, if code is truly dead (like a private method never called), remove it – it reduces build times slightly and cognitive load significantly. If an entire feature is deprecated, consider removing related code en masse in one commit.

Remember, your codebase is like a garden – commented-out code is dead weeds cluttering the landscape. Lean on source control as your long-term memory; keep the working codebase clean. This adheres to the principle: “Unused code should be deleted and can be retrieved from source control history if required.”

18. Implement Equality and Hashing in Tandem (Override Equals and GetHashCode Together)

If you override Equals(object) in a class (to provide value-based equality semantics), you must also override GetHashCode() in a manner consistent with equality. The general contract in .NET is: if a.Equals(b) is true, then a.GetHashCode() must equal b.GetHashCode(). Failing to do so can lead to incorrect behavior when your objects are used in hash-based collections like Dictionary or HashSet.

Guidelines for custom equality:

  • Override both or neither: If you override one, override the other. Otherwise, you might violate the contract above. For instance, consider:

    class Point
    {
        public int X, Y;
        public override bool Equals(object obj) => obj is Point p && p.X == X && p.Y == Y;
        // GetHashCode not overridden - inherits from object
    }

    Two Point instances with the same X and Y will be Equal, but since we didn’t override GetHashCode, each instance could have a different hash code (the default GetHashCode often uses the object reference). If you put these in a HashSet, you could end up with duplicate entries because the HashSet thinks they belong in different buckets. The fix is to override GetHashCode() to combine X and Y into a hash (e.g., return HashCode.Combine(X, Y); in .NET Core+).
  • Immutability and Hash Fields: Ideally, use only immutable fields in equality comparisons and hash code calculations. If a field can change after object creation, it shouldn’t be used in GetHashCode() because changing an object’s hash code while it’s in a hash collection breaks the collection. For this reason, you might mark fields used in hashing as readonly. As an example from best practices: “GetHashCode should not reference mutable fields”. If an object’s state can change in a way that affects equality, you typically should not use it as a key in a dictionary unless you manage it carefully.
  • Use helper methods: In .NET, the static System.HashCode.Combine(f1, f2, ...) method (or writing your own simple combination) helps generate a good hash code. Avoid naive or extremely simple hashes (like just XORing or just adding fields) if those could lead to collisions for typical values.
  • Equality operator: If you override Equals in a reference type for value equality, you might also want to overload the == and != operators to align with that (so that a == b gives the same result as a.Equals(b)). Alternatively, some choose not to overload == at all for reference types (keeping reference equality) – see Tip #19 about caution with operator==. If you do overload ==, again, implement both == and != for consistency.

Example:

class Person : IEquatable<Person>
{
    public string SSN;  // Assume SSN uniquely identifies a person
    public string Name;
    public override bool Equals(object obj) => Equals(obj as Person);
    public bool Equals(Person other) => other != null && this.SSN == other.SSN;
    public override int GetHashCode() => SSN?.GetHashCode() ?? 0;
}

In this example, Person defines equality by SSN. We override both Equals and GetHashCode. Now Dictionary<Person, ...> will behave correctly (two Person objects with the same SSN will hash to the same bucket and be considered equal keys). If we omitted GetHashCode, two equal Persons might not be treated as equal keys.

Visual Studio may give you quick fixes to generate equality members (there’s even a snippet to implement IEquatable<T> and both methods). Many IDEs and analyzers warn if you override one without the other.

Also note: With C# 9, records were introduced, which by default implement value equality (all properties count in equality) and proper hashing for you. If using records for data carriers, you get all this for free. But for classes where you need custom equality logic, remember to do it in tandem to keep collections and algorithms working correctly.
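
A quick demonstration of what records give you for free (PersonRecord is illustrative):

public record PersonRecord(string Ssn, string Name);

var a = new PersonRecord("123-45-6789", "Alice");
var b = new PersonRecord("123-45-6789", "Alice");

Console.WriteLine(a == b);                              // True – value equality across all properties
Console.WriteLine(a.GetHashCode() == b.GetHashCode());  // True – hashing stays consistent with equality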

19. Use Static Code Analysis and Linters to Enforce Best Practices

Even seasoned developers can slip up on best practices. This is where static code analysis tools come in. Visual Studio 2022 and the Roslyn compiler provide a wealth of analyzers and code style settings that can automatically flag code that doesn’t meet certain guidelines. Taking advantage of these tools helps maintain code quality and consistency across a team.

Built-in Analyzers: Visual Studio includes many analyzers (formerly FxCop analyzers, now part of .NET analyzers) that cover reliability, design, performance, and style. For example:

  • CA (Code Analysis) rules for security and globalization.
  • IDE rules for style (e.g., IDE0059 warns about unnecessary value assignments, IDE0063 suggests using using statement simplifications, etc.).
  • Specific rules like IDE0019 suggest using pattern matching instead of as + null-check, etc.

These can be configured in EditorConfig files to set team-wide preferences. For instance, you can enforce var usage preferences, naming conventions (you can have the analyzer ensure private fields start with _ or that interfaces start with I, etc.), and more.
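
As a sketch, a few .editorconfig entries of the kind described above might look like this (the severities and the underscore-prefix rule are illustrative team choices, not defaults):

[*.cs]
# Prefer var when the type is apparent (IDE0007/IDE0008)
csharp_style_var_when_type_is_apparent = true:suggestion
csharp_style_var_elsewhere = false:suggestion

# Require an underscore prefix on private fields
dotnet_naming_rule.private_fields_underscored.symbols  = private_fields
dotnet_naming_rule.private_fields_underscored.style    = underscored_camel
dotnet_naming_rule.private_fields_underscored.severity = warning
dotnet_naming_symbols.private_fields.applicable_kinds = field
dotnet_naming_symbols.private_fields.applicable_accessibilities = private
dotnet_naming_style.underscored_camel.capitalization  = camel_case
dotnet_naming_style.underscored_camel.required_prefix = _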

Roslyn Analyzers and NuGet packages: You can add analyzers like StyleCop.Analyzers, Roslynator, or domain-specific ones (XUnit analyzers, for example) to catch issues. There’s also SonarAnalyzer for C# which flags code smells (many that overlap with the tips here, such as not using goto, using StringBuilder in loops, etc.). These run during compile and can even be set to break the build if certain rules are violated (treating them as warnings or errors).

Continuous Integration: Integrate analysis into your CI pipeline. Tools like SonarQube or ReSharper Command-Line Tools can generate reports on code quality. This helps catch issues before code is merged.

Code Metrics: Visual Studio can compute code metrics (maintainability index, cyclomatic complexity, etc.). While not a direct analyzer, these metrics can highlight overly complex methods that might need refactoring (for instance, a very high cyclomatic complexity might prompt you to break a function into simpler parts, aligning with tip #21 on single responsibility).

Quick Actions and Refactoring Tools: Visual Studio’s Quick Actions (the lightbulb suggestions) often present fixes that implement best practices:

  • Implement IDisposable pattern correctly.
  • Use string interpolation instead of string.Format (as we discussed in Tip #2).
  • Simplify LINQ expressions or use pattern matching.
  • Remove unused variables/usings (keeping code clean).

By paying attention to these and applying them, your code improves consistently. For example, the IDE might suggest “Use ‘var’ when the type is apparent” if that’s your configured style, or conversely “Use explicit type” if that’s your preference. It might recommend “Add readonly modifier” where applicable, or “Simplify conditional” etc.

Example scenario: After writing some code, you see green squiggles or suggestions like:

  • “IDE0067: Dispose objects before losing scope” – reminding you to dispose or use using for an IDisposable.
  • “IDE0052: Private member X is never used” – perhaps flagging dead code to remove.
  • “CA2007: Consider calling ConfigureAwait on the awaited task” in a library context – reminding you of async best practice in libraries.

By addressing these, you align with best practices automatically.
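
For instance, addressing the CA2007 example above in library code might look like the following sketch (ReadAllAsync is a hypothetical method):

public async Task<string> ReadAllAsync(Stream stream)
{
    using var reader = new StreamReader(stream);
    // In library code, avoid capturing the caller's synchronization context
    return await reader.ReadToEndAsync().ConfigureAwait(false);
}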

In summary, don’t code in a vacuum – use the tools at your disposal. C# and VS have a rich ecosystem aimed at helping you enforce the very tips we’re discussing:

“.NET analyzers inspect your C# code for code quality and style issues.”

So turn them on (many are on by default in VS 2022), configure your .editorconfig for team conventions, and treat analyzer warnings as actionable. Over time, you’ll find your codebase stays much cleaner, and you’ll internalize many of the patterns as well.

20. Favor Composition Over Inheritance for Reusability and Flexibility

When designing classes and relationships, it’s often better to compose behavior from multiple classes or interfaces, rather than rely on deep inheritance hierarchies. The mantra “Favor composition over inheritance” means you should consider whether you can achieve code reuse by having one class use another (via properties or method calls) instead of subclassing.

Why favor composition:

  • Reduced Coupling: Inheritance creates a tight parent-child coupling. Subclasses are bound to the behavior of base classes (and often to their quirks). Composition is more flexible – you can swap out components or change relationships at runtime.
  • Greater Flexibility: With composition, you can mix and match behaviors easily (see design patterns like Strategy, where you compose an object with a behavior). Inheritance hierarchies tend to be rigid; composition allows designs like delegation, where an object delegates work to a collaborator object implementing an interface.
  • Avoids Inheritance Pitfalls: Deep inheritance can lead to the fragile base class problem (a change in a base class can break subclasses unexpectedly), the diamond problem (one reason C# disallows multiple inheritance for classes), and hard-to-follow flow (you often have to understand all ancestors to know what a subclass really does). Composition confines complexity – you look at what the class does with its members.
  • Testing and Maintenance: Composed classes are often easier to unit test because you can stub out or mock their dependencies (if you design to interfaces, which ties into DI as well). With inheritance, you might have to create subclass test doubles or use virtual/abstract methods to override behavior for tests.

Example – Composition vs Inheritance:

Imagine you have classes for different types of notifications: EmailNotification and SmsNotification. Both need to log their actions, and both send messages. One might think to use inheritance:

class Notification { public void Log(string message) { /* ... */ } }
class EmailNotification : Notification { public void SendEmail(Message m) { /* ... */ } }
class SmsNotification : Notification { public void SendSms(Message m) { /* ... */ } }

This inheritance isn’t really providing reuse except for Log(). A better approach might be:

interface INotifier { void Send(Message m); }

class EmailNotifier : INotifier {
    private readonly ILogger _logger;
    public EmailNotifier(ILogger logger) { _logger = logger; }
    public void Send(Message m) {
        // send email
        _logger.Log($"Email sent to {m.To}");
    }
}

class SmsNotifier : INotifier {
    private readonly ILogger _logger;
    public SmsNotifier(ILogger logger) { _logger = logger; }
    public void Send(Message m) {
        // send SMS
        _logger.Log($"SMS sent to {m.To}");
    }
}

Here we use composition: each notifier has a logger rather than is a Notification. We can easily unit test by providing a fake ILogger. If we want to add a PushNotifier later, we implement INotifier without being constrained by a common base class’s design. And if logging is optional, we simply compose with or without a logger. This is much more flexible than logging living in a base class that everything must inherit.

When to use inheritance: It’s appropriate when there truly is a clear “is-a” relationship and you want to leverage polymorphism via base classes. C#’s frameworks use inheritance for things like stream types (FileStream and MemoryStream inherit from Stream) because they share a contract and some base behavior. But even there, notice how often the framework prefers interfaces (e.g., IEnumerable, IDisposable). If multiple inheritance of implementation is needed, composition is the only way (since C# doesn’t support multiple base classes). Default interface methods (C# 8) even allow some base implementation via interfaces now, reducing the need for abstract base classes in some cases.

Architecture perspective: Composition aligns with the Single Responsibility Principle (SRP) too. Instead of one class doing multiple things via inheritance, you might have multiple smaller classes each handling one aspect, and then compose them. For instance, a ReportGenerator class might compose an IDataFetcher and an IReportFormatter instead of inheriting from a huge all-in-one base class.
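
A minimal sketch of that ReportGenerator idea (the interfaces and the Record type are hypothetical):

public record Record(string Name, decimal Value);

public interface IDataFetcher { IEnumerable<Record> Fetch(); }
public interface IReportFormatter { string Format(IEnumerable<Record> data); }

public class ReportGenerator
{
    private readonly IDataFetcher _fetcher;
    private readonly IReportFormatter _formatter;

    public ReportGenerator(IDataFetcher fetcher, IReportFormatter formatter)
    {
        _fetcher = fetcher;
        _formatter = formatter;
    }

    // ReportGenerator only orchestrates; fetching and formatting live in their own classes
    public string Generate() => _formatter.Format(_fetcher.Fetch());
}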

In conclusion, before reaching for inheritance, ask “Can I achieve this by containing an object of that type instead?” More often than not, the answer leads to a cleaner design. Effective use of interfaces and DI (as covered in Tip #10) encourages composition. Many modern design patterns (Decorator, Strategy, etc.) are based on composition. Choose the approach that yields simpler, modular classes.

21. Adhere to the Single Responsibility Principle (SRP)

Each class or module in your application should have one responsibility or reason to change. This is the “S” in SOLID principles and is crucial for writing clean, maintainable code. A class following SRP does one thing (or a closely related set of things) and does it well. If requirements change related to that one thing, the class might need to change; but changes in unrelated functionality shouldn’t affect it.

Why SRP matters:

  • Easier to Understand: A class with one purpose is easier to grasp. You can describe it in a concise sentence. If you find yourself using “and” or “or” while describing a class’s duties, it probably has more than one responsibility.
  • Lower Coupling: SRP inherently reduces coupling because a class is not entangled with many different parts of the system. It interacts with others only in limited, well-defined ways.
  • Simpler Testing: A focused class can be unit tested in isolation without setting up a lot of unrelated context. Fewer branches of logic per class means fewer tests to cover each class’s behavior.
  • Resilience to Change: When a requirement changes (say, UI formatting rules, or business logic for pricing, etc.), ideally only one class (or a small set of classes) should be affected if SRP is observed. If multiple responsibilities are mixed, a single change can ripple through a large class or, worse, accidental changes can occur in one responsibility while modifying another.
  • Reusability: If classes are granular, you might find it easier to reuse them in different contexts. E.g., a class that only handles CSV parsing could be reused anywhere CSV input is needed, rather than having that logic baked into a broader “ReportGeneratorAndExporter” class that can’t be reused elsewhere.

Warning signs of violating SRP:

  • A class has “Manager” or “Helper” in its name and has a wide mix of methods. E.g., GodClassManager that manages database, UI and business logic all together.
  • The class is hundreds or thousands of lines long. Length often correlates with doing too much (not always – a single responsibility can involve a lot of code – but it’s a hint).
  • The class has a lot of dependencies (if you see 5, 6, 10 constructor parameters, it might be handling too many things – consider if it should be split).
  • You find yourself frequently modifying the same class for different reasons (today you change how it logs something, tomorrow change a validation rule, next day change an algorithm – possibly multiple responsibilities).

Example:

Suppose you have an OrderProcessor class that does: validate order, calculate pricing, apply discounts, save to database, and send confirmation email. That’s at least five responsibilities in one. Following SRP, you might refactor into:

  • OrderValidator (validates order details).
  • PricingService (calculates total, applies discounts – perhaps these could even be separate strategies).
  • OrderRepository (handles database save).
  • EmailNotifier (sends emails).

Now OrderProcessor might orchestrate these, but each piece has a single job. Changes to discount logic affect only PricingService. Changing email template affects only EmailNotifier, etc.
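
A rough sketch of that orchestration (the method names are illustrative, not a prescribed API):

public class OrderProcessor
{
    private readonly OrderValidator _validator;
    private readonly PricingService _pricing;
    private readonly OrderRepository _repository;
    private readonly EmailNotifier _notifier;

    public OrderProcessor(OrderValidator validator, PricingService pricing,
                          OrderRepository repository, EmailNotifier notifier)
    {
        _validator = validator;
        _pricing = pricing;
        _repository = repository;
        _notifier = notifier;
    }

    public void Process(Order order)
    {
        _validator.Validate(order);         // changes only for validation rules
        _pricing.ApplyPricing(order);       // changes only for pricing/discount logic
        _repository.Save(order);            // changes only for persistence concerns
        _notifier.SendConfirmation(order);  // changes only for notification concerns
    }
}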

Visual Studio won’t automatically enforce SRP (it’s more a design principle than a code rule), but code metrics (maintainability index, cyclomatic complexity) can hint at violations – huge classes or very complex methods often mean SRP is being stretched. Design reviews and tools like NDepend can analyze dependencies between classes to spot god classes.

By keeping SRP in mind, you contribute to an architecture where each part can be worked on, tested, and understood in isolation. Modern .NET development, with patterns like microservices, also scales this concept up: services (or microservices) themselves often focus on a narrow business capability (an SRP at the system level). Start with SRP in classes and methods first – e.g., ensure even methods do one thing (if you have a method doing a sequence of distinct tasks, maybe split it into multiple methods each handling one task).

22. Use Properties and Encapsulation Instead of Public Fields

In C#, it’s recommended to use properties (with get/set accessors) to expose class data, rather than public fields. Properties allow you to encapsulate the internal representation of data and add logic when getting or setting values, if needed. Public fields, on the other hand, expose implementation details and cannot be controlled or validated once they’re out in the wild.

Benefits of properties:

  • Encapsulation and Validation: With a property, you have the opportunity to enforce invariants. For example, you can ensure a value is within a range: private int _age; public int Age { get => _age; set { if(value < 0) throw new ArgumentOutOfRangeException(nameof(Age)); _age = value; } } If Age were a public field, any code could set it to a negative number without an immediate error.
  • Future-Proofing: Maybe today a property is a simple wrapper around a field (and you use auto-properties like public string Name { get; set; }). Tomorrow, you might need to log changes to Name, or compute it on the fly, or raise a PropertyChanged event (in MVVM). If you used a public field, changing it to a property later is a breaking change for any consumer (binary incompatibility and different semantics). If you start with a property (even auto-implemented), you can later add logic without affecting the external API.
  • Data Binding: Frameworks (like WPF, UWP, ASP.NET MVC) rely on properties for data binding and serialization. Public properties (especially with {get; set;}) are commonly required for model binding, whereas fields might be ignored/refused by these frameworks.
  • Read-only exposure: You can pair a public getter with a private setter (or no setter at all), allowing read-only access for consumers while the class can still set the value internally. With fields there is no such fine-grained control – a field is either fully public or it isn’t. public string Id { get; } = Guid.NewGuid().ToString(); // publicly readable, assignable only at initialization or in the constructor

Example:

Instead of:

public class Circle {
    public double radius;  // not encapsulated
}

Use:

public class Circle {
    private double _radius;
    public double Radius 
    { 
        get => _radius;
        set 
        {
            _radius = value;
            Area = Math.PI * _radius * _radius; // example of derived property update
        }
    }
    public double Area { get; private set; }
}

Here, we used the setter of Radius to recalculate Area whenever the radius changes. If radius were a public field, we couldn’t automate that; users of the class would have to know to update Area themselves. Encapsulation ensures the object manages its internal consistency.

Auto-properties: C# has made properties almost as lightweight to declare as fields:

public string Name { get; set; }  // auto-property with implicit backing field

This compiles to a private field with get/set accessors. If you later need to add validation to the set, you can change it to a full property. Code outside the class still works the same.

Records and init-only: In modern C#, you can use public string Name { get; init; } for properties that are settable only during object initialization (like in a constructor or object initializer). This gives you the immutability benefits of a readonly field but with the syntax of properties.
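
For example, a small sketch assuming a Person class with an init-only Name property:

public class Person
{
    public string Name { get; init; } = "";
}

// Usage:
var person = new Person { Name = "Alice" };  // OK: set during object initialization
person.Name = "Bob";                         // Compile-time error: init-only property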

Exception (for fields): Public constants (public const) or public static readonly fields are acceptable for things like mathematical constants, configuration keys, etc., because they’re essentially treated like constants by consumers. But for instance data, prefer properties.
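
For instance (a hypothetical example – note that a TimeSpan cannot be a const, which is exactly the niche static readonly fills):

public static class AppDefaults
{
    public const int MaxRetries = 3;                                     // compile-time constant
    public static readonly TimeSpan Timeout = TimeSpan.FromSeconds(30);  // initialized once at runtime
}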

Encapsulation is a core principle of OOP – it helps create a clear boundary around the state of an object. By using properties, your class retains control over its data. Visual Studio even has refactorings to convert fields to properties. It’s wise to start with properties from the get-go to avoid refactoring later. In summary, treat public fields as an anomaly; the norm should be properties for exposing data from a class.

23. Utilize Pattern Matching for Cleaner Conditional Logic

C#’s pattern matching features (enhanced in C# 7.0 through C# 10 and beyond) allow you to write clearer and more concise conditional code, especially when dealing with type checks, null checks, or specific value comparisons. Pattern matching can make complex if-else chains or switch statements simpler and less error-prone.

Key pattern matching constructs:

  • is Pattern Expressions: Instead of using as plus a null check, or a separate is test followed by a cast, you can check the type and capture the result in one step:

    if (obj is string s)
    {
        // obj is a string here, already cast and bound to s
        Console.WriteLine($"It's a string of length {s.Length}");
    }

    This is more concise than var s = obj as string; if (s != null) { ... } or the older if (obj is string) { string s = (string)obj; ... }
  • switch Expressions with Patterns: The switch statement/expression supports patterns in its cases. You can match types, specific values, or even properties via when guards or property patterns:

    switch (shape)
    {
        case Circle c:
            Console.WriteLine($"Circle with radius {c.Radius}");
            break;
        case Rectangle { Width: var w, Height: var h }:
            Console.WriteLine($"Rectangle area: {w * h}");
            break;
        case null:
            Console.WriteLine("Shape is null");
            break;
        default:
            Console.WriteLine("Unknown shape");
            break;
    }

    In the Rectangle case, a property pattern extracts Width and Height directly, avoiding a nested if to check the type and separate code to read the properties.
  • Relational Patterns and Logical Patterns: You can use comparisons inside patterns (C# 9+):

    string DescribeAge(int age) => age switch
    {
        < 0 => "Unborn",
        >= 0 and < 18 => "Minor",
        >= 18 and < 65 => "Adult",
        >= 65 => "Senior"
    };

    This switch expression cleanly maps ranges of values to results – much nicer than a chain of if-else. (The arms above cover every possible int, so a discard arm _ would be redundant.)
  • not null pattern: Instead of if (x != null), you can do if(x is not null). It’s especially handy in switch cases to handle non-null as a pattern (like case not null:).

Benefits:

  • Clarity: Pattern matching often reduces boilerplate and makes the condition itself express the intent more directly. It improves readability by localizing type or value checks with their corresponding actions.
  • Safety: When you use patterns like if(x is SomeType st), the variable st is strongly typed in that scope – reducing casting errors. The compiler’s flow analysis also recognizes these patterns for definite assignment and null-state (for nullable reference types).
  • Maintainability: Switch expressions with patterns can replace large if-else-if ladders in a more tabular form, which is easier to modify (just add or remove cases), and the compiler can often warn when you forget to handle a case – for example, a non-exhaustive switch expression over an enum produces a warning (see the sketch after this list).
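
As a quick sketch of that compiler assistance (Status is a hypothetical enum):

enum Status { Active, Inactive, Pending }

string Describe(Status status) => status switch
{
    Status.Active => "Currently active",
    Status.Inactive => "Not active"
    // ⚠️ Warning: the switch expression does not handle all possible values (Status.Pending is missing)
};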

Real-world example:

Without pattern matching:

if(shape is Circle)
{
    var c = (Circle)shape;
    // ...
}
else if(shape is Rectangle)
{
    var r = (Rectangle)shape;
    // ...
}
else if(shape == null)
{
    // ...
}

With pattern matching:

switch(shape)
{
    case Circle c:
        // use c
        break;
    case Rectangle r:
        // use r
        break;
    case null:
        // null handling
        break;
}

It’s succinct and each case is self-contained. It also signals clearly that these are alternative shapes.

Even simple patterns like:

if(sender is Button btn && btn.Content is string text && text.Contains("OK"))
{ /* handle specifically OK button */ }

can combine multiple checks elegantly in one if statement.

C#’s pattern matching features improve with each version (C# 9 introduced and/or/not and relational patterns, C# 8 added recursive patterns, etc.), enabling a more declarative style. The Microsoft documentation notes that “Pattern matching provides more concise syntax for testing expressions and taking action when an expression matches,” and that it “improve[s] the readability and correctness of your code.” So whenever you find yourself writing verbose type-checking or branching logic, consider whether pattern matching can simplify it.

24. Use Object and Collection Initializers for Concise, Clear Setup

C# provides object initializers and collection initializers that let you assign values to properties/fields or add items to collections at the time of object creation, all in one expression. Using these initializers can make code more readable by showing the object’s configuration in one place, and it reduces the need for repetitive boilerplate (like calling a lot of add methods or property setters sequentially).

Object Initializers:

Instead of:

var person = new Person();
person.FirstName = "John";
person.LastName = "Doe";
person.Age = 30;

You can write:

var person = new Person 
{
    FirstName = "John",
    LastName = "Doe",
    Age = 30
};

This sets the properties in a block right after construction. Under the hood, it’s equivalent to the sequential calls, but it’s more compact and clear. It’s especially handy if the constructor doesn’t take those parameters, or if you’re initializing an object with many optional settings.

Collection Initializers:

Instead of:

var numbers = new List<int>();
numbers.Add(1);
numbers.Add(2);
numbers.Add(3);

You can do:

var numbers = new List<int> { 1, 2, 3 };

This calls Add for each item internally. Similarly for dictionaries:

var map = new Dictionary<string, int> 
{
    ["one"] = 1,
    ["two"] = 2
};

This uses index initializer syntax (C# 6+), which calls the dictionary’s indexer setter. The alternative form new Dictionary<string, int> { { "one", 1 }, { "two", 2 } } calls Add instead.

For custom collection types, you can also enable collection initializer syntax by implementing IEnumerable and exposing a suitable public Add method, as shown below.
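
Here is a minimal sketch of such a type (Playlist is hypothetical):

using System.Collections;
using System.Collections.Generic;

public class Playlist : IEnumerable<string>
{
    private readonly List<string> _tracks = new();

    public void Add(string track) => _tracks.Add(track);  // enables collection initializer syntax

    public IEnumerator<string> GetEnumerator() => _tracks.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

// Usage:
var playlist = new Playlist { "Intro", "Main Theme", "Outro" };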

Benefits:

  • Readability: Initializers show the data of interest without noise. When you look at code with initializers, you see the structure of the data being created. This is akin to a literal. For example, constructing a complex graph of objects is much easier to comprehend when expressed as nested initializers than a long sequence of statements.
  • Immutability helper: You can create and populate an object in one go without needing it to be mutable afterward. For example, you can populate a list within the initializer and then not modify it further (perhaps even make it ReadOnlyCollection).
  • No need for multiple constructors: Instead of providing many overloads of constructors for various combinations, you can have a default constructor and let object initializers set the needed properties. This is often simpler and more expressive.

Example – creating a complex object:

var order = new Order 
{
    Id = 1001,
    Customer = new Customer 
    {
        Name = "Contoso Ltd",
        Address = new Address 
        {
            Street = "1 Microsoft Way",
            City = "Redmond",
            State = "WA",
            ZIP = "98052"
        }
    },
    Items = new List<OrderItem>
    {
        new OrderItem { ProductId = 10, Quantity = 5 },
        new OrderItem { ProductId = 20, Quantity = 2 }
    }
};

This one expression builds an Order with nested Customer and Address, and a list of items. It’s very clear how the data is structured. If written without initializers, it would require multiple intermediate statements, making it harder to see the overall shape.

Under the hood: The compiler translates object initializers into the constructor call followed by property assignments. Collection initializers are translated into calls to Add. So performance-wise, it’s the same as doing those operations explicitly. There’s no magic beyond syntax convenience.

Using initializers also reduces potential errors by keeping related setup logic together. It’s less likely you’ll forget to set a property if you see them all in one initializer.

Modern C# has even extended initializers: you can initialize indexers, and with records (C# 9) you can use with-expressions – a different feature, but one that similarly aims to create modified copies of objects succinctly.
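
For instance, a quick sketch of a with-expression (Person here is a hypothetical record):

public record Person(string FirstName, string LastName);

var alice = new Person("Alice", "Smith");
var renamed = alice with { LastName = "Jones" };  // copies alice, changing only LastName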

In conclusion, object and collection initializers make code more fluent and intention-revealing. They are widely used in contemporary C# projects – for instance, configuring options in an ASP.NET Core app often involves initializers. Embrace them to simplify object creation and make your code more declarative.

25. Enable and Embrace Nullable Reference Types (NRT) for Null-Safety

Null reference errors are a notorious source of bugs (the famous “billion-dollar mistake”). C# 8.0 addressed this with the nullable reference types (NRT) feature, available from .NET Core 3.0 onward (and enabled by default in new .NET 6/7/8 project templates). With NRT enabled, the compiler treats reference types as non-nullable by default and warns you when you might be introducing a null reference bug. This pushes you to explicitly acknowledge when something can be null, leading to safer code.

Key aspects of NRT:

  • Non-nullable by default: When NRT is enabled, a declaration like string name; means name is non-nullable – the compiler warns if you assign null to it, and as long as the code compiles without nullability warnings, you can assume it is never null.
  • Use ? for nullable references: If a reference may be null, you declare it as string? name;. This tells the compiler and readers that name might be null. The compiler then forces you to handle that possibility (e.g., check for null before dereferencing, or use the null-forgiving operator ! if you explicitly want to override a warning).
  • Compiler warnings: The compiler analyzes code flow to issue warnings, e.g., “Possibly null reference” if you call a method on something that could be null, or “Nullability mismatch” if you pass a nullable into a parameter that doesn’t accept null. By addressing these warnings, you greatly reduce runtime NullReferenceExceptions.
  • Nullable attributes: Under the hood, the compiler uses attributes in metadata to indicate where nulls are allowed. This means if you use libraries that also have NRT enabled, you get warnings if you misuse their APIs with nulls.

Example:

#nullable enable  // Usually enabled project-wide in .csproj or by default in new projects

string message = "Hello";
message = null; // ⚠️ Warning: Cannot convert null to 'string' because it is non-nullable

string? optionalMessage = null;
Console.WriteLine(optionalMessage.Length); // ⚠️ Warning: Dereference of a possibly null reference

if(optionalMessage != null)
{
    Console.WriteLine(optionalMessage.Length); // OK after null-check
}

In this example, the compiler protects us:

  • It won’t let us set message to null because it’s not declared nullable.
  • It warns about using optionalMessage without checking for null.

By following these, the chance of a NullReferenceException at runtime goes way down. Essentially, potential null issues are caught at compile time rather than crashing at runtime.

Best Practices with NRT:

  • Enable it in all new projects (the default in .NET 6+ templates is enabled). For older projects, consider enabling it and addressing warnings gradually (you can do it file by file with #nullable enable).
  • Whenever a variable, field, or return value is intended to allow null, annotate it as nullable (with ?). This documents the behavior explicitly for both the compiler and readers.
  • Use helper constructs for handling nulls: The null-coalescing operator (??) is great for fallback values: string displayName = user.Name ?? "Guest"; The null-conditional operator (?.) helps safely traverse objects: int? length = optionalMessage?.Length; This yields null if optionalMessage is null instead of throwing. These make code more concise than explicit if-checks and are clear in intent (especially for property chains).
  • Consider ArgumentNullException.ThrowIfNull(param) (available since .NET 6) at the start of methods for parameters that shouldn’t be null. NRT warns when a caller passes null to a non-nullable parameter, but it’s still good to validate at runtime in public APIs (see the sketch after this list).
  • When you know something isn’t null for reasons the compiler can’t see, use Debug.Assert(x != null) (its annotations inform the compiler’s null-state analysis) or, as a last resort, the null-forgiving operator ! to suppress the warning in that scope.
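
A small sketch combining these ideas (Greet is a hypothetical method; ThrowIfNull requires .NET 6+):

public void Greet(string? name)
{
    ArgumentNullException.ThrowIfNull(name);         // runtime guard; also updates the compiler's null-state
    Console.WriteLine($"Hello, {name.ToUpper()}!");  // no warning: name is known non-null here
}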

Remember, value types (int, bool, etc.) are unaffected by NRT – they’ve always had explicit nullable counterparts (int? etc.). NRT is about reference types. The goal is to make nulls explicit and rare; codebases that adopt NRT tend to see far fewer null-related bugs. As Microsoft’s docs say, “Nullable reference types are a feature that minimize the likelihood that your code causes the runtime to throw NullReferenceException.”

In modern .NET, it’s expected that you use this feature. Most new APIs are annotated for NRT. So embracing it will make consuming those APIs safer too (the compiler will guide you). It’s a prime example of the language helping you enforce a long-standing best practice: be aware of nulls and handle them properly.


By implementing these 25 tips in your C# projects, you’ll write code that is cleaner, safer, and more in line with the expectations of modern .NET development. Many of these practices are supported by features in Visual Studio 2022 and .NET 8+, from analyzers warning you of issues to language enhancements that simplify doing the right thing. Keep this guide as a reference, and over time these will become second-nature in your daily coding. Happy coding in C#!

Sources:

  1. Snoek, Z. Returning null from Task-returning methods in C# – Explains why returning null for a Task is problematic and how the async state machine wraps return values.
  2. Microsoft Docs – .NET Coding Conventions – Official guidelines on style, including var usage and exception handling (catch specific exceptions).
  3. C# Corner – Common Code Smell Mistakes In C# – Demonstrates pitfalls like throw ex vs throw and string concatenation in loops.
  4. Microsoft Docs – Using Statement (C# Reference) – Describes how using ensures IDisposable objects are disposed even in exceptions.
  5. Reddit (/r/dotnet) – Discussion on C# naming conventions – Summarizes common practices (PascalCase vs camelCase).
  6. HackerNoon – Top 25 C# Programming Tips – A collection of best practices and code examples, including avoiding goto and not commenting out code but deleting it.
  7. HackerNoon – A Detailed Guide to String Concatenation in .NET – Recommends string interpolation for cleaner and more memory-efficient string formatting.
  8. Microsoft Docs – Pattern Matching – Describes pattern matching’s benefits for readability and correctness.
  9. Microsoft Docs – Nullable reference types – Explains how this feature helps prevent NullReferenceExceptions.
  10. Microsoft Learn – Dependency injection guidelines – Discusses designing for DI and avoiding global state, aligning with composition and single responsibility design.
