Engineering Excellence: High-Performance, Cloud-Native APIs with ASP.NET Core 8 and Azure

In the modern era of software development, building an API is no longer just about exposing data over HTTP. A truly “cloud-native” API is an engineered system designed from the ground up to thrive in a distributed, elastic, and often unpredictable cloud environment. It must be more than functional; it must be exceptionally performant, resilient to failure, deeply observable, verifiably secure, and built to scale on demand.

This guide is an engineering deep dive for .NET developers aiming to build such APIs. We will leverage the cutting-edge features of ASP.NET Core 8 to construct a high-performance API that embodies the principles of cloud-native design. Our journey will cover performance optimization with Minimal APIs and high-speed data access patterns, building resilience with Polly, achieving total system observability with OpenTelemetry, and finally, securing and deploying our application to Azure using best practices like Azure Key Vault and containerization. This is not just a tutorial on writing endpoints; it’s a blueprint for engineering excellence in the cloud.

Section 1: Peak Performance with .NET 8 Minimal APIs

Performance is not an afterthought; it’s a core architectural concern. ASP.NET Core has always been fast, but .NET 8’s Minimal APIs provide the tools to build some of the highest-performance HTTP APIs in the world.

Why Minimal APIs are Fast

Minimal APIs offer a streamlined hosting model that bypasses the overhead of the traditional MVC (Model-View-Controller) pipeline. By removing layers of abstraction like controllers, action filters, and complex model binding, the request processing path is significantly shorter, resulting in lower latency and higher throughput. This makes them an ideal choice for performance-critical microservices and cloud-native applications.  
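
To make the comparison concrete, here is what an entire Minimal API application can look like; the Product record and the endpoint are purely illustrative:

C#

// Program.cs: a complete Minimal API application (illustrative)
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The handler maps straight onto the routing layer: no controllers, no action filters.
app.MapGet("/products/{id}", (int id) => new Product(id, $"Product {id}", 9.99m));

app.Run();

// Hypothetical model used by the example endpoint.
public record Product(int Id, string Name, decimal Price);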

Leveraging IResult and TypedResults

While you can return raw data directly from a Minimal API endpoint, the most powerful and performant approach is to use types that implement the IResult interface. In .NET 8, the TypedResults static class is the preferred way to create these responses.

Consider two ways to return a successful response with a Product object:

C#

// Using Results (less optimal)
app.MapGet("/products/{id}", (int id) => {
    var product = GetProduct(id);
    return product is not null ? Results.Ok(product) : Results.NotFound();
});

// Using TypedResults (preferred)
// Results<...>, Ok<T>, and NotFound live in Microsoft.AspNetCore.Http.HttpResults.
app.MapGet("/products/{id}", Results<Ok<Product>, NotFound> (int id) => {
    var product = GetProduct(id);
    return product is not null
        ? TypedResults.Ok(product)
        : TypedResults.NotFound();
})
// Optional: with the Results<...> return type, the 200/404 metadata is already inferred.
.Produces<Product>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound);

Using TypedResults offers two key advantages:

  1. Compile-Time Type Safety: It returns a strongly-typed result (e.g., Ok<Product>), eliminating the need for casting in unit tests and preventing runtime errors.
  2. Automatic OpenAPI Metadata: The framework can infer the response types and status codes directly from the method signature, automatically generating accurate OpenAPI (Swagger) documentation. This improves both performance (no reflection needed at runtime) and the developer experience.
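
To illustrate the first point, a handler that returns TypedResults can be unit tested without any casting. The following is a hedged xUnit-style sketch; ProductEndpoints, its in-memory store, and the Product record are assumptions introduced for the test:

C#

// using Microsoft.AspNetCore.Http;
// using Microsoft.AspNetCore.Http.HttpResults;
// using Xunit;

// Hypothetical handler extracted into a static method so it can be tested directly.
public static class ProductEndpoints
{
    private static readonly Dictionary<int, Product> Store = new()
    {
        [1] = new Product(1, "Widget", 9.99m)
    };

    public static Results<Ok<Product>, NotFound> GetProduct(int id) =>
        Store.TryGetValue(id, out var product)
            ? TypedResults.Ok(product)
            : TypedResults.NotFound();
}

public class ProductEndpointTests
{
    [Fact]
    public void GetProduct_ReturnsOk_WhenProductExists()
    {
        var result = ProductEndpoints.GetProduct(1);

        // No casting is needed: the result is strongly typed.
        var ok = Assert.IsType<Ok<Product>>(result.Result);
        Assert.Equal("Widget", ok.Value!.Name);
    }
}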

The Power of [AsParameters]

Minimal API endpoints can sometimes have long parameter lists, including route parameters, query strings, and injected services. The [AsParameters] attribute, introduced in .NET 7 and enhanced in .NET 8, allows you to group these parameters into a single class or struct, cleaning up your code without introducing performance penalties.

C#

// Before: Cluttered signature
app.MapGet("/products", (
    int? page, 
    int? pageSize, 
    string? sortBy, 
    IProductRepository repository) => {... });

// After: Clean signature with [AsParameters]
app.MapGet("/products", ([AsParameters] ProductQuery query, IProductRepository repository) => {... });

public class ProductQuery
{
    public int? Page { get; set; }
    public int? PageSize { get; set; }
    public string? SortBy { get; set; }
}

The framework efficiently maps the request data to the properties of the ProductQuery object, maintaining high performance while significantly improving code readability and maintainability.

Ahead-of-Time (AOT) Compilation

For the ultimate in start-up performance and a minimal memory footprint, which are critical for serverless and containerized environments, .NET 8 offers improved support for Native AOT. Minimal APIs are designed to be AOT-friendly, allowing you to compile your API directly to native code, eliminating the JIT (Just-In-Time) compiler and reducing the application’s size.
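
A minimal AOT-ready sketch, assuming <PublishAot>true</PublishAot> is enabled in the project file; the slim builder and the source-generated JSON context are the pieces that keep the app free of runtime reflection:

C#

// Program.cs (assumes <PublishAot>true</PublishAot> in the .csproj)
using System.Text.Json.Serialization;

var builder = WebApplication.CreateSlimBuilder(args);

// Native AOT cannot rely on reflection-based JSON serialization,
// so register a source-generated serializer context instead.
builder.Services.ConfigureHttpJsonOptions(options =>
{
    options.SerializerOptions.TypeInfoResolverChain.Insert(0, AppJsonSerializerContext.Default);
});

var app = builder.Build();

app.MapGet("/products/{id}", (int id) => new Product(id, $"Product {id}", 9.99m));

app.Run();

public record Product(int Id, string Name, decimal Price);

// Source-generated JSON metadata for the types returned by the API.
[JsonSerializable(typeof(Product))]
internal partial class AppJsonSerializerContext : JsonSerializerContext
{
}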

Section 2: High-Speed Data Access: EF Core vs. Dapper

The performance of an API is often dictated by the speed of its data access layer. In the .NET world, the two leading choices are Entity Framework Core and Dapper.

  • Entity Framework Core (EF Core): A full-featured Object-Relational Mapper (ORM) from Microsoft. It excels at developer productivity, providing powerful features like LINQ for querying, change tracking for simplified updates, and a robust migration system.  
  • Dapper: A micro-ORM known for its raw speed. It is essentially a set of extension methods on IDbConnection that provides a lightweight way to execute raw SQL queries and map the results to C# objects.  

For performance-critical queries, especially read-heavy operations that are common in APIs, Dapper is consistently faster than EF Core. Its minimal abstraction layer means it gets very close to the performance of raw ADO.NET.  

However, the choice is not binary. The most effective strategy for a high-performance API is often a hybrid approach:

  • Use EF Core for Write Operations: For commands (Create, Update, Delete) and complex business transactions, the productivity benefits of EF Core’s change tracking and unit of work patterns are invaluable.
  • Use Dapper for Read Operations: For high-frequency queries that back your API’s GET endpoints, use Dapper with optimized, handwritten SQL to achieve the best possible performance.  

This pragmatic approach allows you to leverage the strengths of both tools, building an API that is both productive to develop and blazingly fast in production.
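
A hedged sketch of what such a hybrid repository can look like; ProductsDbContext, the Product shape, and the injected IDbConnection are assumptions to adapt to your own project:

C#

// using System.Data;
// using Dapper;
// using Microsoft.EntityFrameworkCore;

public class ProductRepository(ProductsDbContext dbContext, IDbConnection connection)
{
    // Read path: Dapper executes handwritten SQL for maximum throughput.
    public async Task<Product?> GetByIdAsync(int id) =>
        await connection.QueryFirstOrDefaultAsync<Product>(
            "SELECT Id, Name, Price FROM Products WHERE Id = @Id",
            new { Id = id });

    // Write path: EF Core change tracking handles the unit of work.
    public async Task AddAsync(Product product)
    {
        dbContext.Products.Add(product);
        await dbContext.SaveChangesAsync();
    }
}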

Section 3: Caching Strategies for Ultimate Scalability

Even with a highly optimized data access layer, repeatedly fetching the same data from a database is inefficient. Caching is a fundamental technique for improving API performance and scalability by storing frequently accessed data in a faster, temporary storage location.

In-Memory Caching

For single-instance applications, ASP.NET Core provides a simple in-memory cache via the IMemoryCache interface. It’s easy to set up and use.  

C#

// In Program.cs
builder.Services.AddMemoryCache();

// In an endpoint
app.MapGet("/products/{id}", async (int id, IMemoryCache cache, IProductRepository repo) =>
{
    if (cache.TryGetValue($"product-{id}", out Product? product))
    {
        return TypedResults.Ok(product);
    }

    product = await repo.GetByIdAsync(id);
    if (product is null) return TypedResults.NotFound();

    var cacheEntryOptions = new MemoryCacheEntryOptions()
       .SetSlidingExpiration(TimeSpan.FromMinutes(5));
    
    cache.Set($"product-{id}", product, cacheEntryOptions);
    
    return TypedResults.Ok(product);
});

The major limitation of in-memory caching is that it is local to each server instance. In a cloud-native environment where your API is scaled out to multiple instances, each instance would have its own, potentially inconsistent, cache.  

Distributed Caching with Redis

For scaled-out, cloud-native applications, a distributed cache is essential. A distributed cache is an external service shared by all instances of your API. Redis is the industry standard for high-performance distributed caching.

ASP.NET Core provides the IDistributedCache interface, with a Redis implementation available via a NuGet package.  

  1. Add the Package: dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis
  2. Configure in Program.cs: builder.Services.AddStackExchangeRedisCache(options => { options.Configuration = builder.Configuration.GetConnectionString("Redis"); });
  3. Use in an Endpoint: The usage is similar to IMemoryCache, but data must be serialized (typically to a byte array or JSON string) before being stored, as shown in the sketch below.
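
Here is a minimal sketch of step 3, reusing the Product endpoint from the in-memory example; the JSON round-trip via System.Text.Json is an illustrative choice:

C#

// using Microsoft.Extensions.Caching.Distributed;
// using System.Text.Json;

app.MapGet("/products/{id}", async Task<Results<Ok<Product>, NotFound>> (
    int id, IDistributedCache cache, IProductRepository repo) =>
{
    var cacheKey = $"product-{id}";

    // IDistributedCache stores strings or byte arrays, so objects are serialized to JSON.
    var cachedJson = await cache.GetStringAsync(cacheKey);
    if (cachedJson is not null)
    {
        return TypedResults.Ok(JsonSerializer.Deserialize<Product>(cachedJson)!);
    }

    var product = await repo.GetByIdAsync(id);
    if (product is null) return TypedResults.NotFound();

    await cache.SetStringAsync(
        cacheKey,
        JsonSerializer.Serialize(product),
        new DistributedCacheEntryOptions { SlidingExpiration = TimeSpan.FromMinutes(5) });

    return TypedResults.Ok(product);
});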

Using a distributed cache like Redis ensures that all instances of your API share a consistent cache, dramatically reducing database load and improving response times across your entire scaled-out application.

Section 4: Building a Resilient API with Polly

In a distributed cloud environment, failures are not an exception; they are a certainty. Networks are unreliable, and downstream services can fail or become slow. A cloud-native API must be designed to handle these transient failures gracefully. This is where resilience patterns come into play, and in .NET, the de facto library for implementing them is Polly.

The modern approach to using Polly for HTTP clients is to integrate it with IHttpClientFactory, which manages the lifecycle of HttpClient instances and allows for the centralized configuration of resilience policies. This approach reflects a fundamental shift in cloud-native thinking: resilience is not merely error handling; it is a core, non-negotiable architectural component that defines how your system behaves under stress.  

Implementing the Retry Pattern

The most common transient failure is a temporary network glitch. Instead of failing immediately, the application should retry the operation.

C#

// In Program.cs
builder.Services.AddHttpClient<IInventoryClient, InventoryClient>()
   .AddTransientHttpErrorPolicy(policyBuilder => 
        policyBuilder.WaitAndRetryAsync(3, retryAttempt => 
            TimeSpan.FromSeconds(Math.Pow(2, retryAttempt))));

This code configures a typed HttpClient for an InventoryClient. The AddTransientHttpErrorPolicy extension method adds a Polly policy that will automatically retry any request that results in a transient HTTP error (like a 5xx status code or an HttpRequestException). It will retry up to 3 times with an exponential backoff delay, which prevents overwhelming a struggling service with rapid-fire retries.

Implementing the Circuit Breaker Pattern

If a downstream service is experiencing a major outage, continuing to send requests to it is pointless. It wastes resources and can prolong the outage. The Circuit Breaker pattern solves this by “tripping” a circuit after a certain number of consecutive failures.  

C#

// In Program.cs
builder.Services.AddHttpClient<IInventoryClient, InventoryClient>()
   .AddTransientHttpErrorPolicy(policyBuilder => 
        policyBuilder.CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)));

This policy will “break” the circuit after 5 consecutive failures. For the next 30 seconds, any calls made through this client will fail immediately without hitting the network. After 30 seconds, the circuit moves to a “half-open” state, allowing one test request through. If it succeeds, the circuit closes; if it fails, it opens again.

Implementing the Fallback Pattern

When all retries fail or the circuit is open, you can provide a graceful fallback response instead of throwing an exception. This could be returning stale data from a cache or a default value.  

C#

// In Program.cs
// using Polly;
// using Polly.Extensions.Http;
// using System.Net;
var retryPolicy = HttpPolicyExtensions
   .HandleTransientHttpError()
   .WaitAndRetryAsync(3, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));

var fallbackPolicy = Policy<HttpResponseMessage>
   .Handle<Exception>()
   .OrResult(response => !response.IsSuccessStatusCode) // also fall back when retries end in an error response
   .FallbackAsync(new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent("{\"stock\": 0}") // Return default stock
    });

var policyWrap = Policy.WrapAsync(fallbackPolicy, retryPolicy);

builder.Services.AddHttpClient<IInventoryClient, InventoryClient>()
   .AddPolicyHandler(policyWrap);

Here, we wrap the retry policy with a fallback policy. If the retries are exhausted, whether they end in an exception or a final error response, the fallback policy executes, returning a default response and preventing the API from failing completely.

Section 5: The Gateway to Your Services: Implementing YARP

In a microservices architecture, exposing every service directly to the internet is a security risk and a management nightmare for client applications. An API Gateway is a server that acts as a single entry point for all client requests. It routes incoming requests to the appropriate backend service and can handle cross-cutting concerns like authentication, rate limiting, and load balancing.  

YARP (Yet Another Reverse Proxy) is a lightweight, high-performance, and highly customizable reverse proxy library from Microsoft, built on .NET. It’s the perfect tool for creating an API Gateway for your .NET microservices.

Step-by-Step Configuration

  1. Create the Gateway Project: dotnet new web -n ApiGateway
  2. Add the YARP Package: dotnet add package Yarp.ReverseProxy  
  3. Configure YARP in Program.cs:
    // ApiGateway/Program.cs
    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddReverseProxy()
        .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

    var app = builder.Build();

    app.MapReverseProxy();
    app.Run();


    This minimal setup loads the proxy configuration from appsettings.json and maps the reverse proxy middleware to handle incoming requests.  
  4. Define Routes and Clusters in appsettings.json:
    { "ReverseProxy": {
    "Routes": {
    "products-route": {
    "ClusterId": "products-cluster",
    "Match": {
    "Path": "/api/products/{**catch-all}"
    }
    }
    },
    "Clusters": {
    "products-cluster": {
    "Destinations": {
    "destination1": {
    "Address": "https://localhost:7001" // URL of your backend API
    }
    }
    }
    }
    }
    }

    This configuration defines a route that matches requests to /api/products/... and forwards them to the cluster named products-cluster, which contains the destination address of our high-performance API.  

Section 6: Total Observability: Logging, Metrics, and Tracing

In a distributed system, you can’t attach a debugger to step through a request in production. Observability, the ability to understand the internal state of your system from its external outputs, is the essential tool for debugging in the cloud. It rests on three pillars: logs, metrics, and traces. The power of modern observability comes not from these individual signals, but from their correlation. When a log message contains the same TraceId as a metric and a distributed trace, you can reconstruct the entire lifecycle of a single problematic request, effectively creating a “distributed debugger.”

OpenTelemetry (OTel) is the open-source, vendor-neutral standard for instrumenting, generating, and exporting this telemetry data.  

Pillar 1: Structured Logging with Serilog

Serilog is the de facto standard for structured logging in .NET. The key to making logs useful for observability is ensuring they are structured and enriched with context. We can use the Serilog.Sinks.OpenTelemetry package to send our structured logs to an OTel collector, which automatically correlates them with traces.
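
A hedged configuration sketch, assuming the Serilog.AspNetCore and Serilog.Sinks.OpenTelemetry packages and a locally running OTLP collector endpoint:

C#

// using Serilog;

builder.Host.UseSerilog((context, loggerConfiguration) => loggerConfiguration
    .ReadFrom.Configuration(context.Configuration)
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .WriteTo.OpenTelemetry(options =>
    {
        // Assumed collector address; point this at your OTel collector.
        options.Endpoint = "http://localhost:4317";
        options.ResourceAttributes = new Dictionary<string, object>
        {
            ["service.name"] = "HighPerformanceApi"
        };
    }));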

Pillar 2 & 3: Metrics and Traces with OpenTelemetry

.NET 8 has first-class support for OpenTelemetry. We can configure it in Program.cs to automatically instrument our application.

  1. Add OTel Packages:
    dotnet add package OpenTelemetry.Extensions.Hosting
    dotnet add package OpenTelemetry.Instrumentation.AspNetCore
    dotnet add package OpenTelemetry.Instrumentation.Http
    dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
    dotnet add package OpenTelemetry.Exporter.Prometheus.AspNetCore
  2. Configure OTel in Program.cs:
    using OpenTelemetry.Metrics;
    using OpenTelemetry.Resources;
    using OpenTelemetry.Trace;

    var builder = WebApplication.CreateBuilder(args);

    const string serviceName = "HighPerformanceApi";

    builder.Services.AddOpenTelemetry()
        .ConfigureResource(resource => resource.AddService(serviceName))
        .WithTracing(tracing => tracing
            .AddAspNetCoreInstrumentation()
            .AddHttpClientInstrumentation()
            .AddOtlpExporter())             // Exports to Jaeger/Zipkin via OTLP
        .WithMetrics(metrics => metrics
            .AddAspNetCoreInstrumentation()
            .AddHttpClientInstrumentation()
            .AddPrometheusExporter());      // Collects metrics for Prometheus

    var app = builder.Build();

    app.MapPrometheusScrapingEndpoint(); // Expose the /metrics endpoint

    //... rest of the app

This configuration achieves the following:

  • Tracing: It automatically creates traces for all incoming ASP.NET Core requests and outgoing HttpClient calls. Using AddOtlpExporter, these traces can be sent to an OTel collector and visualized in tools like Jaeger or Zipkin.  
  • Metrics: It collects a rich set of default metrics from ASP.NET Core (e.g., request count, duration). AddPrometheusExporter exposes these on a /metrics endpoint, which can be scraped by a Prometheus server for monitoring and alerting.  

Section 7: Securing and Deploying to Azure

The final step is to prepare our API for a secure production deployment on Azure.

Secure Configuration with Azure Key Vault

Storing secrets like connection strings and API keys directly in appsettings.json is a major security risk. Azure Key Vault provides a secure, centralized store for application secrets.

We can configure our ASP.NET Core application to read configuration directly from Key Vault at startup, using Managed Identity for passwordless authentication when running in Azure.  

  1. Add Packages:
    dotnet add package Azure.Extensions.AspNetCore.Configuration.Secrets
    dotnet add package Azure.Identity
  2. Configure in Program.cs:
    // In Program.cs, during host configuration
    builder.Configuration.AddAzureKeyVault(
        new Uri(builder.Configuration["KeyVault:Uri"]),
        new DefaultAzureCredential());

When deployed to Azure App Service with Managed Identity enabled, DefaultAzureCredential will automatically authenticate with Key Vault and load the secrets, making them available through the standard IConfiguration interface.
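
For example, a Key Vault secret named ConnectionStrings--Sql (the double dash maps to the configuration section separator) surfaces as ConnectionStrings:Sql and can be consumed like any other setting; the secret name and the EF Core registration below are illustrative:

C#

// The secret "ConnectionStrings--Sql" is exposed as "ConnectionStrings:Sql".
var sqlConnectionString = builder.Configuration.GetConnectionString("Sql");

// Assumes the Microsoft.EntityFrameworkCore.SqlServer package and a ProductsDbContext.
builder.Services.AddDbContext<ProductsDbContext>(options =>
    options.UseSqlServer(sqlConnectionString));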

Deployment to Azure

With our API containerized, deploying to Azure is highly flexible.

  • Azure App Service for Containers: A simple and powerful platform for running web apps in containers. You can deploy your API’s Docker image directly from a container registry like Azure Container Registry (ACR).
  • Azure Kubernetes Service (AKS): For more complex, large-scale microservice deployments, AKS provides a fully managed Kubernetes service. You can deploy your API, gateway, and other services using Kubernetes manifests or Helm charts.  

Conclusion

Building a high-performance, cloud-native API in .NET 8 is an exercise in modern software engineering. It requires a holistic approach that considers performance from the first line of code, bakes in resilience to handle the inherent instability of the cloud, and instruments the application for deep observability. By combining the raw speed of Minimal APIs, the flexibility of a hybrid data access strategy, the robustness of Polly’s resilience patterns, and the comprehensive insights from OpenTelemetry, developers can build APIs that are not just functional, but truly engineered for the demands of the modern cloud. These practices are the new standard for excellence in the .NET ecosystem.
