Hosting .NET 8 Applications in Azure Kubernetes Service (AKS)

This article is the third in the series "Use Kubernetes to host .NET 8 applications":
Hosting .NET 8 Applications in Azure Kubernetes Service (AKS) (this article)
The previous article explored scaling .NET applications in a local Kubernetes environment, including concepts like replicas, deployments, and load balancers.
This article explains how to transition your application from a local Kubernetes cluster to Azure Kubernetes Service (AKS).
Setting Up Azure Kubernetes Service (AKS)
Create an AKS cluster
Azure provides multiple ways to create an AKS cluster. You can use the Azure CLI, Azure Portal, or Azure PowerShell. This article uses Azure CLI for simplicity. Alternatively, you can follow the Azure Portal tutorial.
First, ensure you have the Azure CLI installed or use the Azure Cloud Shell. If using Azure CLI locally, log in with:
az login
For streamlined setup, initialize variables for resource names and locations:
$randomId = (New-Guid).ToString().Substring(0,8)
$location = "northeurope"
$resourceGroup = "hello-kube$randomId"
$aksName = "hktest$randomId"
$storageName = "hellokubestorage$randomId"
$acrName = "hellokubeacr$randomId"
The $randomId variable ensures unique resource names.
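The snippets in this article use PowerShell syntax. If you work from bash instead, the same variable setup can be sketched like this (the random-suffix logic is an assumption; any scheme that yields a unique 8-character suffix works just as well):

```shell
# Bash equivalent of the PowerShell setup above (a sketch, not the only way).
randomId=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n' | cut -c1-8)
location="northeurope"
resourceGroup="hello-kube$randomId"
aksName="hktest$randomId"
storageName="hellokubestorage$randomId"
acrName="hellokubeacr$randomId"
echo "Resource group: $resourceGroup"
```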
Next, create a resource group to organize the Azure resources:
az group create --location $location --name $resourceGroup
Create the AKS cluster with the following command:
az aks create --resource-group $resourceGroup `
--name $aksName `
--node-count 1 `
--enable-app-routing `
--generate-ssh-keys
Here:
- --node-count 1 creates a single-node cluster.
- --enable-app-routing simplifies application DNS setup.
- --generate-ssh-keys creates SSH keys for secure node access.
Connect kubectl to AKS cluster
To interact with the AKS cluster, use kubectl. If using Azure Cloud Shell, kubectl is pre-installed. For local installations, install it with:
az aks install-cli
To start using kubectl, download the credentials that were generated when the cluster was provisioned:
az aks get-credentials --resource-group $resourceGroup --name $aksName
Verify the connection by checking the nodes:
kubectl get nodes
You should see the cluster's nodes listed with a Ready status, similar to this:
NAME STATUS ROLES AGE VERSION
aks-nodepool1-40348856-vmss000000 Ready <none> 4d2h v1.29.9
Configuring Persistent Storage in AKS
In the previous article, we used hostPath for storage. However, in AKS, storage needs to be available across all nodes. Azure offers several storage options, including Azure Files, Azure Disks, and Azure Blob Storage.
You can find the comparison table below. More information regarding each service can be found at this link.
| Storage | Best for | Access Mode | Performance | Use Case |
| --- | --- | --- | --- | --- |
| Azure Disks | Block storage for VMs and Kubernetes Pods | ReadWriteOnce | High IOPS, low latency (Premium/Ultra SSD) | Databases, high-performance VMs, Kubernetes |
| Azure Files | Shared storage, accessible by multiple nodes | ReadWriteMany | Standard HDD or Premium SSD | Shared directories, Kubernetes multi-node |
| Azure NetApp Files | High-performance enterprise file storage | ReadWriteMany | High IOPS, scalable, low latency (NFS/SMB) | HPC, large databases, SAP |
| Azure Blob Storage | Scalable, cloud-native object storage | API-based access | Hot, Cool, Archive tiers based on access frequency | Backups, media storage, large datasets |
From the table, Azure Files and Azure Blob Storage stand out as suitable options for our needs. However, supporting Azure Blob Storage requires application-level changes, while Azure Files is already compatible with our requirements. It provides a fully managed file-share service that can be accessed simultaneously by multiple nodes.
Here’s the updated storage.yml configuration file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hello-pv
  labels:
    type: azure-file
spec:
  storageClassName: azurefile
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName: hello-kube-share
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-pvc
spec:
  storageClassName: azurefile
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
In this configuration:
- storageClassName: azurefile provisions persistent storage using Azure Files, which is preconfigured in AKS.
- The azureFile section specifies the Azure File Share configuration, including the share name (shareName: hello-kube-share) and the Kubernetes secret (secretName: azure-secret) used for authentication.
This setup enables high scalability and multi-node support by leveraging Azure's distributed file system.
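As an aside, the static PersistentVolume above requires pre-creating the file share by hand. AKS also ships built-in CSI storage classes (such as azurefile-csi) that can provision the share dynamically; here is a minimal sketch, assuming the Azure Files CSI driver is enabled on your cluster (it is by default on recent AKS versions):

```yaml
# Sketch: dynamic provisioning with the built-in azurefile-csi class.
# The PVC alone triggers creation of the underlying file share;
# no PersistentVolume object or pre-created share is needed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-pvc-dynamic
spec:
  storageClassName: azurefile-csi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

This article sticks with the static PV/PVC pair so the share name and secret stay explicit, but dynamic provisioning reduces manual steps in larger setups.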
Creating the Azure Storage Account
Before applying the storage configuration, we need to create an Azure Storage Account to house the file share. Use the following Azure CLI command:
az storage account create `
--name $storageName `
--resource-group $resourceGroup `
--location $location `
--sku Standard_LRS `
--kind StorageV2
Next, create the file share in the storage account to store configuration files. Begin by retrieving the storage account keys:
az storage account keys list `
--resource-group $resourceGroup `
--account-name $storageName `
--query "[0].value" `
--output tsv
Save the output, as you'll need it for the next steps. Then create the file share, replacing <your-storage-account-key> with the saved key:
az storage share create `
--name hello-kube-share `
--account-name $storageName `
--account-key <your-storage-account-key>
To store the Azure Storage Account key securely, create a Kubernetes secret. Replace <your-storage-account-key> with the saved key and run:
kubectl create secret generic azure-secret `
--from-literal=azurestorageaccountname=$storageName `
--from-literal=azurestorageaccountkey=<your-storage-account-key>
This command creates a secret named azure-secret, containing:
- The storage account name (azurestorageaccountname).
- The storage account key (azurestorageaccountkey).
Finally, apply the storage configuration:
kubectl apply -f storage.yml
To verify successful creation, run:
kubectl get pv
kubectl get pvc
Azure Container Registry
Before modifying our application configurations, we need a repository to store our container images. While DockerHub is a popular choice, Azure Container Registry (ACR) offers distinct advantages:
Seamless integration with Azure Kubernetes Service (AKS), enabling straightforward authentication and access via Azure Active Directory (AAD) and managed identities.
Optimized for Azure environments, reducing latency when pulling images by hosting the registry in the same region as your AKS cluster.
Enterprise-grade security features, including private repositories, vulnerability scanning, Azure Private Link, and role-based access control (RBAC) via AAD.
Advanced capabilities tailored for enterprise use, such as geo-replication, image signing, and full integration with the Azure ecosystem.
Let’s create an ACR instance named hellokubeacr{random} in the same resource group as our AKS cluster. In a production environment, consider placing the ACR in a separate resource group to enable reuse across multiple projects. This approach depends on your specific requirements.
az acr create `
--resource-group $resourceGroup `
--name $acrName `
--sku Basic `
--admin-enabled true
In this article, we will authenticate with ACR using a login and password, similar to DockerHub. However, multiple authentication methods are available, so I recommend reviewing the documentation to choose the best option for your needs.
Log in to Azure Container Registry using the following command:
az acr login --name $acrName
If successful, you will see the message Login Succeeded.
To allow AKS to pull images from ACR, we need to create a Kubernetes secret. First, retrieve the ACR credentials:
az acr credential show --name $acrName --resource-group $resourceGroup
The output will include the username and passwords for your Azure Container Registry. Save this information for use in the next step.
Create the Kubernetes secret for image pulling:
kubectl create secret docker-registry acr-secret `
--docker-server="$acrName.azurecr.io" `
--docker-username=<your-acr-username> `
--docker-password=<your-acr-password> `
--docker-email=<your-email>
At this stage, we have everything we need to push our application images to the cloud.
Preparing Container Images
Deploying the background process
Since we already built the application images in the previous article, we will reuse them here. To do this, tag the images using the format {container-registry}/{image-name}:{version}.
- container-registry: links the image to the corresponding Azure Container Registry, allowing Kubernetes to locate it during deployment.
- image-name:version: identifies the application and its version, aiding in version management within the registry.
For instance, tagging the hello-kube image version 1.2 for the hellokubeacr{random} registry looks like this:
docker tag hello-kube:1.2 "$acrName.azurecr.io/hello-kube:1.2"
Push the image to ACR:
docker push "$acrName.azurecr.io/hello-kube:1.2"
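The tag-then-push pattern above can be wrapped in a small helper so the registry suffix is never mistyped (the function name and registry value below are illustrative, not part of the project):

```shell
# Sketch: build the fully qualified ACR tag {registry}.azurecr.io/{image}:{version}.
acr_tag() {
  local registry="$1" image="$2" version="$3"
  echo "${registry}.azurecr.io/${image}:${version}"
}

# Usage with an illustrative registry name:
tag=$(acr_tag hellokubeacr12345678 hello-kube 1.2)
echo "$tag"   # → hellokubeacr12345678.azurecr.io/hello-kube:1.2
# Then: docker tag hello-kube:1.2 "$tag" && docker push "$tag"
```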
Now that the image is stored in ACR and the Kubernetes secret is configured, update your pod’s configuration to pull the image from ACR.
Here is your updated pod configuration file:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: hello-pvc
  containers:
    - name: hello-kube
      image: hellokubeacr{random}.azurecr.io/hello-kube:1.2 # Replace {random} with the real value
      volumeMounts:
        - name: data
          mountPath: /configuration
  imagePullSecrets:
    - name: acr-secret
The key differences are:
- image uses the fully qualified ACR path (hellokubeacr{random}.azurecr.io/hello-kube:1.2) instead of just the image name.
- imagePullSecrets refers to the acr-secret, enabling Kubernetes to authenticate with ACR.
Apply the configuration:
kubectl apply -f app-pod.yml
Deploying the API
Deploying the API to AKS follows a similar process. Start by tagging the Docker image and pushing it to ACR.
# Tag your API image for ACR
docker tag hello-kube-api:1.0 "$acrName.azurecr.io/hello-kube-api:1.0"
# Push the image to ACR
docker push "$acrName.azurecr.io/hello-kube-api:1.0"
Now that the image is available in ACR, update the deployment YAML file to point to the ACR image and configure the secret for image pulling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kube-api-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-kube-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-kube-api
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: hello-pvc
      containers:
        - name: hello-kube-api
          image: hellokubeacr{random}.azurecr.io/hello-kube-api:1.0 # Replace {random} with the real value
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /configuration
      imagePullSecrets:
        - name: acr-secret
Apply the deployment:
kubectl apply -f api-deploy.yml
Setting Up a Load Balancer
To make the API accessible from outside the Kubernetes cluster, we need to expose it via a LoadBalancer. By default, Kubernetes services are internal, meaning they are only accessible within the cluster network. A LoadBalancer allows us to create an external IP address that routes traffic to the appropriate pods inside the cluster. This is crucial for production environments where users or external systems need to interact with the application.
In Azure, the LoadBalancer service type automatically provisions an external IP address through Azure's native load-balancing infrastructure. This simplifies configuration and ensures that your API is publicly reachable with minimal effort.
Here’s the configuration for the LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: hello-kube-lb-svc
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: hello-kube-api
Apply the service:
kubectl apply -f api-lb.yml
Testing the Deployment
With everything in place, retrieve the external IP address of the LoadBalancer service:
kubectl get svc
Look for the hello-kube-lb-svc entry, which displays the external IP address:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kube-lb-svc LoadBalancer 10.0.126.94 72.145.45.65 8080:32551/TCP 61s
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 8m43s
Access the API's health endpoint /api/health to confirm the deployment succeeded. The health endpoint returns 200 OK along with the name of the pod that handled the request. A request to the /api/state endpoint returns a number that increases with each call.
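Since /api/health already reports application health, it can also back a readiness probe so Kubernetes only routes traffic to pods that respond. A sketch of the addition to the container spec in api-deploy.yml (the timing values are assumptions to tune for your app):

```yaml
# Sketch: add under the hello-kube-api container in api-deploy.yml.
readinessProbe:
  httpGet:
    path: /api/health
    port: 8080
  initialDelaySeconds: 5   # assumed startup allowance; tune for your app
  periodSeconds: 10        # probe interval
```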

Summary
In this article, we transitioned a .NET 8 application from a local Kubernetes environment to Azure Kubernetes Service (AKS). We set up an AKS cluster, configured Azure Files for persistent storage, integrated Azure Container Registry (ACR), and adjusted configurations for cloud deployment. Finally, we deployed and tested the application, showcasing the power and scalability of Azure for hosting .NET applications.
Links
Deploy an Azure Kubernetes Service (AKS) cluster using Azure portal | Microsoft Learn
How to install the Azure CLI | Microsoft Learn
What is Azure Cloud Shell? | Microsoft Learn
az aks create | Microsoft Learn
Concepts - Storage in Azure Kubernetes Services (AKS) - Azure Kubernetes Service | Microsoft Learn
Image credits: Container Ship on Sea by Marlin Clark



