How to use Azure Container Registry in Kubernetes
backend
Written by Mehrdad Zandi
What is Azure Container Registry (ACR)
Azure Container Registry (ACR) is a private registry service for building, storing, and managing container images and related artifacts, similar to Docker Hub. ACR is used especially by enterprises that don’t want to host their containers in a public registry.
ACR comes in three pricing tiers: Basic, Standard, and Premium. The main difference between them is the amount of included storage and the number of webhooks you can use. Additionally, the Premium tier supports geo-replication, which means your images remain available even if a data center goes down.
In this post, I will describe how to create an ACR, how to upload images to it, and how to configure your microservice and your Azure Kubernetes Service cluster to pull those images from ACR and run them in Kubernetes.
Prerequisites
- A GitHub account. Create a free GitHub account, if you don’t already have one.
- An Azure DevOps organization and a project. Create a new organization and/or a new project, if you don’t already have one.
- An Azure account. Sign up for a free Azure account, if you don’t already have one.
Create Azure Container Registry in the Azure Portal
I have described how to create an ACR and how to push images to it in my post Azure Container Registry (ACR). You can follow that post to create your registry.
Here I have created an ACR named mehzanContainerRegistry in the resource group mehzanRSG, in the region Sweden Central.
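If you prefer the Azure CLI over the portal, the equivalent commands would look roughly like this (the Basic SKU is an assumption; pick the tier you need):
az group create --name mehzanRSG --location swedencentral
az acr create --resource-group mehzanRSG --name mehzancontainerregistry --sku Basic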
Pushing an Image to ACR
You have two alternatives for pushing an image to ACR:
- Alternative 1: importing the image from Docker Hub.
- Alternative 2: uploading it from a CI/CD pipeline.
I will describe Alternative 1 below; Alternative 2 is covered in a separate post linked at the end.
Alternative 1: Import an Image from Docker Hub into ACR
To import the image, use the following Azure CLI commands (run az login first, then az acr import):
az login
az acr import --name mehzancontainerregistry --source docker.io/mehzan07/productmicroservice:latest --image productmicroservice:latest
The first command logs you into your Azure subscription. The second takes the name of your ACR (the name must be lowercase), the source image from Docker Hub, and the name of the image to be created in ACR.
Here I have taken the productmicroservice image from my Docker Hub account, docker.io/mehzan07. (The image must already exist in your Docker Hub; otherwise, push it there from your local machine first.)
If the import succeeds, navigate to your ACR registry in the Azure portal and select Repositories; you should see productmicroservice (as shown in the following figure). If you click it, the tag latest is shown.
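You can also verify the import from the CLI; listing the repository tags (using the registry name from above) should show latest:
az acr repository show-tags --name mehzancontainerregistry --repository productmicroservice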

Create Azure Kubernetes Service (AKS)
You also have to create an Azure Kubernetes Service (AKS) cluster. To create one, see my post: Azure Kubernetes Service (AKS).
I have created an AKS cluster named microservice-aks in the resource group mehzanRSG, in the region Sweden Central (the same resource group and region as the ACR created earlier in this post).
Note: I haven’t created any Services or Ingress yet; I will create them later on.
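If you prefer the CLI for this step as well, a minimal sketch would be the following (the node count and SSH key options are assumptions):
az aks create --resource-group mehzanRSG --name microservice-aks --node-count 1 --generate-ssh-keys
Afterwards, download the cluster credentials so that kubectl can talk to the new cluster; the kubectl commands later in this post rely on this:
az aks get-credentials --resource-group mehzanRSG --name microservice-aks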
Pulling Images from Azure Container Registry into Kubernetes (AKS)
Out of the box, Kubernetes is not authorized to pull the image, because ACR is a private container registry. To pull the image from the cluster, we have to give Kubernetes permission to pull images from ACR.
Allowing Kubernetes to pull ACR Images
To allow Kubernetes to pull images from ACR, you first have to create a service principal and give it the acrpull role.
Service principals define who can access the application and which resources the application can access. A service principal is created in each tenant where the application is used and references the globally unique application object. The tenant secures the service principal’s sign-in and access to resources. See the Microsoft documentation for more about service principals.
Use the following bash script to create the service principal and assign it the acrpull role.
1- Run the following script in the Azure CLI (for example, in Cloud Shell started from the portal):
ACR_NAME=mehzancontainerregistry
SERVICE_PRINCIPAL_NAME=acr-service-principal
# Look up the registry's full resource ID.
ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
# Create the service principal with the acrpull role scoped to the registry and capture its password.
PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_NAME --scopes $ACR_REGISTRY_ID --role acrpull --query "password" --output tsv)
# Read back the service principal's application (client) ID.
USER_NAME=$(az ad sp list --display-name $SERVICE_PRINCIPAL_NAME --query "[].appId" --output tsv)
echo "Service principal ID: $USER_NAME"
echo "Service principal password: $PASSWORD"
Here mehzancontainerregistry is the ACR name we created at the beginning of this post.
Note: If copying and running this script directly does not work, paste it into Notepad first and copy it from there into the Azure CLI.
If it succeeds, the output will look like the following:
Service principal ID: f8922aa7-81df-48a5-a8c6-5b158edb6072
Service principal password: oDm8Q~eVBH-15mE25t3EIqyTt0pc87UWmhVURaIM
Save the service principal ID and password; you will use them in the next step.
2- Next, create an image pull secret with the following command in the command line:
kubectl create secret docker-registry acr-secret --namespace default --docker-server=mehzancontainerregistry.azurecr.io --docker-username=f8922aa7-81df-48a5-a8c6-5b158edb6072 --docker-password=oDm8Q~eVBH-15mE25t3EIqyTt0pc87UWmhVURaIM
Here the docker-server value (mehzancontainerregistry.azurecr.io) is the ACR login server, which was created together with the registry at the beginning. The namespace is default, docker-username is the service principal ID, and docker-password is the service principal password from step 1.
If it succeeds, the output will be: “secret/acr-secret created”.
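To double-check that the secret exists in the cluster, you can list it with kubectl:
kubectl get secret acr-secret --namespace default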
The namespace value can also be found in the K8s dashboard, as shown in the following figure:

3- Use the Image Pull Secret in your Microservice (productmicroservice)
After the image pull secret has been created, you have to tell your microservice to use it. I have set it in the values.yaml file (under charts/productmicroservice), with the following code:
imagePullSecrets:
- name: acr-secret
I had built the image and pushed it to my Docker Hub before importing it into ACR. The source code can be found on my GitHub.
If you used a different name for your secret in the kubectl create secret docker-registry command, use that name instead of acr-secret.
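If you deploy the microservice with Helm, this value is picked up when the chart is rendered (assuming the chart template references imagePullSecrets, as the default scaffolding does). A typical command, with the chart path taken from the charts/productmicroservice folder mentioned above, would be:
helm upgrade --install productmicroservice ./charts/productmicroservice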
4- Copy the following YAML to somewhere on your local machine and give it a file name (for example acr-secret.yml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productmicroservice
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productmicroservice
  template:
    metadata:
      labels:
        app: productmicroservice
    spec:
      containers:
        - name: mehzancontainerregistry
          image: mehzancontainerregistry.azurecr.io/productmicroservice:latest
          imagePullPolicy: IfNotPresent
      imagePullSecrets:
        - name: acr-secret
Then apply it with the following kubectl command:
kubectl apply -f path\filename.yml
I have saved the file as acr-secret.yml in the path C:\Utvecklingprogram\Kubernetes\ and applied it with the following command from my local machine:
kubectl apply -f C:\Utvecklingprogram\Kubernetes\acr-secret.yml
If it succeeds, the output will be: deployment.apps/productmicroservice created
Here name: productmicroservice under metadata is the Deployment name in Kubernetes (its pod will be visible in the K8s dashboard), and the container name mehzancontainerregistry matches the container registry we created in Azure (the name must be lowercase). The image mehzancontainerregistry.azurecr.io/productmicroservice:latest is the image in the Azure Container Registry that we imported from my Docker Hub.
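If you prefer to check from the command line before opening the dashboard, the following kubectl commands wait for the rollout to finish and list the pods created by the Deployment:
kubectl rollout status deployment/productmicroservice --namespace default
kubectl get pods --namespace default -l app=productmicroservice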
Testing the Image Pull from Kubernetes (AKS)
Connect to the K8s dashboard by starting Octant on your local machine. If you have not installed Octant, see my post: azure-kubernetes-service-aks.
Start Octant from your local machine (Start menu) and click Workloads: Pods; the following is displayed:

Look at Config and Storage: Secrets:

As you can see, the acr-secret has been created in your K8s cluster.
Look at the Events:

If we check Services in the K8s dashboard, this section is empty, because we have only created the cluster (microservice-aks) without any Service or Ingress. Now we want a Service so we can test the application (productmicroservice). We can create the Service and Ingress as described in the section “Deploy the first Application” of my post Azure Kubernetes Service (AKS).
Now go to the K8s dashboard: Discovery and Load Balancing: Services. You can see that productmicroservice has been created, as in the following image:
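You can also read the external IP from the command line, assuming the Service is named productmicroservice as shown in the dashboard:
kubectl get service productmicroservice --namespace default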

Open a browser with the external IP, 4.225.208.21; the Swagger UI for ProductMicroservice is displayed as follows:

That is all. We have created an ACR, imported an image from Docker Hub, created and configured an AKS cluster, authorized it to pull from ACR, and tested everything with the K8s dashboard.
Clean up all resources
When you are finished, delete all created resources in your Azure account. AKS creates three additional resource groups; make sure to delete them too.
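For example, a resource group can be deleted from the Azure CLI (repeat for each AKS-generated group; --no-wait returns without blocking):
az group delete --name mehzanRSG --yes --no-wait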
Conclusion
In this post I have created an Azure Container Registry (ACR) in the Azure portal and imported an image from Docker Hub into it. Then I created an Azure Kubernetes Service (AKS) cluster and authorized it to pull the image from ACR. Finally, I tested everything in the K8s dashboard (Octant), which shows the Swagger UI served by my application (ProductMicroservice).
All the source code is on my GitHub.
Read about Alternative 2, uploading an image to ACR from Azure DevOps Pipelines, here.
About the author
Mehrdad is a consultant working as a system developer and architect in the Microsoft stack: the .NET platform, C#, .NET Core, microservices, Docker containers, Azure, Azure Kubernetes Service, DevOps, CI/CD, SQL Server, APIs, websites, and more.
On the frontend: HTML, JavaScript, CSS, jQuery, React, and more.
In addition, he builds websites with WordPress.

Set up a Terraform backend on S3 with AWS CloudFormation and CDK
backend
Written by Renato Golia
Terraform is one of the most popular Infrastructure as Code (IaC) tools. Like other IaC tools, it lets you define a descriptive model of your cloud infrastructure and store it in one or more text files. This model describes the desired state of your infrastructure and can be used to reliably deploy, version and update your system.
It uses persisted state data to keep track of the resources it manages. Most production-grade configurations store this state data remotely, allowing multiple people to access the state and work together. Remote state storage also increases security because it avoids relying on the computer of the person working on the cloud infrastructure.
Since Terraform is cloud-agnostic, it supports storing the state data in many different ways. Terraform uses the term backend for the system that stores the state data of the cloud resources, and it supports many providers out of the box. Among the many options, one can choose to leverage AWS to store the state data.
The AWS backend requires an S3 bucket and, optionally, a DynamoDB table that enables state locking to avoid collisions between multiple actors.
We can use the AWS console to create these resources. Once done, we can instruct Terraform to use them by defining a backend element.
terraform {
  backend "s3" {
    bucket         = "terraform-state"
    region         = "eu-north-1"
    key            = "path/to/terraform.tfstate"
    dynamodb_table = "terraform-state-lock"
  }
}
Now we can use the CLI tool to let Terraform initialize our backend.
$ terraform init
This will create a file in our bucket containing the state of the system described by our Terraform configuration. Now we can add resources and reliably deploy them to the cloud.
As usual, I’m skipping the authentication and authorization bits needed to deal with AWS.
So, everything works. We can create new applications and let Terraform take care of creating and configuring the resources and, most importantly, persist the state file in the shared bucket.
But the more I got used to Terraform, or any other IaC tool, the more wary I became of that S3 bucket and DynamoDB table created via the console.

The Problem
Can we use Infrastructure as Code to define the infrastructure needed by Terraform to operate?
The short answer is yes. We can definitely use Terraform to define and deploy an S3 bucket and a DynamoDB table, as shown in this post by The Lazy Engineer.
This would define an infrastructure of a higher order. The problem is that we don’t have a backend for this application, so we would be tracking its state on our own computers.
Staggered deployment
Looking for a solution, I found this blog post by Shi Han.
In his post, the author suggests a staggered deployment approach: first deploy the S3 bucket, then configure the backend to use it, and finally reconfigure the application so that the bucket stores the state of the bucket itself.
The name they give to the paragraph, The chicken-and-egg problem, is definitely fitting.
Even though it works correctly, I’m not really satisfied with this solution.
Shi Han’s solution is based on a trick that contradicts one of the cornerstones of Infrastructure as Code: your code files should be a valid representation of your system at any given time.
CloudFormation and CDK
How do you break a chicken-and-egg problem? You change the context. If Terraform can’t be used to set up the infrastructure it needs, we can look at other tools. At first I was looking at other backend providers to be used for our higher-order architecture but none of the alternatives caught my eye.
I eventually decided to leverage CloudFormation and its CDK (Cloud Development Kit).
While I am not enthusiastic about using two different technologies (CloudFormation and Terraform) for the same job (i.e. describing my cloud infrastructure), I am happy enough because:
- CloudFormation is available to all AWS accounts, with no extra setup
- The CDK makes it easy enough to work with CloudFormation by hiding all its quirks
- I consider it acceptable to use different technologies for two different levels of abstraction
Careful readers might wonder whether we have really solved the chicken-and-egg problem. The answer is yes, because CloudFormation takes care of persisting the state of the applications it manages (stacks, in CloudFormation’s lingo) in resources it already owns.
So, let’s see how we can leverage the CDK to define and deploy the infrastructure needed by Terraform’s backend. Specifically, I’ll be writing a CDK application using the C# template.
Preparation
Let’s start with the runtimes required for working with the CDK; I will assume they have already been installed and configured. Once that is done, install the CDK CLI via npm and validate that it is correctly installed and configured.
$ npm install -g aws-cdk
$ cdk --version
2.51.1 (build 3d30cdb)
Finally, before we can use the CDK to deploy CloudFormation stacks, some supporting resources must be deployed to the receiving AWS account. This process is called bootstrapping.
# Bootstrap CDK for your account using `cdk bootstrap aws://ACCOUNT-NUMBER/REGION`
$ cdk bootstrap aws://123456789012/eu-north-1
You can read more about bootstrapping your account here.
Now everything is ready for us to create our CDK app.
Creating the CDK app
Let’s create our CDK app.
We start by creating a folder for the app, and then we use the CDK CLI to create an app based on the C# template.
$ mkdir TerraformBackend
$ cd TerraformBackend
$ cdk init app --language csharp
Once the template is generated, we have a .NET solution that can be customized to include the resources we need.
Customizing the stack
The solution contains a C# project with two main files:
- Program.cs contains the code needed to initialize the CDK app.
- TerraformBackendStack.cs contains the class that we will use to add our resources.
Let’s start by adding the resources to the TerraformBackendStack. To do so, we simply augment the internal constructor generated by the template.
// Requires the Amazon.CDK, Amazon.CDK.AWS.S3, Amazon.CDK.AWS.DynamoDB and Constructs namespaces.
internal TerraformBackendStack(Construct scope, string id, IStackProps props = null)
    : base(scope, id, props)
{
    // Versioned, encrypted, private bucket that will hold the Terraform state files.
    var bucket = new Bucket(this, "terraform-state", new BucketProps
    {
        Versioned = true,
        Encryption = BucketEncryption.S3_MANAGED,
        BlockPublicAccess = BlockPublicAccess.BLOCK_ALL
    });

    // DynamoDB table used by Terraform for state locking.
    var table = new Table(this, "terraform-state-lock", new TableProps
    {
        TableName = "terraform-state-lock",
        BillingMode = BillingMode.PROVISIONED,
        ReadCapacity = 10,
        WriteCapacity = 10,
        PartitionKey = new Attribute { Name = "LockID", Type = AttributeType.STRING }
    });

    // Outputs so the bucket and table names are easy to fetch after deployment.
    new CfnOutput(this, "TerraformBucket", new CfnOutputProps
    {
        ExportName = "terraform-state-bucket-name",
        Value = bucket.BucketName
    });

    new CfnOutput(this, "TerraformTable", new CfnOutputProps
    {
        ExportName = "terraform-state-lock-table-name",
        Value = table.TableName
    });
}
In the snippet above, I add an S3 bucket, whose name will be generated automatically by CloudFormation, and a DynamoDB table. Finally, I add two outputs so that I can easily fetch the names of the bucket and the table.
Next, I change the Program so that the stack is protected from accidental termination, whether initiated by other actors or by a misclick in the console. Finally, I make sure that all resources are tagged following my company’s policy.
var app = new App();
var stack = new TerraformBackendStack(app, "TerraformBackend", new StackProps
{
TerminationProtection = true
});
Tags.Of(stack).Add("Project", "TerraformBackend");
Tags.Of(stack).Add("Environment", "Shared");
app.Synth();
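Before deploying, you can let the CDK synthesize the stack and review the generated CloudFormation template:
$ cdk synth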
With these changes, we’re ready to deploy our stack.
Deploying the stack
The CDK makes it very easy to deploy the stack.
From the root of our CDK project, we simply need to run cdk deploy to initiate the creation or update of the stack on CloudFormation.
When everything is complete, the CDK CLI will print the outputs that we defined in the TerraformBackendStack:
$ cdk deploy
...
Outputs:
TerraformBackend.TerraformBucket = some-very-random-string
TerraformBackend.TerraformTable = terraform-state-lock
Now we can use the two output values to correctly initialize our Terraform applications.
terraform {
  backend "s3" {
    bucket         = "some-very-random-string"
    region         = "eu-north-1"
    key            = "path/to/terraform.tfstate"
    dynamodb_table = "terraform-state-lock"
  }
}
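After pointing the backend block at the new bucket, re-run the initialization; if an existing local state should be moved into the bucket, Terraform can migrate it:
$ terraform init -migrate-state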
Recap
Infrastructure as Code is becoming more and more of a mindset, and we should strive to follow it consistently. Sometimes the tooling we use has limitations that could stop us.
Terraform’s support for its own backend infrastructure is one of many examples. In this post, we explored how we can use AWS CloudFormation and its CDK to circumvent the issue and use IaC to create the infrastructure needed to work with IaC at non-trivial levels.
About the author
Renato is an expert in cloud-based distributed software architecture design and construction, with a focus on .NET and AWS. He has extensive experience in designing and building efficient CI/CD pipelines, as well as defining infrastructure-as-code blueprints.
Renato also has significant management experience, having previously served as CTO and leading a team. In addition to his professional work, Renato is dedicated to contributing to the open source community and developing .NET libraries to enhance developer productivity.
You can contact him on his personal blog, GitHub, or LinkedIn.