Overview of Terraform
- Terraform is a popular DevOps tool for Infrastructure as Code (IaC).
- The course aims to take you from beginner to proficient in Terraform through hands-on labs and simplified concepts.
Course Structure
Introduction to Infrastructure as Code (IaC)
- Understanding IaC and its importance in modern IT infrastructure.
- Overview of various IaC tools and their purposes.
Getting Started with Terraform
- Installation of Terraform and introduction to HashiCorp Configuration Language (HCL).
- First lab: Hands-on practice with HCL syntax.
Terraform Basics
- Key concepts: Providers, input/output variables, resource attributes, and dependencies.
- Understanding Terraform state and its significance, which is crucial for effective configuration management.
Terraform Commands
- Overview of essential Terraform commands and their functions.
- Differences between mutable and immutable infrastructure.
Lifecycle Rules and Resource Management
- Managing resource creation and updates with lifecycle rules.
- Understanding data sources and how to link resources.
Variables and Outputs
- Using input variables for better code reusability.
- Creating output variables to store and display resource attributes.
State Management
- Importance of state files in Terraform and best practices for collaboration.
- Working with remote state storage solutions.
Advanced Terraform Features
- Using meta-arguments like count and for_each for resource management.
- Specifying provider versions and understanding version constraints.
Conclusion
- The course wraps up with hands-on labs to reinforce learning and practical application of Terraform concepts.
- Encouragement to explore further learning paths in DevOps technologies.
Terraform is one of the most popular DevOps tools for Infrastructure as Code, and over the next hour or so we will take you from zero to hero through this comprehensive and hands-on course. In this course we will simplify complex concepts using illustrations and animations, and you will gain hands-on practice through our labs, which you can access for free. Yes, our labs open up right in your browser, so you don't have to pay extra for cloud accounts or set up your own infrastructure. In this course, Vijin will walk you through the fundamentals of Terraform and help you not only understand the basics but also practice and gain hands-on experience. If you're visiting our channel for the first time, don't forget to subscribe, as we upload new videos and courses all the time.
Hello and welcome to this course on Terraform for beginners. My name is Vijin Palazhi, and I will be your instructor for this course.

In this course we will get started with Terraform, but first we will take a look at Infrastructure as Code, or IaC, the different types of tools available for IaC, and their purpose in managing modern IT infrastructure. We will then see the role of Terraform in today's IT infrastructure. Then we will learn how to install Terraform. This is followed by the basics of HashiCorp Configuration Language. Next, we have our very first lab, where you'll get your hands dirty working with the HCL syntax. Next, we will understand the basics of Terraform, such as providers, input and output variables, resource attributes, and dependencies. After this, we will take a look at state in Terraform: what it is, why it is used, and the considerations to follow when working with state. We then dig deeper into the fundamentals, starting with the different commands provided by Terraform. This is followed by a lecture where we understand the difference between mutable and immutable infrastructure. Next, we'll take a look at lifecycle rules in Terraform, where we'll learn how to manage the ways in which resources are created. After this, we have lectures on basic topics such as data sources and meta-arguments such as count and for_each, and finally we understand version constraints in Terraform.
So let's get started.

Let's now get introduced to our labs. To access the labs, go to this link; the link is also available in the description below. If you're visiting KodeKloud for the first time, sign up for free, and once in, you will find the lab course under your list of courses. The course has multiple scenarios. I will let you know when to access the other labs as you progress through this course, so head over to KodeKloud to access the labs.
Let's start with how application delivery works in a traditional infrastructure model and how it evolved with the emergence of technologies such as cloud computing and Infrastructure as Code. Let's go back in time and look at how infrastructure was provisioned in the traditional IT model.

Let us consider an organization that wants to roll out a new application. The business comes up with the requirements for the application. The business analyst then gathers the needs from the business, analyzes them, and converts them into a set of high-level technical requirements. This is then passed on to a solutions architect, who designs the architecture to be followed for the deployment of this application. This would typically include infrastructure considerations such as the type, spec, and count of the servers that are needed: front-end web servers, back-end servers, databases, load balancers, etc. Following the traditional infrastructure model, these would have to be deployed in the organization's on-premises environment, which would mean making use of the assets in the data center. If additional hardware is needed, it would have to be ordered via the procurement team. This team would put in a new hardware request with the vendors. It can then take anywhere between a few days to weeks or even months for the hardware to be purchased and delivered to the data center. Once received at the data center, the field engineers would be in charge of the rack and stack of the equipment. The system administrators perform initial configurations, and the network administrators make the systems available on the network. The storage admins assign storage to the servers, and the backup admins configure backups. Finally, once the systems have been set up as per the standards, they can be handed over to the application teams to deploy their applications.

This deployment model, which is still quite commonly used today, has quite a few disadvantages. The turnaround time can range between weeks to months, and that's just to get the systems into a ready state to begin the application deployment. This includes the time it takes for the systems to be initially procured and then handed over between teams. Also, scaling the infrastructure up or down on demand cannot be achieved quickly. The overall cost to deploy and maintain this model is generally quite high. While some aspects of the infrastructure provisioning process can be automated, several steps, such as the rack and stack, cabling, and other deployment procedures, are manual and slow. With so many teams working on so many different tasks, the chances of human error are high, and this results in inconsistent environments. Another major disadvantage of this model is the underutilization of compute resources. The infrastructure sizing activity is generally carried out well in advance, and the servers are sized considering peak utilization. The inability to scale up or down easily means that most of these resources would not be used during off-peak hours.
In the past decade or so, organizations have been moving to virtualization and cloud platforms to take advantage of services provided by major cloud providers such as Amazon AWS, Microsoft Azure, Google Cloud Platform, etc. By moving to cloud, the time to spin up the infrastructure and the time to market for applications are significantly reduced. This is because with cloud you do not have to invest in or manage the actual hardware assets that you normally would in the case of a traditional infrastructure model. The data center, the hardware assets, and the services are managed by the cloud provider. A virtual machine can be spun up in a cloud environment in a matter of minutes, and the time to market is reduced from several months, as in the case of a traditional infrastructure, to weeks in a cloud environment. Infrastructure costs are reduced when compared with the data center management and human resource costs of the traditional model. Cloud infrastructure comes with support for APIs, and that opens up a whole new world of opportunity for automation. And finally, the built-in auto-scaling and elastic functionality of cloud infrastructure reduces resource wastage.

With virtualization and cloud, you could now provision infrastructure with a few clicks. While this approach is certainly faster and more efficient when compared to the traditional deployment methods, using the management console for resource provisioning is not always the ideal solution. It is okay to have this approach when we are dealing with a limited number of resources, but in a large organization with an elastic and highly scalable cloud environment with immutable infrastructure, this approach is not feasible. Once provisioned, the systems still have to go through different teams, with a lot of process overhead that increases the delivery time, and the chances of human error are still at large, resulting in inconsistent environments.

So different organizations started solving these challenges within themselves by developing their own scripts and tools. While some used simple shell scripts, others used programming languages such as Python, Ruby, Perl, or PowerShell. Everyone was solving the same problem: trying to automate infrastructure provisioning to deploy environments faster and in a consistent fashion by leveraging the API functionality of the various cloud environments. These evolved into a set of tools that came to be known as Infrastructure as Code. In the next lecture, we will see what Infrastructure as Code is in more detail.
In this lecture, we will get introduced to Infrastructure as Code, which is commonly known as IaC. We will also take a look at commonly used IaC tools.

Earlier, we discussed provisioning by making use of the management console of various cloud providers. The better way to provision cloud infrastructure is to codify the entire provisioning process. This way, we can write and execute code to define, provision, configure, update, and eventually destroy infrastructure resources. This is called Infrastructure as Code, or IaC. With Infrastructure as Code, you can manage nearly any infrastructure component as code, such as databases, networks, storage, or even application configuration.

The code you see here is a shell script; however, it is not easy to manage. It requires programming or development skills to build and maintain, there's a lot of logic that you will need to code, and it is not easily reusable. And that's where tools like Terraform and Ansible help, with code that is easy to learn, human readable, and easy to maintain. A large shell script can now be converted into a simple Terraform configuration file like this. With Infrastructure as Code, we can define infrastructure resources using a simple, human-readable, high-level language. Here is another example, where we make use of Ansible to provision three AWS EC2 instances using a specific AMI. Although Ansible and Terraform are both IaC tools, they have some key differences in what they are trying to achieve, and as a result they have some very different use cases. We will see these differences next.
There are several different tools that are part of the Infrastructure as Code family: Ansible, Terraform, Puppet, CloudFormation, Packer, SaltStack, Vagrant, Docker, etc. Although you could possibly make use of any of these tools to design similar solutions, they have all been created to address a very specific goal. With that in mind, IaC can be broadly classified into three types. Configuration management: Ansible, Puppet, and SaltStack fall into this category. Server templating: Docker, Packer, and Vagrant fall into this category. And finally, we have infrastructure provisioning tools such as Terraform and CloudFormation. Let's look at these in a bit more detail.

The first type of IaC tool that we are going to take a look at is configuration management tools. These include tools like Ansible, Chef, Puppet, and SaltStack, and they are commonly used to install and manage software on existing infrastructure resources such as servers, databases, networking devices, etc. Unlike the ad hoc shell scripts that we saw earlier, configuration management tools maintain a consistent and standard structure of code, and this makes it easier to manage and update as needed. They are also designed to run on multiple remote resources at once. An Ansible playbook or role can be checked into a version control repository; this allows us to reuse and distribute it as needed. However, perhaps the most important feature of a configuration management tool is that it is idempotent. This means that you can run the code multiple times, and every time you run it, it will only make the changes that are necessary to bring the environment into the defined state. It will leave anything already in place as it is, without us having to write any additional code.
Next, let's look at server templating tools. These are tools like Docker, Vagrant, and Packer from HashiCorp that can be used to create a custom image of a virtual machine or a container. These images already contain all the required software and dependencies installed on them, and for the most part this eliminates the need to install software after a VM or a container is deployed. The most common examples of server templated images are VM images such as those offered on osboxes.org, custom AMIs in Amazon AWS, and Docker images on Docker Hub and other container registries. Unlike configuration management tools, server templating tools promote immutable infrastructure. This means that once the VM or container is deployed, it is designed to remain unchanged. If there are changes to be made to the image, instead of updating the running instance, as in the case of configuration management tools such as Ansible, we update the image and then redeploy a new instance using the updated image. We have a section on immutable infrastructure later in this course where we look at it in much more detail.

The last type of IaC tool, which is specifically of interest for this course, is provisioning tools. These tools are used to provision infrastructure components using simple, declarative code. These infrastructure components can range from servers, such as virtual machines, to databases, VPCs, subnets, security groups, storage, and just about any service, based on the provider we choose. While CloudFormation is specifically used to deploy services in AWS, Terraform is vendor agnostic and supports provider plugins for almost all major cloud providers.
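To give a feel for what a provider plugin looks like in configuration, here is a minimal sketch using Terraform 0.13+ syntax (the provider version and region shown are assumptions for illustration, not from the video):

```hcl
# Declare which provider plugins this configuration needs
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"   # a version constraint; version constraints are covered later in the course
    }
  }
}

# Configure the provider itself
provider "aws" {
  region = "us-east-1"   # example region, chosen for illustration
}
```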
In the upcoming lecture, we will see how Terraform helps in provisioning infrastructure.
Let's now talk about Terraform and go over some of its features at a high level. As we discussed, Terraform is a popular IaC tool that is specifically useful as an infrastructure provisioning tool. Terraform is a free and open-source tool developed by HashiCorp. It installs as a single binary, which can be set up very quickly, allowing us to build, manage, and destroy infrastructure in a matter of minutes. One of the biggest advantages of Terraform is its ability to deploy infrastructure across multiple platforms, including private and public cloud, such as an on-premises vSphere cluster or cloud solutions such as AWS, GCP, or Azure, to name a few. These are just a few of the many resources that Terraform can manage.

So how does Terraform manage infrastructure on so many different kinds of platforms? This is achieved through providers. A provider helps Terraform manage third-party platforms through their API. Providers enable Terraform to manage cloud platforms like AWS, GCP, or Azure, as we've just seen, as well as network infrastructure like BIG-IP, Cloudflare DNS, Palo Alto Networks, and Infoblox; monitoring and data management tools like Datadog, Grafana, Wavefront, and Sumo Logic; databases like InfluxDB, MongoDB, MySQL, and PostgreSQL; and version control systems like GitHub, Bitbucket, or GitLab. Terraform supports hundreds of such providers and as a result can work with almost every infrastructure platform.

Terraform uses HCL, which stands for HashiCorp Configuration Language: a simple declarative language used to define the infrastructure resources to be provisioned as blocks of code. All infrastructure resources can be defined within configuration files that have a .tf file extension. The configuration syntax is easy to read, write, and pick up for a beginner. This sample code is used to provision a new EC2 instance on the AWS cloud. This code is declarative and can be maintained in a version control system, allowing it to be distributed to other teams. We cover the HCL syntax in more detail later in this course. We also have lots of hands-on labs where you will practice working with these files and gain a lot of experience by the end of this course.
So we said that the code is declarative, but what does declarative mean? The code we define is the state that we want our infrastructure to be in; that's the desired state. And this, on the right, is the current state, where there's nothing. Terraform will take care of what is required to go from the current state to the desired state without us having to worry about how to get there.

So how does Terraform do that? Terraform works in three phases: init, plan, and apply. During the init phase, Terraform initializes the project and identifies the providers to be used for the target environment. During the plan phase, Terraform drafts a plan to get to the target state. And then, in the apply phase, Terraform makes the necessary changes required on the target environment to bring it to the desired state. If for some reason the environment were to drift from the desired state, a subsequent terraform apply would bring it back to the desired state by fixing only the missing components.

Every object that Terraform manages is called a resource. A resource can be a compute instance, a database server in the cloud, or a physical server on-premises that Terraform manages. Terraform manages the life cycle of resources from provisioning to configuration to decommissioning. Terraform records the state of the infrastructure as it is seen in the real world, and based on this, it can determine what actions to take when updating resources for a particular platform. Terraform can ensure that the entire infrastructure is always in the defined state at all times. The state is a blueprint of the infrastructure deployed by Terraform.
Terraform can read attributes of existing infrastructure components by configuring data sources. These can later be used for configuring other resources within Terraform. Terraform can also import resources created outside of Terraform, either manually or by means of other IaC tools, and bring them under its control so that it can manage those resources going forward. Terraform Cloud and Terraform Enterprise provide additional features that allow simplified collaboration between teams managing infrastructure, improved security, and a centralized UI to manage Terraform deployments. All these features make Terraform an excellent enterprise-grade infrastructure provisioning tool.

Well, that was a quick introduction to Terraform at a high level. So let's dive in and explore all of these in much more detail in the upcoming lectures.

In this section, we will learn how to install Terraform.
Terraform can be downloaded as a single binary or executable file from the download section at www.terraform.io. Installing Terraform is as simple as downloading this file and copying it to the system path. Once installed, we can check the version by running the command terraform version. The latest version of Terraform as of this recording is 0.13, and we will be making use of this version throughout the course. Terraform is supported on Windows, macOS, and several Linux-based distributions. Please note that all the examples and labs used in this course will make use of Terraform running on a Linux machine, specifically version 0.13. And that's it; we can now start deploying resources using Terraform.

As stated earlier, Terraform uses configuration files written in HCL to deploy infrastructure resources. These files have a .tf extension and can be created using any text editor, such as Notepad or Notepad++ for Windows, command-line text editors such as vim or emacs in Linux, or any IDE of your choice.

So what is a resource? A resource is an object that Terraform manages. It could be a file on the local host, or a virtual machine on the cloud such as an EC2 instance, or services like S3 buckets, ECS, DynamoDB tables, IAM users, IAM groups, roles, policies, etc., or resources on other major cloud providers, such as Compute Engine and App Engine from GCP, databases on Azure, Azure Active Directory, etc. There are literally hundreds of resources that can be provisioned across most cloud and on-premises infrastructure using Terraform.
We will look into some of these examples later in this course, but for the first few sections we will stick to two very easy-to-understand resources: the local file type of resource and a special kind of resource called a random pet. It is important to use a simple resource type to really understand the basics of Terraform, such as the life cycle of resources, the HCL format, etc. Once we gain a good understanding of the basics, we can easily apply that knowledge to other real-life use cases, and we will see that in later sections of this course.

In this lecture, we will understand the basics of HCL, the HashiCorp Configuration Language, and then create a resource using Terraform.
Let us first understand the HCL syntax. An HCL file consists of blocks and arguments. A block is defined within curly braces, and it contains a set of arguments in key-value pair format representing the configuration data. But what is a block, and what arguments does it contain? In its simplest form, a block in Terraform contains information about the infrastructure platform and a set of resources within that platform that we want to create.

For example, let us consider a simple task: we want to create a file on the local system where Terraform is installed. To do this, first let us create a directory called terraform-local-file under the /root directory. This is the directory under which we will create the HCL configuration file. Once we change into this new directory, we can create a configuration file called local.tf, and within this file we can define a resource block. Inside the resource block, we specify the file to be created as well as its contents, using the block arguments.

Let us break down the local.tf file to understand what each line means. The first element in this file is a block; this can be identified by the curly braces. The type of block we see here is called a resource block, and it can be identified by the keyword resource at the beginning of the block. Following the keyword resource, we have the declaration of the resource type that we want to create. This is a fixed value and depends on the provider where we want to create the resource. In this case, we have the resource type called local_file. A resource type provides two bits of information. First is the provider, which is represented by the word before the underscore in the resource type; here, we are making use of the local provider. The word following the underscore, which is file in this case, represents the type of resource. The next and final declaration in this resource block is the resource name. This is the logical name used to identify the resource, and it can be named anything, but in this case we have called it pet, as the file we are creating contains information about pets.

Within this block, inside the curly braces, we define the arguments for the resource, which are written in key-value pair format. These arguments are specific to the type of resource we are creating, which in this case is the local_file. The first argument is filename; to this, we assign the absolute path of the file we want to create. In this example, it is set to /root/pets.txt. We can also add some content to this file by making use of the content argument; to this, let us add the value "We love pets!". The argument names filename and content are specific to the local_file resource we want to create, and they cannot be changed. In other words, the local_file resource type expects that we provide the filename and content arguments. Each resource type has specific arguments that it expects.
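Putting these pieces together, the complete local.tf file described in this walkthrough would look like this (a sketch assembled from the lecture's description):

```hcl
# local.tf -- resource type "local_file": provider "local", resource type "file"
# Resource name "pet" is our logical name for this resource
resource "local_file" "pet" {
  filename = "/root/pets.txt"   # absolute path of the file to create
  content  = "We love pets!"    # the contents written into the file
}
```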
We will see more of that as we progress through the course. And that's it: we now have a complete HCL configuration file that we can use to create a file by the name of pets.txt. This file will be created in the /root directory, and it will contain a single line of data.

The resource block that we see here is just one example of the configuration blocks used in HCL, but it is also a mandatory block needed to deploy a resource using Terraform. Here is an example of a resource file created for provisioning an AWS EC2 instance. The resource type is aws_instance, we have named the resource webserver, and the arguments that we have used here are the AMI ID and the instance type.
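Based on that description, the EC2 resource block would look roughly like this (the AMI ID and instance type shown are placeholder values for illustration, not taken from the video):

```hcl
# Resource type "aws_instance": provider "aws", resource type "instance"
resource "aws_instance" "webserver" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t2.micro"               # placeholder instance type
}
```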
Here is another example of a resource file used to create an AWS S3 bucket. The resource type in this case is aws_s3_bucket, the resource name that we have chosen is data, and the arguments that we have provided are the bucket name and the ACL.

A simple Terraform workflow consists of four steps. First, write the configuration file. Next, run the terraform init command. After that, review the execution plan using the terraform plan command. Finally, once we are ready, apply the changes using the terraform apply command.

With the configuration file ready, we can now create the file resource using the Terraform commands as follows. First, run the terraform init command. This command will check the configuration file and initialize the working directory containing the .tf file. One of the first things this command does is understand that we are making use of the local provider, based on the resource type declared in the resource block. It will then download the plugin to be able to work with the resources declared in the .tf file. From the output of this command, we can see that terraform init has installed a plugin called local.

Next, we are ready to create the resource, but before we do that, if we want to see the execution plan that will be carried out by Terraform, we can use the command terraform plan. This command will show the actions that will be carried out by Terraform to create the resource. Terraform knows that it has to create resources, and this is displayed in the output, similar to a diff command in Git. The output has a plus symbol next to the local_file type resource called pet; this includes all the arguments that we specified in the .tf file for creating the resource. But you'll also notice that some default or optional arguments that we did not specifically declare in the configuration file are also displayed on the screen. The plus symbol implies that the resource will be created.
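For our local_file example, the plan output looks roughly like this (a sketch; the exact fields and formatting vary between Terraform versions):

```
Terraform will perform the following actions:

  # local_file.pet will be created
  + resource "local_file" "pet" {
      + content  = "We love pets!"
      + filename = "/root/pets.txt"
      + id       = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```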
Now remember, this step will not create the infrastructure resource yet. This information is provided for the user to review and ensure that all the actions to be performed in this execution plan are desired. After the review, we can create the resource, and to do this we make use of the terraform apply command. This command will display the execution plan once again, and it will then ask the user to confirm by typing yes to proceed. Once we confirm, it will proceed with the creation of the resource, which in this case is a file. We can validate that the file was indeed created by running the cat command to view the file. We can also run the terraform show command within the configuration directory to see the details of the resource that we just created. This command inspects the state file and displays the resource details. We will learn more about this command and about state in a later lecture.

So we have now created our first resource using Terraform. Before we end this section, let us go back and look at the configuration blocks in the local.tf file. In this example, we used the resource type local_file and learned that the keyword before the underscore is the provider name, local. But how do we know that? How do we know what resource types other than local_file are available under the local provider? And finally, how do we know what arguments are expected by the local_file resource?
Earlier, we mentioned that Terraform supports over a hundred providers, including the local provider we have used in this example. Other common examples are aws, to deploy resources in the Amazon AWS cloud, as well as providers for Azure, GCP, AliCloud, etc. Each of these providers has a unique list of resources that can be created on that specific platform, and each resource can have a number of required or optional arguments that are needed to create it. And we can create as many resources of each type as needed. It is impossible to remember all of these options, and of course we don't have to do that. The Terraform documentation is extremely comprehensive, and it is the single source of truth that we need to follow. If we look up the local provider within the documentation, we can see that it only has one type of resource, called local_file. Under the arguments section, we can see that there are several arguments that the resource block accepts, out of which only one is mandatory: the filename. The rest of the arguments are optional.
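For instance, the same local_file resource can be written with just the mandatory argument, or with optional ones added. A sketch (file_permission is one of the local_file provider's optional arguments; the permission value shown is just an example):

```hcl
resource "local_file" "pet" {
  filename        = "/root/pets.txt"   # the only mandatory argument
  content         = "We love pets!"    # optional
  file_permission = "0700"             # optional: restricts the file to its owner
}
```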
that's it for this lecture now let's head over to the hands-on labs and practice working with hcl and create
our first resource using terraform this is an introductory video to give you a quick tour of the hands-on labs
available in this course each lab is specially designed to help you practice and gain knowledge on the
topics that we learned in the associated lectures and demos click on this button to open the
hands-on lab and wait for the lab environment to load it can take up to a minute
the lab interface has two sections the terminal for the iac server is on the left hand side
and this is where you will be running the commands and carrying out tasks based on the questions that's asked
the questions can be found in the quiz portal which is on the right hand side of the split screen
there are two types of questions that you can expect the first type is a multiple choice
question where you'll have to look up the answer for a specific question some of these may be straightforward but
for some of them you will have to inspect the terraform configuration from the terminal
for example to answer this question we have to navigate to a specific path and inspect the extension of the file that
is created inside one way to do this is from the terminal on the left hand side we can see that there is a file called
main.tf created in this directory so let's go back to the quest portal and select dot tf as our answer for this
question the second type of question is the configuration test
for these questions you'll have to carry out specific tasks such as writing terraform configuration files and
running the terraform workflow to create update or destroy infrastructure for example here we have to run the
terraform init command within the same configuration directory this can again be done using the
terminal on the left hand side if you are unsure how to attempt a question click on the hint button which
will provide you helpful hints and point you in the right direction once you get to the aws sections of this
course you will get to work with aws services the labs are integrated with an aws test
framework which will allow you to build update and destroy resources on aws for example this question requires us to
create an iam user called mary and then run terraform init in the configuration directory
the hands-on labs of this course are slightly different from our other courses
to optimize the learning experience for terraform we have integrated the visual studio code ide into the lab
this will allow you to make use of the built-in terraform extensions that will help you write terraform configuration
files quickly and efficiently to access visual studio code click on the vs code tab on the top of the
terminal which will open it up in a new tab from here we can open the configuration
directories on the iac server and navigate to the path specified in the question
we can also make use of the terraform extensions which are pre-installed to answer this question let's create a
file called iamuser.tf in the iam directory to easily create a resource block let's
use the command completion feature this will load the template for the resource block
next for the resource type just type in aws and press control and spacebar together
this will list all the resource types available in aws we can now type iam and select the
appropriate resource type which in this case is aws_iam_user we can also look up arguments for this
resource block the same way inside the block press control and space bar to see a list of all available
arguments and choose the ones that you need once ready run terraform commands from
the terminal we can also do this from within vs code we can right click inside the directory
and click on open in integrated terminal this will open up the bash terminal on the bottom of the screen
well that's it for this video we wish you an excellent learning experience ahead
in this lecture we will learn how to update and destroy infrastructure using terraform
in the previous lecture we saw how to create a local file now let us see how we can update and
destroy this resource using terraform first let us try to update this resource let us add in a file permission argument
to update the permission of the file to 0700 instead of the default value of 0777 this will remove any permission for
everyone else except the owner of the file now if we run terraform plan we will see
an output like this from the output we can see that the resource will be replaced
the minus plus symbol in the beginning of the resource name in the plan implies that it will be deleted and then
recreated the line with the command that reads forces replacement is responsible for
the deletion and recreation and in this example this is caused by the file permission argument that we
added to the configuration file even though the change we made was trivial terraform will delete the old
file and then create a new file with the updated permissions this type of infrastructure is called an
immutable infrastructure we saw this briefly when we discussed the different types of iac tools
if you want to go ahead with the change use the terraform apply command and then type yes when prompted
upon confirmation the existing file is deleted and recreated with the new permissions
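as a rough sketch assuming the file path and content from the earlier examples the updated resource block might look like this

```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "We love pets!"

  # adding or changing this argument forces the file to be deleted and recreated
  file_permission = "0700"
}
```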
to delete the infrastructure completely run the terraform destroy command this command shows the execution plan as
well and you can see the resource and all of its arguments have a minus symbol next to them
this indicates that the resource will be destroyed to go ahead with the destroy confirm yes
in the prompt this will delete all the resources in the current configuration directory
in this example it deletes the file /root/pets.txt that's it for this lecture let's head
over to the hands-on labs and practice updating and deleting infrastructure resources using terraform
in this lecture let's take a look at providers in more detail we saw in the previous lecture that
after we write a terraform configuration file the first thing to do is to initialize the directory with the
terraform init command when we run terraform init within a directory containing the configuration
files terraform downloads and installs plugins for the providers used within the configuration
these can be plugins for cloud providers such as aws gcp azure or something as simple as the local provider that we use
to create a local file type resource terraform uses a plugin based architecture to work with hundreds of
such infrastructure platforms terraform providers are distributed by hashicorp and are publicly available in
the terraform registry at the url registry.terraform.io there are three tiers of providers
the first one is the official provider these are owned and maintained by hashicorp and include the major cloud
providers such as aws gcp and azure the local provider that we have used so far is also an official provider
the second type of provider is a verified provider a verified provider is owned and
maintained by a third party technology company that has gone through a partner provider process with hashicorp
some of the examples are the big-ip provider from f5 networks heroku digitalocean etc
and finally we have the community providers that are published and maintained by individual contributors of
the hashicorp community terraform init command when run shows the version of the plugin that is
being installed in this case we can see that the plugin name hashicorp slash local with the
version 2.0.0 has been installed in the directory the terraform init is a safe command and
it can be run as many times as needed without impacting the actual infrastructure that is deployed
the plugins are downloaded into a hidden directory called .terraform/plugins in the working directory
containing the configuration files in our example the working directory is /root/terraform-local-file
the plugin name that you see here
hashicorp/local is also known as the source address
terraform to locate and download the plugin from the registry let's take a closer look at the format of the name
the first part of the name which in this case is hashicorp is the organizational namespace
this is followed by the type which is the name of the provider such as local other examples of providers are
aws azurerm google random etc the plugin name can also optionally have a hostname in front the hostname is the
name of the registry where the plugin is located if omitted it defaults to
registry.terraform.io which is hashicorp's public registry given the fact that the local provider
is stored in the public terraform registry within the hashicorp namespace the source address for it can be
represented as registry.terraform.io/hashicorp/local
or simply as hashicorp/local by omitting the hostname by default terraform installs the latest
version of the provider provider plugins especially the official ones are constantly updated with newer
versions this is done to bring in new functionality or to add in bug fixes and these can introduce breaking changes
to your code we can lock down our configuration files to make use of a specific provider
version as well we will see how to do that later in this course now let's take a look at the
configuration directory and the file naming conventions used in terraform so far we have been working with a
single configuration file called local.tf and this is within the directory called
terraform-local-file which is our configuration directory this directory is not limited to one
configuration file we can create another configuration file like this
the cat.tf is another configuration file that makes use of the same local_file resource
when applied it will create a new file called cat.txt terraform will consider any file with
the .tf extension within the configuration directory another common practice is to have one
single configuration file that contains all the resource blocks required to provision the infrastructure
a single configuration file can have as many configuration blocks as you need
a common naming convention used for such a configuration file is to call it the main.tf
there are other configuration files that can be created within the directory such as the variables.tf outputs.tf and
providers.tf we will talk more about these files in the later sections of this course
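as a quick illustration of the provider concepts above a providers.tf file could pin the provider source address and version like this (the version shown is just an example and version constraints are covered later in the course)

```hcl
terraform {
  required_providers {
    local = {
      # source address format: [hostname/]namespace/type
      source  = "hashicorp/local"
      version = "2.0.0"
    }
  }
}
```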
now let's head over to the hands-on labs and explore working with providers in this lecture we will see how to use
multiple providers and resources in terraform until now we have been making use of a
single provider called local to deploy a local file in the system terraform also supports the use of
multiple providers within the same configuration to illustrate this
let's make use of another provider called random this provider allows us to create random
resources such as a random id a random integer a random password etc let us see how to use this provider and
create a resource called random pet this resource type will generate a random pet name when applied
by making use of the documentation we can add a resource block to the existing main.ta file like this
here we are making use of the resource type called random pet in an earlier lecture we saw that the
resource type can be broken down into two parts the keyword before the underscore is
the provider which in this case is random the keyword following it is the resource
type which is pet let's call this resource mypet within this resource block we will use
three arguments one is the prefix that is to be added to the pet name
the second argument is the separator between the prefix and the pet name that is
generated the final argument is the length which is the length of the pet name to
be generated in words our main.tf file now has resource definition for two different
providers one resource of the local file type that we have already created earlier
and another resource of type random pet before we generate an execution plan and create these resources we have to run
the terraform init command again now this is a mandatory step as the plugin for the random provider should be
initialized in the configuration directory before we can make use of it in the command output of the terraform
init we can see that the local provider was previously installed and it will be reused
the plugin for the random provider on the other hand will be installed as it was not used before
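for reference a main.tf with both resource blocks might look like this (the argument values are assumed from the lecture's examples)

```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "We love pets!"
}

resource "random_pet" "my-pet" {
  prefix    = "Mrs"
  separator = "."
  length    = "1"
}
```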
we can now run the terraform plan to review the execution plan as expected the local file resource
called pet will not be updated as it is unchanged from the previous apply
a new resource by the name of my pet will be created based on the new resource block that we just added
now let's apply the configuration using terraform apply as expected the local file resource is
left as it is but now a new resource has been created which is called my pet
random provider is a logical provider and it displays the results of the pet name on the screen like this
here an attribute called id which contains the name of the pet is written by the apply command
before we move on please note that in our illustration the dog icon stands for a pet
and we'll be making use of it throughout this course the random pet can generate any pet name
and it does not have to be specifically a dog and now let's head over to the hands-on
labs and explore working with multiple providers in terraform in this lecture
we will see how to make use of variables in terraform we have used several arguments in our
terraform block so far for the local file we have a file name and content
and for the random pet resource we have used the prefix separator and length as the arguments
since these values are directly defined within the main configuration files they are considered to be hard coded values
hard coding values is not a good idea for one this limits the reusability of the code
which defeats the purpose of using iac we want to make sure that the same code can be used again and again to deploy
resources based on a set of input variables that can be provided during the execution
and that is where input variables come into the picture just as in any general purpose
programming language such as bash scripting or powershell we can make use of input variables in
terraform to assign variables let us create a new configuration file called variables.tf
and define the values like this the variables.tf file just like the main.tf file consists of blocks and
arguments to create a variable use the keyword called variable
this is followed by the variable name this can be named anything but as a standard use an appropriate name such as
the argument name for which we are using the variable within this block we can provide a
default value for the variable this is an optional parameter but it is a quick and simple way to assign values
to the variables we will see the other methods to do so in a later lecture
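a variables.tf along these lines would cover the arguments used so far (the default values shown here are illustrative)

```hcl
variable "filename" {
  default = "/root/pets.txt"
}

variable "content" {
  default = "We love pets!"
}

variable "prefix" {
  default = "Mrs"
}

variable "separator" {
  default = "."
}

variable "length" {
  default = "1"
}
```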
great now we have our variable configuration file but
how do we use it within the main.tf file to do this we can replace the argument values with the variable names prefixed
with a war like this when using variables you do not have to enclose the values inside double quotes
as you would when providing actual values and using the same execution flow that
we have seen many times by now we can create the resources using terraform plan followed by the terraform
apply the resources have now been created as expected
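with input variables in place the main.tf might be rewritten like this (the variable names are assumed from the lecture)

```hcl
resource "local_file" "pet" {
  filename = var.filename
  content  = var.content
}

resource "random_pet" "my-pet" {
  prefix    = var.prefix
  separator = var.separator
  length    = var.length
}
```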
now if you want to make an update to the resources by making changes to the
existing arguments we can do that by just updating the variables.tf file the main.tf need not be modified
for example let us update the local file resource to create the file at the same location
but with an updated content that reads my favorite pet is mrs whiskers and for the random pet resource let's
change the length of the pet name to two we can do that like this as expected when we run terraform apply
it will recreate the resources the content of the file has been changed and the pet name now has two words
following the prefix before we conclude this lecture here is an example of what our
configuration files would look like when creating an ec2 instance in aws with terraform while making use of input
variables don't worry if the resource block and the arguments are unfamiliar we have a
separate lecture where we'll be making use of aws resources later in the course in this lecture we will take a close
look at the variables block in terraform first let us look at the different arguments that a variable block uses
the variable block in terraform accepts three parameters the first one which we have already used
is the default parameter this is where we specify the default value for a variable
the others are type and description description is optional but it is a good practice to use this argument to
describe what the variable is used for the type argument is optional as well but when used it enforces the type of
variable being used the basic variable types that can be used are string
number and boolean string variables as we have seen in our example so far accept a single value
which can be alphanumeric that is consisting of letters and numbers the number variable type accepts a
single value of a number which can be positive or negative and the boolean variable type accepts a
value of true or false the type parameter as mentioned previously is optional
if it is not specified in the variable block it is set to the type any by default
besides these three simple variable types terraform also supports additional types such as list map set object and
tuple let us now see how to use all of these in terraform
let us start with list a list is a numbered collection of values and it can be defined like this
in this example we have a variable called prefix that uses a list of values mr mrs and sir
but why do we call it a numbered collection well that is because each value which is
also known as an element can be referenced by the number or index within that list
the index of a list always begins at zero in this case the first element of the
list at index 0 is the word mister the element at index 1 is misses and the final element at index 2 is sir
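a list variable and an index reference might be sketched like this (the values are assumed from the lecture)

```hcl
variable "prefix" {
  default = ["Mr", "Mrs", "Sir"]
  type    = list
}

resource "random_pet" "my-pet" {
  # index 0 selects the first element, "Mr"
  prefix = var.prefix[0]
}
```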
these variables can be accessed within a configuration file like this with the index specified within square
brackets hence the expression var.prefix with index 0
uses the value mr var.prefix with index 1 uses the value mrs
and with index 2 it uses the value sir next let us look at the type called map
a map is data represented in the format of key value pairs in the variables.tf file let us create a
new variable called file_content with the type set to map in the default values we can specify as
many key value pairs as needed enclosed within curly braces here statement1 and statement2 are the
keys and the string data following them are the values now to access a specific value from the
map within the terraform configuration file we can make use of key matching in this case we want the content of the
local file resource to be the value of the key called statement2 and for that we use the expression
var.file_content which is the name of the map type variable followed by the matching key within square brackets
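putting the map variable and the key lookup together a sketch might look like this (file path and values are illustrative)

```hcl
variable "file_content" {
  type = map
  default = {
    "statement1" = "We love pets!"
    "statement2" = "We love animals!"
  }
}

resource "local_file" "my-pet" {
  filename = "/root/pets.txt"
  # key lookup selects the value of statement2
  content  = var.file_content["statement2"]
}
```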
we can also combine type constraints for example if you want a list of string type elements we can declare it like
this to use a list of numbers change it like this
if the variable values used do not match the type of constraint the terraform commands will fail
in this example we have used the type list where the element should be of type number
but the default values are all of type string now when we run terraform commands such
as a plan or an apply you will see an error like this that states that the default value is not compatible with the
variable type constraint and that a number is required and not a string
and the same is applicable with maps as well we can use type constraints to make sure
that the values of a map are of a specific type in the first variable block we are using
a map of type string and in the second one we are making use of a map that uses numbers
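combined type constraints can be declared like this (the variable names and values here are illustrative)

```hcl
# a list that only accepts string elements
variable "prefix" {
  type    = list(string)
  default = ["Mr", "Mrs", "Sir"]
}

# a map whose values must all be numbers
variable "pet_count" {
  type = map(number)
  default = {
    "dogs" = 3
    "cats" = 1
  }
}
```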
let us now look at sets set is similar to a list the difference between a set and a list
is that a set cannot have duplicate elements in these examples we have a variable
type of set of strings or a set of numbers the examples on the left are good but
the ones on the right aren't they have duplicate values in them that will throw an error
the default values are declared just like you would do for a list but remember that there shouldn't be any
duplicate values here the next type of variable that we are going to look at are objects
with objects we can create complex data structures by combining all the variable types that we have seen so far
for example let us consider a new variable called bella which is the name of a cat
this variable is used to define the different features of the cat such as its name which is a string
the color which is a string as well age which is a number the food that it eats which is a list of
strings and a boolean value indicating if it's a favorite pet or not
let us now assign some values to this variable let us use name is equal to bella
color is equal to brown age is equal to seven food is fish chicken and turkey
and favorite pet which is set to true and we can use the same default values within a variable block like this
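a sketch of that object variable might look like this (the attribute names are assumed from the spoken description)

```hcl
variable "bella" {
  type = object({
    name         = string
    color        = string
    age          = number
    food         = list(string)
    favorite_pet = bool
  })
  default = {
    name         = "bella"
    color        = "brown"
    age          = 7
    food         = ["fish", "chicken", "turkey"]
    favorite_pet = true
  }
}
```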
the last variable type that we are going to look at is tuples tuple is similar to a list and consists
of a sequence of elements the difference between a tuple and a list is that list uses elements of the
same variable type such as string or number but in case of tuple we can make use of
elements of different variable types the type of variables to be used in a tuple is defined within the square
brackets in this example we have three types of elements defined the first is a string
second is a number and finally a boolean the variables to be passed to this should exactly be 3 in number and of
that specific type for it to work here we have passed the value of cat to the string element the number 7 to the
number element and true to the boolean adding additional elements or incorrect type will result in an error as seen
here if you add an additional value of dog to the variable it will fail as the tuple
only expects three elements of type string number and boolean that's it for this lecture let's head
over to the hands-on labs and explore working with variable types in terraform in this lecture we will take a look at
the different ways in which we can make use of input variables in terraform so far we have created input variables
in terraform and assigned default values to them based on a variable type this is just one of the ways to pass in
values to the variable earlier we learned that the default parameter in a variable block is
optional this means that we can very well have our variable block look like this
but what would happen if we run terraform commands now
when we run terraform apply we will be prompted to enter values for each variable used in an interactive
mode if you do not want to supply values in an interactive mode we can also make use
of command line flags like this with the terraform command we can make use of the -var option
with the variable=value format we can pass in as many variables as we
want with this method by making use of the -var flag multiple times we can also make use of environment
variables with TF_VAR_ followed by the name of a declared variable like this
in this example TF_VAR_filename sets the value of the variable called filename to the
value /root/pets.txt and similarly the variable called length now has the value of 2
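for example both approaches might be used like this (variable names and values assumed from the lecture)

```shell
# passing values with the -var flag, repeated once per variable
terraform apply -var "filename=/root/pets.txt" -var "length=2"

# or exporting environment variables prefixed with TF_VAR_
export TF_VAR_filename="/root/pets.txt"
export TF_VAR_length="2"
terraform apply
```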
and finally when we are dealing with a lot of variables we can load values by making use of variable definition files
like this these variable definition files can be named anything but should always end in
either .tfvars or .tfvars.json here we have declared
variables and the values in a file called terraform.tfvars if you look at the syntax used to create
this file you'll observe that this is using the same syntax as an hcl file but it only consists of variable assignments
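a terraform.tfvars file for our example might look like this (the values are illustrative)

```hcl
filename  = "/root/pets.txt"
content   = "We love pets!"
prefix    = "Mrs"
separator = "."
length    = "2"
```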
the variable definition file if called terraform.tfvars or
terraform.tfvars.json or by any other name ending with .auto.tfvars or .auto.tfvars.json
will be automatically loaded by terraform if you use any other file name such as
variable.tfvars for example you will have to pass it along with a command line flag called -var-file
like this finally it is important to note that we can use any of the options that we have
seen in this lecture to assign values to the variables but if we use multiple ways to assign
values for the same variable terraform follows the variable definition precedence to understand which value it
should accept to illustrate this let us make use of a simple example
in this case we have a main configuration file with a single resource
a local file which will create a file at a path declared in a variable called filename
in the variables.tf file we have not specified a default value for this variable
and we have assigned different values to this variable in multiple ways we have exported the environment
variable called TF_VAR_filename with the value of /root/cats.txt
the terraform.tfvars file has a value of /root/pets.txt for the same variable
we have also made use of a variable definition file with the name variable.auto.tfvars
with the value of /root/mypet.txt
and finally we are also making use of the -var option while running the terraform apply command with the value
of /root/best-pet.txt so in this case
which one of these values would be accepted terraform follows a variable definition
precedence order to determine this first it loads the environment variables next the value in the terraform.tfvars
file this is followed by any file that ends with .auto.tfvars
or .auto.tfvars.json in alphabetical order and finally terraform considers the
command line flags -var or -var-file which take the highest priority and will overwrite any of the
previous values in this case the variable filename will be assigned the value of
/root/best-pet.txt that's it for this lecture let's head over to the hands-on lab and practice
working with the concepts that we learned in this lecture in this lecture we will learn how to
link two resources together by making use of resource attributes in the last few lectures we saw how to
use variables to improve the reusability of our code right now we have two resources in our
configuration file each of this resource has a set of arguments that are used to create that
resource for the file resource we have used the file name and content as the arguments
and for the pet resource we have used the prefix separator and length when this configuration is applied
terraform creates a file and a random pet resource the name of the random pet is displayed
on the screen as an id which is mr bull in this case as it stands there is no dependency
between these two resources but this rarely ever happens in a real world infrastructure provisioning process
there are bound to be multiple resources that are dependent on each other so
what if you want to make use of the output of one resource and use it as an input for another one
what if we want the content of the file to use the name generated by the random pet resource
currently the content of the file is set to my favorite pet is mr cat but what if we want it to be set to the
name that is generated by the random pet resource to understand this let us jump back to
the documentation for the random pet resource in registry.terraform.io you would have noticed that there are
plenty of examples which are provided in the documentation including examples for the arguments but
there is also a section called attribute reference which provides the list of attributes returned back from the
resource after you run a terraform apply in this case the random pet resource returns just one attribute called id
which is of type string the id is the pet name generated after we run terraform apply as we have seen
before our goal is to make use of the attribute called id
and to make use of it as the content of the local file resource for this we can make use of an
expression like this this expression is used to reference the attribute from the resource called mypet
the syntax for using this reference expression is the resource type followed by the resource name and the attribute to
be used all of which are separated by a period the resource type in this example is
random_pet this is followed by mypet which is the resource name and the attribute used is id
you would have also noticed that we are making use of the dollar symbol followed by the expression which is enclosed
within curly braces this is known as an interpolation sequence
since the content argument already uses a string type data this sequence is used to evaluate the expression given within
the curly braces convert the result to a string and then insert it to the final string like this
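the resulting configuration might be sketched like this (the resource names and values are assumed from the lecture)

```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  # the interpolation sequence inserts the generated pet name into the string
  content  = "My favorite pet is ${random_pet.my-pet.id}"
}

resource "random_pet" "my-pet" {
  prefix    = "Mr"
  separator = "."
  length    = "1"
}
```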
now let us apply the changes made and allow terraform to recreate the local file
in the output you can see that the content is being replaced to our desired value and it contains the name of the
pet that was generated by the random pet resource that's it for this lecture now let's
head over to the hands-on labs and explore working with reference expressions and resource attributes in
terraform in this lecture we will take a look at the different types of resource
dependencies in terraform in the previous lecture we saw how to link one resource to another using
reference attributes by making use of reference expression and interpolation we were able to make
use of the output of the random pet resource as an input for the local file resource
now when terraform creates these resources it knows about the dependency since the local file resource depends on
the output of the random pet resource as a result it uses the following order to provision them
first terraform creates the random pet resource and then it creates the local file resource
when resources are deleted terraform deletes it in the reverse order the local file first and then the random pet
this type of dependency is called the implicit dependency here we are not explicitly specifying
which resource is dependent on which other resource terraform figures it out by itself
however there is another way to specify dependency within the configuration file for example
let us make use of the older configuration file without making use of the reference expression for the file
content if you still want to make sure that the local file resource is created after the
random pet we can do this by using the depends on argument like this here we have added a depends on argument
inside the resource block for the local file and we have provided a list of dependencies that includes the random pet
resource called mypet this will ensure that the local file is only created after the random pet
resource is created this type of dependency is called an explicit dependency
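a sketch of the explicit dependency might look like this (argument values assumed from earlier examples)

```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "My favorite pet is Mr.Cat"

  # explicit dependency: create this file only after random_pet.my-pet exists
  depends_on = [
    random_pet.my-pet
  ]
}

resource "random_pet" "my-pet" {
  prefix    = "Mr"
  separator = "."
  length    = "1"
}
```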
explicitly specifying a dependency is only necessary when a resource relies on some other resource indirectly and it
does not make use of a reference expression as seen in this case in the later sections of this course we
will see how and when to use this in a real-world use case now let's head over to the hands-on labs
and explore working with resource dependencies in terraform let us now look at output variables in
terraform so far we have used input variables and reference expressions in our terraform
configuration files along with input variables terraform also supports output variables
these variables can be used to store the value of an expression in terraform for example let us go back to the
configuration file that we used in the previous lecture we already know that the random pet
resource will generate a random pet name using the attribute called id when we apply the configuration
to save this id in an output variable called pet name we can create an output block like this
the syntax used to create this output block is the keyword called output followed by the name that we want to
call this variable inside this block the mandatory argument for value is the reference expression
we can also add a description which is an optional argument to describe what this output variable will be used for
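the output block might be sketched like this (the output name and description text are illustrative)

```hcl
output "pet-name" {
  value       = random_pet.my-pet.id
  description = "Record the value of the pet name generated by the random_pet resource"
}
```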
once this block has been created when we run terraform apply we can see that the output variable is printed on the screen
once the resource has been created we can also make use of the terraform output command to print the value of the
output variables the command terraform output by itself will print all the output variables
defined in all the files in the current configuration directory we can also use this command
specifically to print the value of an existing output variable like this now you may wonder where we can make use
of these output variables we already saw that dependent resources can make use of reference expressions to
get the output from one resource block as an input to another block as such output variables are not really
required here the best use of terraform output variables is when you want to quickly
display details about a provisioned resource on the screen or to feed the output variables to other
iac tools such as ad-hoc scripts or ansible playbooks for configuration management and testing
now let's head over to the hands-on labs and practice working with output variables in this lecture we will see
the purpose of using state in terraform we already saw how terraform uses the state file to map the resource configuration
to the real-world infrastructure this mapping allows terraform to create execution plans when a drift is
identified between the resource configuration files and the state hence a state file can be considered to
be a blueprint of all the resources that terraform manages out there in the real world
when terraform creates a resource it records its identity in the state be it the local file resource that
creates a file in the machine a logical resource such as the random pet which just throws out the random pet name or
resources in the cloud each resource created and managed by terraform would have a unique id which
is used to identify the resources in the real world besides the mapping between resources in
the configuration and the real world the state file also tracks metadata details such as resource dependencies
earlier we learned that terraform supports two types of dependencies the implicit and the explicit
if we inspect the example configuration file we can see that we have three resources to provision here the local
file resource called pet depends on the random pet resource this is evident from the content
argument in the local file resource block that uses a reference to the random pet resource
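as a sketch, the three resources described here might look like this in the configuration file (the file paths, pet prefix, and file contents are assumptions for illustration):

```hcl
# depends on the random_pet resource via the reference in content
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "My favorite pet is ${random_pet.my-pet.id}"
}

resource "random_pet" "my-pet" {
  prefix    = "Mrs"
  separator = "."
  length    = 1
}

# unrelated to the other two resources, so it can be created in parallel
resource "local_file" "cat" {
  filename = "/root/cat.txt"
  content  = "cats are awesome"
}
```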
the local file resource called cat is unrelated to the other and hence it can be created in parallel with the random
pet resource when we apply this configuration the random pet resource called mypet and
the local file called cat can be created first at the same time but the local file resource called pet
can only be created after the random pet resource is created we can see that the local file with the
name cat and the random pet resource with the name mypet are the first resources to be created
once that is done only then is the local file resource called pet created until now we do not rely on state for
the provisioning but what if we decide to delete the random
pet resource and the dependent local file from the configuration let us now look at what happens when we
remove resources from the file for example we remove the local file and the random pet resource from the file
if we were to apply the configuration now terraform knows that it has to delete these resources
however in which order should it delete them should it delete the random pet resource first or the local file
the information about the resource dependency is no longer available in the configuration file as we have removed
those lines from it this is where terraform relies on the state and the fact that it tracks
metadata within the state file we can clearly see that the local file resource called pet
has a dependency on the random pet resource since these two resources have now been
removed from the configuration terraform now knows that it should delete the local file first followed by the random
resource one other benefit of using state is performance
when dealing with a handful of resources it may be feasible for terraform to reconcile state with the
real world infrastructure after every single terraform command such as plan or apply
but in the real world terraform would manage hundreds and thousands of such resources
and when these resources are distributed to multiple providers and especially those that are on the cloud it is not
feasible for terraform to reconcile state for every terraform operation this is because it would take several
seconds to several minutes in some cases for terraform to fetch details about every single resource from all the
providers which are configured for larger infrastructures this may prove to be too slow
in such cases the terraform state can be used as a record of truth without having to reconcile
this would improve the performance significantly terraform stores a cache of attribute
values for all resources in the state and we can specifically make terraform refer to the state file alone while
running commands and bypass having to refresh state every time to do this we can make use of the
-refresh=false flag with all the terraform commands that make use of state
when we run the plan with this flag you can see that terraform does not refresh state instead it relies on the cached
attributes and in this example the content has changed in the configuration file
and as a result the execution plan shows a resource replacement the final benefit of state that we are
going to look at is collaboration when working as a team as we have seen in the previous lectures
the terraform state file is stored in the same configuration directory in a file called terraform.tfstate
in a normal scenario this means that the state file resides in a folder or a directory on the end user's laptop
this is all right when starting off with terraform learning and implementing small projects individually however this
is far from ideal when working as a team every user in the team should always have the latest state data before
running terraform and make sure that nobody else runs terraform at the same time
failure to do so can result in unpredictable errors as a consequence in such a scenario it is highly
recommended to save this terraform state file in a remote data store rather than to rely on a local copy
this allows the state to be shared between all members of the team securely examples of remote state stores are
amazon web services s3 service hashicorp consul and terraform cloud we will learn more about remote state
stores in much more detail in a later section we will also learn about terraform cloud in a separate section of its own
now let's head over to the hands-on labs and explore working with terraform state in the previous lectures we learned
about terraform state and its benefits terraform state is the single source of truth for terraform to understand what
is deployed in the real world however there are a few things to keep note of when working with state and we
will learn about them in this lecture state is a non-optional feature in terraform
however there are a few considerations first one is that the state file contains sensitive information
within it it contains every little detail about our infrastructure here is an example snippet of a state
file for an aws ec2 instance which is essentially a virtual machine on the aws cloud
this state file consists of all the attributes for the virtual machine that is provisioned such as the allocated cpus
the memory operating system or the image used type and size of disks etc it also stores information such as the
ip address allocated to the vm and the ssh key pair used etc for resources such as databases the
state may also store initial passwords when using local state the state is stored in plain text json files
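as a rough illustration, a fragment of such a state file might look like this (the resource name and all values here are made up, and the real structure contains many more fields):

```json
{
  "resources": [
    {
      "type": "aws_instance",
      "name": "webserver",
      "instances": [
        {
          "attributes": {
            "ami": "ami-0edab43b6fa892279",
            "instance_type": "t2.micro",
            "private_ip": "172.31.22.10",
            "key_name": "webserver-key"
          }
        }
      ]
    }
  ]
}
```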
and as you can see this information can be classified as sensitive information and as a result we need to
make sure that the state file is always stored in a secure storage so we have two kinds of files in our
configuration directory the terraform state file that stores state of the infrastructure and the terraform
configuration files that we use to provision and manage infrastructure when working as a team it is considered
a best practice to store terraform configuration files in distributed version control systems such as github
git lab or bitbucket however owing to the sensitive nature of the state file it is not recommended to
store them in git repositories instead store the state in remote backend systems such as aws s3 google
cloud storage azure storage terraform cloud etc we will see how to work with remote
state back-ends in a dedicated section of its own but for now it's important to make a note of these considerations
terraform state is a json data structure that is meant for internal use within terraform
we should never manually attempt to edit the state files ourselves however there would be situations where
we may want to make changes to the state file and in such cases we should rely on terraform state commands
we will cover these in a later section of the course until now we have seen quite a few
terraform commands in action such as the terraform init plan and apply
let us now take a look at some more commands available in terraform the first command we will take a look at
is the terraform validate command once we write our configuration file it's not necessary to run terraform plan
or apply to check if the syntax used is correct instead we can make use of the terraform
validate command like this and if everything is correct with the file we should see a successful
validation message like this if there's an error in the configuration file
the validate command will show you the line in the file that is causing the error with the hints to fix it
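for instance, a configuration like the following sketch would fail validation (assuming the local_file resource used in earlier examples):

```hcl
resource "local_file" "pet" {
  filename         = "/root/pets.txt"
  content          = "We love pets!"
  file_permissions = "0700" # error: the argument is file_permission, not file_permissions
}
```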
in this example we have used an incorrect argument for the local file resource
it should be file_permission and not file_permissions the next command that we are going to
see is the terraform fmt or the terraform format command this command scans the configuration
files in the current working directory and formats the code into a canonical format
this is a useful command to improve the readability of the terraform configuration file
when we run this command the files that are changed in the configuration directory are displayed on
the screen the terraform show command prints out the current state of the infrastructure
as seen by terraform in this example we have already created the local file resource
and when we run the show command it displays the current state of the resource including all the attributes
created by terraform for that resource such as the file name file and directory permissions content and id of the
resource additionally we can make use of the -json flag to print the contents in a
json format to see a list of all providers used in the configuration directory use the
terraform providers command you can also make use of the mirror sub command to copy provider plugins needed
for the current configuration to another directory like this this command will mirror the provider
configuration in a new path /root/terraform/new_local_file
we saw how to use terraform output variables in one of the previous lectures
if you want to print all output variables in the configuration directory use the command terraform output you can
also print the value of a specific variable by appending the name of the variable to the end of the output
command like this the terraform refresh command is used to sync terraform with the real world
infrastructure for example if there are any changes made to a resource created by terraform
outside its control such as a manual update the terraform refresh command will pick it up and update the state
file this reconciliation is useful to determine what action to take during the next apply
this command will not modify any infrastructure resource but it will modify the state file
as we saw earlier terraform refresh is also run automatically by commands such as terraform plan and terraform apply
and this is done prior to terraform generating an execution plan this can however be bypassed by using the
-refresh=false option with the commands the terraform graph command is used to
create a visual representation of the dependencies in a terraform configuration or an execution plan
in this example the local file in our main.tf file has a dependency on the random pet resource
this command can be run as soon as you have the configuration file ready even before you have initialized the
configuration directory with terraform init upon running the terraform graph command
you should see an output like this this text generated is hard to comprehend as it is but it is a graph
generated in a format called dot to make more sense of this graph we can pass it through a graph visualization
software such as graphviz and we can install it in ubuntu using apt like this
once installed we can pass the output of the terraform graph command to the dot command which we installed using the graphviz
package and generate a graphic like this we can now open this file via a browser and it should show a dependency graph
like this the root is the configuration directory where the configuration for this graph
is located we can see that there are two resources the local file called pet and the random
pet resource called my pet that make use of the local and the random provider respectively
and finally we can see that the local file called pet depends on the random pet resource called mypet as we have
used a reference expression in the local file resource that points to the id of the random pet
that's it for this lecture now let's head over to the hands-on labs and explore working with the terraform
commands that we learned in this lecture in this section we will learn about the difference between mutable and immutable
infrastructure in one of the previous lectures we saw that when terraform updates a resource
such as updating the permissions of a local file it first destroys it and then recreates
it with the new permission so why does terraform do that to understand this let us make use of a
simple example consider an application server running nginx with the version of 1.17
when a new version of nginx is released we upgrade the software running on this web server
first from 1.17 to 1.18 and eventually when a new version 1.19 is released we upgraded the same way
from 1.18 to 1.19 this can be done using a number of different ways
one simple approach is to download the desired version of nginx and then use it to manually upgrade the software
on the web server during a maintenance window of course we can also make use of tools
such as addox scripts or configuration management tools such as ansible to achieve this
for high availability instead of relying on one web server we can have a pool of these web servers all running the same
software and code we would have to use the same software upgrade lifecycle for each of these
servers using the same approach that we used for the first web server this type of update is known as in place
update and this is because the underlying infrastructure remains the same but the
software and the configuration on these servers are changed as part of the update
and this here is an example of a mutable infrastructure updating software on a system can be a
complex task and in almost all cases there are bound to be a set of dependencies that have to be met before
an upgrade can be carried out successfully let us assume that web server 1 and 2
have every dependency met while we try to upgrade the version from 1.18 to 1.19 and as a result these two servers are
upgraded without any issues web server 3 on the other hand is not the upgrade fails on web server 3
because it has a few dependencies that are not met and as a result it remains at version 1.18.
the failure in upgrade could be because of a number of reasons such as network issues impacting the connectivity to the
software repository file system full or different version of operating system running on web server 3
as compared to the other two however the important thing here to note is that
we now have a pool of three web servers in which one of these servers is running a different version of software as
compared to the rest over time with multiple updates and changes to this pool of servers it is a
possibility that each of these servers vary from one another be it in software configuration or
operating system etc this is known as a configuration drift for example after a few update windows
our three web servers could look like this web server 1 and 2 have nginx version of
1.19 and web server 3 has a version of 1.18 and all three web servers may also be
running slightly different versions of operating system on them this configuration drift can leave the
infrastructure in a complex state making it difficult to plan and carry out subsequent updates
troubleshooting issues would also be a difficult task as each server would behave slightly differently from the
other because of this configuration drift instead of updating the software
versions on the web servers we can spin up new web servers with the updated software version and then delete the old
web server so when we want to update nginx 1.17 to 1.18 a new server is provisioned with
1.18 version of nginx if the update goes through then the old web server is deleted this is known as
immutable infrastructure immutable means unchanged or something that you cannot change
as a consequence with immutable infrastructure we cannot carry out in-place updates of the resources
anymore this doesn't mean that updating web servers this way will not lead to
failures if the upgrade fails for any reason the old web server will be left intact and
the failed server will be removed as a result we do not leave much room for configuration drift to occur between
our servers ensuring that it is left in a simple easy to understand state and since we are working with
infrastructure as code immutability makes it easier to version the infrastructure and to roll back and roll
forward between versions terraform as an infrastructure provisioning tool uses this approach
going back to our example updating the resource block for our local file resource and changing the
permission from 777 to 700 will result in the original file being deleted and a new file being created with the updated
permission by default teform destroys the resource first before creating a new one in its
place but what if we want the resource to be created first before the old one is
deleted or to ignore deletion completely how do we do that these can be done by making use of
lifecycle rules in our resource block and we'll see how to do that next in this section we will learn how to set
up lifecycle rules in terraform previously we saw that when terraform updates a resource it treats the
infrastructure as immutable and first deletes the resource before creating a new one with the updated configuration
for example if we update the file permissions on our local file resource from 777 to 700
and then run terraform apply you would see that the older file is deleted first and then the new file is created
now this may not be a desirable approach in all cases and sometimes you may want the updated
version of the resource to be created first before the older one is deleted or you may not want the resource to be
deleted at all even if there was a change made in its local configuration this can be achieved in terraform by
making use of life cycle rules these rules make use of the same block syntax that we have seen many times so
far and they go directly inside the resource block whose behavior we want to change
the syntax of a resource block with the lifecycle rule looks like this inside the lifecycle block we add the
rule which we want terraform to adhere to while updating resources and one such argument or a rule is the
create before destroy rule here we have the same resource block that has been updated with the lifecycle
rule of create before destroy set to true this rule ensures that when a change in
configuration forces the resource to be recreated a new resource is created first before deleting the old one
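a sketch of a resource block using this rule, based on the local file example from earlier lectures:

```hcl
resource "local_file" "pet" {
  filename        = "/root/pets.txt"
  content         = "We love pets!"
  file_permission = "0700"

  lifecycle {
    # create the replacement file before destroying the old one
    create_before_destroy = true
  }
}
```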
now there would be cases where we do not want a resource to be destroyed for any reason
for this we can make use of the prevent destroy option when it is set to true terraform will
reject any changes that will result in the resource getting destroyed and display an error message like this
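a sketch of the same resource block with this rule applied:

```hcl
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "We love pets!"

  lifecycle {
    # any plan that would destroy this resource is rejected with an error
    prevent_destroy = true
  }
}
```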
this is especially useful to prevent your resources from getting accidentally deleted
example a database resource such as mysql or postgresql may not be something that we want to delete once it's
provisioned one important thing to note here is that the resource can still be destroyed if
you make use of the terraform destroy command this rule will only prevent resource
deletion from changes that are made to the configuration and a subsequent apply the last argument type that we are going
to see here is the ignore changes rule this life cycle rule when applied will prevent a resource from being updated
based on a list of attributes that we define within the life cycle block to understand this better let's make use
of a sample ec2 instance which is a virtual machine on the aws cloud this ec2 instance is to be used as a web
server and can be created with a simple resource block like this don't worry if this resource block
and the arguments look unfamiliar we will cover it in much more detail in the ec2 section of the course
but for now please note that the resource called web server makes use of three arguments the
ami and the instance type are used to deploy a specific type of vm with a predefined specification
in this case the values that we have chosen deploys a ubuntu server with one cpu and one gb of ram
we are also making use of a tag called name which has a value of project a web
server using the tags argument which is of type map when we run terraform apply the ec2
instance is created as expected with the tag called name and a value of project a web server
now if changes are made to any of these arguments terraform will attempt to fix it during the next apply as expected
for example if we modify the tag called name and change its value from project a web server to say project b web server
either manually or using any other tool terraform will detect this change and it will attempt to change it back to what
it was originally which is project a web server in some rare cases we may actually want
the change in the name made by any other method to be acceptable and we want to prevent terraform from reverting back to
the old tag to do this we can make use of the lifecycle block with ignore changes
argument like this the ignore changes argument accepts a list as indicated by the square brackets
and will accept any valid resource attribute in this particular case we have asked
terraform to ignore changes which are made to the tags attribute of the specific ec2 instance
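putting this together, the resource block with the lifecycle rule might look like this (the ami id is illustrative):

```hcl
resource "aws_instance" "webserver" {
  ami           = "ami-0edab43b6fa892279" # illustrative ami id
  instance_type = "t2.micro"

  tags = {
    Name = "ProjectA-Webserver"
  }

  lifecycle {
    # changes made to tags outside terraform are not reverted
    ignore_changes = [tags]
  }
}
```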
if a change is made to the tags a subsequent terraform apply should now show that there are no changes to apply
the change made to the tags of a server outside of terraform is now completely ignored
and since it's a list we can update more elements like this you can also replace the list with the
all keyword and this is especially useful if you do not want the resource to be modified for
changes in any resource attributes we will learn more about lifecycle rules later in the course when we work with
aws resources but for now here is a quick summary of the three argument types that we have seen this lecture
and now let's head over to the hands-on labs and practice working with lifecycle rules in terraform
in this section we will take a look at data sources in terraform we know by now that terraform makes use
of configuration files along with the state file to provision infrastructure resources
but as we saw earlier terraform is just one of the infrastructure as code tools that can be used for provisioning
infrastructure can be provisioned using other tools such as puppet cloud formation salt stack ansible etc
not to mention ad hoc scripts and manually provision infrastructure or even resources that are created by
terraform from another configuration directory for instance let us assume that a
database instance was provisioned manually in the aws cloud although terraform does not manage this
resource it can read attributes such as the database name host address or the db user and use it to provision an
application resource that is managed by terraform let's take a simpler example
we have a local file resource called pet created with the contents we love pets once this resource is provisioned the
file is created in the slash root directory and the information about this file is also stored in the terraform
state file now let's create a new file using a simple shell script like this
quite evidently this file is outside the control and management of terraform at this point in time
the local file resource that terraform is in charge of is pets.txt under the root directory and has no relationship
with the local file called docs.txt which is also created under the slash root directory
the docs.txt has a single line that says dogs are awesome we would like terraform to use this file
as a data source and use its data as contents of our existing file called pets.txt
if we want to make use of the attributes of this new file that is created by the bash script we can make use of data
sources data sources allow terraform to read attributes from resources which are
provisioned outside its control for example to read the attributes from the local file called docs.txt we can
define a data block within the configuration file like this as you may have noticed the data block is quite
similar to the resource block instead of the keyword called resource we define a data source block with the
keyword called data this is followed by the type of resource which we are trying to read
in this example it's a local file this can be any valid resource type for any provider supported by terraform
next comes the logical resource name into which the attributes for a resource will be read
within the block we have arguments just like we have in a normal resource block the data source block consists of
specific arguments for a data source and to know which argument is expected we can look up the provider documentation
in the terraform registry for the local file data source we just have one argument that should be used
which is the file name to be read the data read from a data source is then available under the data object in
terraform so to use this data in the resource called pet we can simply use
data.local_file.docs.content these details are of course available in the terraform documentation under data
sources within the documentation and under the attributes exported we can see that the
data source for a local file exposes two attributes the content and the base64 encoded version of the content
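putting the pieces together, a sketch of the configuration might look like this (the logical name docs is an assumption for illustration):

```hcl
# managed resource: its content is read from the data source below
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = data.local_file.docs.content
}

# data source: reads a file created outside terraform's control
data "local_file" "docs" {
  filename = "/root/docs.txt"
}
```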
to distinguish between a resource and data sources let's do a quick comparison resources are created with the resource
block and data sources are created with the data block resources in terraform are used to
create update and destroy infrastructure whereas a data source is used to read information from a specific resource
regular resources are also called managed resources as their life cycle is managed by terraform
data sources are also called data resources that's it for this lecture
now let's head over to the hands-on labs and practice working with data sources in terraform
in this lecture we will take a look at meta arguments in terraform until now we have been able to create
single resources such as a local file and a random pet resource using terraform
but what if you want to create multiple instances of the same resource
say three local files for example if you were using a shell script or some other programming language we could
create multiple files like this in this example we have created a bash script called
createfiles.sh which uses a for loop to create empty files inside the root directory
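a minimal sketch of such a script (the lecture creates the files under the root directory; the current directory is used here for portability):

```shell
#!/bin/bash
# create three empty files called pet1, pet2, and pet3
for i in 1 2 3
do
  touch "pet$i"
done
```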
the files will be called pet followed by the range from 1 to 3. while we cannot use the same script as
it is within the resource block terraform offers several alternatives to achieve the same goal
these can be done by making use of specific meta arguments in terraform meta arguments can be used within any
resource block to change the behavior of resources we've already seen two types of meta
arguments already in this course the depends_on argument for defining explicit dependencies between resources and the
life cycle rules which define how the resources should be created updated and destroyed within terraform
now let us look at some more meta arguments specifically related to loops in terraform
in this lecture we will take a look at the for each meta argument and its uses in terraform
in the previous lecture we saw that when we use count the resources are created as a list
and this can have undesirable results when updating them one way to overcome this is to make use
of the for each argument instead of count like this
next to set the value of file name to each element in the list we can make use of the expression each.value like
this however there's a catch if we run terraform plan now we will see an error
the for each argument only works with a map or a set in the variables.tf file we are
currently making use of a list containing string elements there are a couple of ways to fix this
either change the variable called file name to the type set in the variables lecture we learned that a
set is similar to a list but it cannot contain duplicate elements once we change the variable type and
then run terraform plan we should see that there are three files to be created
another way to fix this error while retaining the variable type as a list is to make use of another built-in function
this time we'll make use of the toset function which will convert the variable from a list to a set
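as a sketch, the variable and resource blocks described in this lecture might look like this (the variable name filename and the default file paths are assumptions for illustration):

```hcl
variable "filename" {
  type = list(string)
  default = [
    "/root/pets.txt",
    "/root/dogs.txt",
    "/root/cats.txt",
  ]
}

resource "local_file" "pet" {
  # toset converts the list to a set so it can be used with for_each
  for_each = toset(var.filename)
  filename = each.value
  content  = "We love pets!"
}
```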
once this is done terraform plan command should now work as expected now let us replicate the same scenario
as the one we did earlier with the count meta argument and delete the first element with the value
/root/pets.txt from the list when we run terraform plan now we can see that only one resource is set to be
destroyed the file with the name pets.txt the other resources will be untouched
to see how this is working let us create an output variable called pets to print the resource details like we did with
the example using count from the terraform output command we can now see that the resources are stored as
a map and not a list when we use for each instead of count the resources are created as a map and
not a list this means that the resources are no longer identified by the index thereby
bypassing the issues that we saw when we use count they are now identified by the keys
which are the file names from the list used by the for each argument
in the configuration file if we compare this to the earlier output that we got when we used count we can
see the difference in how the resources are created the count option created it as a list
and for each created it as a map there are a few other meta arguments in terraform such as provisioners
providers back-ends etc we will see that later in this course now let's head over to the hands-on labs
and practice working with the for each meta-argument in terraform in this lecture we will see how to make
use of specific provider versions in terraform we saw earlier that providers use a
plugin based architecture and that most of the popular ones are available in the public terraform registry
without additional configuration the terraform init command downloads the latest version of the provider plugins
that are needed by the configuration files however this is not something that we
may desire every time the functionality of a provider plugin may vary drastically from one version to
another our terraform configuration may not work as expected when using a version
different than the one it was written in fortunately we can make sure that a specific version of the provider is used
by terraform when we run the terraform init command the instructions to use a specific
version of a provider are available in the provider documentation in the registry
For example, if we look up the local provider within the registry, the default version is 2.0.0, which is also the latest version as of this recording. To use a different version, click on the version tab, and it should open a drop-down with all the older versions of the provider. Let us select version 1.4.0. To use this specific version of the local provider, click on the "Use Provider" tab on the right. This should open up a code block that we can copy and paste into our configuration.

Here we are making use of a new block called terraform, which is used to configure settings related to Terraform itself. To make use of a specific version of the provider, we need to use another block called required_providers inside the terraform block. Inside the required_providers block, we can have one argument for every provider that we want to use. In this example, we have one argument with the key local for the local provider. The value for this argument is an object with the source address of the provider and the exact version that we want to install, which in this case is 1.4.0. With the terraform block configured to use version 1.4.0 of the local provider, when we run terraform init we should see a message like this.
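As a sketch, the block copied from the registry looks roughly like this. The source address hashicorp/local comes from the registry page; the local_file resource below it is a hypothetical example added here to show the provider in use, with placeholder filename and content values.

```hcl
terraform {
  required_providers {
    local = {
      # Source address of the provider in the public registry
      source = "hashicorp/local"
      # Pin this exact version instead of the latest
      version = "1.4.0"
    }
  }
}

# Hypothetical resource using the pinned local provider
resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "We love pets!"
}
```

Running terraform init with this configuration downloads version 1.4.0 of the plugin rather than the latest release.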
Before we wrap up this lecture, let us look at the syntax used to define version constraints in Terraform. In the configuration for the local provider, we have specified version = "1.4.0". This allows Terraform to find and download this exact version of the local provider. However, there are other ways to use the version constraint. If we use the not-equal-to symbol instead, Terraform will ensure that this specific version is not downloaded. In this case, we have specifically asked Terraform not to use version 2.0.0, so it downloads the previous version available, which is 1.4.0. If we want Terraform to make use of a version lower than a given version, we can do that by making use of comparison operators like this, and to make use of a version greater than a specific version, we can use the greater-than operator like this.
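The constraint operators described above can be sketched as alternative values for the version argument; the specific version numbers in the commented lines are illustrative, chosen to match the local provider releases mentioned in this lecture.

```hcl
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"

      # Exclude a specific version: anything except 2.0.0
      version = "!= 2.0.0"

      # Other forms (one constraint at a time; values are illustrative):
      # version = "< 2.0.0"   # any version lower than 2.0.0
      # version = "> 1.1.0"   # any version greater than 1.1.0
    }
  }
}
```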
We can also combine the comparison operators like this to make use of a version within a range. In this example, we want any version greater than 1.2.0 but lower than 2.0.0, and also not version 1.4.0 specifically. As a result, Terraform downloads version 1.3.0, which is acceptable in this case.

Finally, we can also make use of pessimistic constraint operators. These are defined by making use of the tilde greater-than symbol, like this. This operator allows Terraform to download the specified version or any available incremental version based on the value we provide. For example, here we have given the value 1.2 following the tilde greater-than symbol. This means that Terraform can download version 1.2 or incremental versions such as 1.3, 1.4, 1.5, all the way up to 1.9. However, if we look at the provider documentation, we do not have a version 1.5 or anything above it; the maximum version that we can make use of in this case is 1.4.0, and this is the version that is downloaded when we run terraform init.
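The combined range and the pessimistic constraint can be sketched like this (the commented combined constraint reproduces the range from the example above):

```hcl
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"

      # Combined constraints: greater than 1.2.0, lower than 2.0.0,
      # but not 1.4.0 — Terraform picks 1.3.0 here.
      # version = "> 1.2.0, < 2.0.0, != 1.4.0"

      # Pessimistic constraint: 1.2 or any higher 1.x release
      # (1.3, 1.4, ...), but nothing from 2.0 onward.
      version = "~> 1.2"
    }
  }
}
```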
Let us use another value, this time 1.2.0, with the same pessimistic constraint operator. This time, Terraform can download version 1.2.0, or version 1.2.1, or version 1.2.2, all the way up to 1.2.9. Again, we only have a maximum version of 1.2.2 in the registry, and that is the version that will be downloaded when we run terraform init.

Well, that's all for now. At KodeKloud, we have multiple learning paths curated just for you, to help you go from a beginner to an expert in various DevOps technologies, so don't forget to check them out. And don't forget to subscribe to this channel, as we publish new courses very often.
Heads up! This summary and transcript were automatically generated using AI with the Free YouTube Transcript Summary Tool by LunaNotes.