Overview of Cloud Computing
- Definition of cloud computing as the delivery of computing services over the internet.
- Importance of cloud computing in modern technology and its growing demand in the job market.
Real-World Applications of Cloud Computing
- Examples of cloud computing in daily life: Google Drive, Netflix, etc.
- Benefits of using cloud services for businesses, including cost-effectiveness and scalability.
Case Study: Ravi's E-commerce Project
- Ravi's challenges with local infrastructure and how cloud computing provides solutions.
- Comparison of on-premise vs. cloud computing for storage, compute power, networking, testing, DevOps, and scalability.
Cloud Deployment Models
- Public Cloud: Shared infrastructure for general use.
- Private Cloud: Exclusive infrastructure for a single organization.
- Hybrid Cloud: Combination of public and private clouds for flexibility.
AWS Services
- Introduction to AWS and its key services like EC2 and S3.
- Explanation of EC2 instances and their configurations.
- Overview of S3 buckets and their role in data storage.
Security and Access Management
- Importance of IAM (Identity and Access Management) in cloud security.
- Explanation of roles, policies, and permissions in AWS.
Hands-On Demonstration
- Step-by-step guide on creating EC2 instances and S3 buckets.
- Instructions on setting up security groups and IAM roles.
- Demonstration of uploading files to S3 and managing access permissions.
Conclusion
- Recap of the benefits of cloud computing and its relevance in today's tech landscape.
- Encouragement to explore cloud computing as a career path with available resources and courses.
Ever had that moment when your phone pops up a "storage almost full" warning and you're like, "Oh, not again"? So what do you do? You start uploading photos to Google Drive or Google Photos to clear up space, right? Or let's say you're watching a movie on Netflix or Prime Video. No downloads, no waiting time, everything streams smoothly even in HD. Have you ever wondered how all of that works so seamlessly? Well, the answer behind all of this is something called cloud computing. Cloud computing is the delivery of computing services like storage, servers, and software over the internet instead of your local device. You're not really storing those movies or photos on your phone. They are actually sitting on powerful computers somewhere far away, and you access them through the internet. It's like renting space on someone else's computer to store your data or run your app. And guess what? In 2025, there is already a huge demand for this technology. Almost 90% of companies in the world consider themselves cloud-first, and so there are lakhs of open cloud computing jobs, but the supply of skilled cloud engineers is still very limited. This course is designed to give you the skills and framework to crack cloud computing jobs and achieve your career objectives. We will take you from the very basics of cloud computing to hands-on work on AWS. So if you're a student, job seeker, or IT professional interested in cloud, don't skip. Watch this video till the end and take your first step into the future of tech, right here on Intellipaat's YouTube channel, absolutely free. Let's start
by understanding cloud computing in simple terms. Meet Ravi. He's the founder of a small software company based in Bangalore. His team builds web apps for clients. They have just landed a big project: an e-commerce site for a global client. This sounds very exciting, right? But also scary, because Ravi's local infrastructure is very limited. He doesn't have big servers, high-end GPUs, or a fancy data center. And building one would take lakhs of rupees, months of setup, and constant maintenance. And still there would be the risk of cyber attacks and natural disasters, which could cause heavy losses for Ravi's company. So what can Ravi do at this moment? He came to know about cloud computing. So what is this cloud computing? Is it a data center sitting up in the clouds? No. In simple words, cloud computing means you don't need to own the physical hardware. You rent things like storage, servers, and networking over the internet, and you only pay for what you use. And since all these resources sit virtually on the internet and are accessed through the internet, this way of accessing resources is termed cloud computing. Simply put, it is computing that happens over the internet. Let's break down how cloud computing can help Ravi with his e-commerce project. We will compare what he would need to do for an on-premise setup versus what he can do on the cloud, for any software development project.
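To make the "pay only for what you use" idea concrete, here is a small illustrative Python sketch. The prices are hypothetical, not real AWS rates; it simply contrasts a large upfront on-premise spend with hourly pay-as-you-go billing:

```python
# Illustrative only: hypothetical prices, not real AWS or hardware rates.

def on_premise_cost(hardware_cost: int, monthly_upkeep: int, months: int) -> int:
    """Upfront purchase plus ongoing maintenance, paid whether or not it's used."""
    return hardware_cost + monthly_upkeep * months

def cloud_cost(hourly_rate: int, hours_used: int) -> int:
    """Pay-as-you-go: billed only for the hours the server actually runs."""
    return hourly_rate * hours_used

# A server used ~6 hours a day for a year (made-up numbers).
hours = 6 * 365
print(on_premise_cost(hardware_cost=500_000, monthly_upkeep=10_000, months=12))  # → 620000
print(cloud_cost(hourly_rate=40, hours_used=hours))                              # → 87600
```

The exact numbers don't matter; the point is that the on-premise figure is paid up front regardless of usage, while the cloud figure scales directly with usage.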
Even before Ravi's team writes a single line of code for the app, they need to set up a few essentials. For example: a server to run the back end; storage for user data, images, and transactions; a secure way to connect components; a place to test features safely; tools for automated deployment; daily backups for recovery; and the ability to handle sudden spikes in users. So let's take these attributes one by one and see how cloud computing makes Ravi's job ten times easier than going with physical on-premise infrastructure. So first up is storage.
This is where all the app's data lives. Ravi's app lets users upload profile pictures, store order history, save payment records, and maintain session data. If all this had to be stored on-premise, he would need to buy and configure physical servers, attach hard drives, set up cooling systems, and run backup routines, all while praying the business doesn't have a power outage. With cloud computing, Ravi just configures a cloud storage service such as AWS S3, and that's it. It's encrypted, backed up, and instantly available. Next up is compute power, the brain behind the app. Whenever a user logs in, updates their cart, or streams a product video, Ravi's backend has to process those actions fast. On-premise, Ravi would have to buy physical servers, install operating systems, keep them on 24/7, and upgrade them whenever traffic increases, meaning he would have to buy more hardware. On the contrary, with cloud computing, Ravi spins up virtual servers on demand using AWS EC2. He pays only for what he uses, and when the traffic dips, he shuts them down with zero waste. Third on the list is networking. Ravi's app's front end needs to securely talk to the back end using APIs. If Ravi were to implement network security in an on-premise setup, he would have to manually set up routers, install firewalls, open only the required ports, and hire someone to monitor for threats. On the cloud, he just has to configure a VPC, a secure, isolated network where only specific services can communicate with each other. The traffic is encrypted and everything is logged, so Ravi doesn't have to worry about threats.
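The "open only the required ports" idea behind firewalls and cloud security groups can be sketched in a few lines of Python. This is a conceptual toy, not how a VPC or security group is actually configured:

```python
# Conceptual toy model of security-group-style port filtering.

ALLOWED_INBOUND_PORTS = {443, 22}  # e.g. HTTPS for users, SSH for admins

def is_allowed(port: int) -> bool:
    """Deny by default: traffic passes only on explicitly opened ports."""
    return port in ALLOWED_INBOUND_PORTS

print(is_allowed(443))   # → True  (HTTPS reaches the app)
print(is_allowed(3306))  # → False (the database port stays closed to the internet)
```

The key design choice mirrored here is "deny by default": anything not explicitly opened is blocked.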
Next up is testing. Let's say Ravi has built the app and it's live, but his client wants him to add a new refer-a-friend feature. To do this on-premise, Ravi would need to replicate the entire production environment. That means more servers, cloning the database, increasing the cost, plus the risk of accidental breakages. But on the cloud, Ravi can just create a staging server with a click. The new feature is tested without touching the live app. Once it is approved, it goes live instantly as a new update. No drama at all. Fifth point is DevOps and automation. Let's say Ravi's development team commits code every single day. In an on-premise setup, every change would mean a specialized DevOps engineer sitting there manually running builds, tests, and bash scripts, and deploying the app, which is slow and, again, error-prone. But relying on the cloud, Ravi simply integrates GitHub with AWS CodePipeline. Now every code push is auto-tested, auto-deployed, and monitored in real time. His team focuses on coding, not babysitting the servers. Sixth point is scalability. One day this
e-commerce app goes viral. Suddenly 5,000 users become one lakh. If this happens on-premise, there is a possibility that Ravi's fixed servers might choke, leading to an app breakdown and the client's disappointment. But on the cloud, Ravi can just enable autoscaling. When the traffic rises, more servers spin up automatically. When it falls, they shut down to save cost. A smooth experience for customers, no panic for Ravi, and a happy client. So you see, at every step cloud computing proved to be better than on-premise. That's why almost 96% of companies today call themselves cloud-first and are using cloud services one way or the other. Building an app like an e-commerce site without cloud computing today is like trying to run a race barefoot on broken glass. Cloud computing is cost-effective since it uses a pay-as-you-go model and provides services with just a few clicks whenever you need them. It's fast, it's flexible, and it is the backbone of modern software development. Moving on, let me walk you
through the different deployment models that cloud computing providers like AWS, Azure, or GCP offer. First up is the public cloud. You can think of it like taking a bus: you don't own it, and you share it with others. Public clouds like AWS, Azure, and GCP provide infrastructure over the internet for anyone to use. Most startups and developers prefer this. It's affordable and easy to get started with. Then we have the private cloud. Think of a private car, built and used by one company only. Private clouds are used by big enterprises that need full control, custom compliance, or a specific security setup. More expensive, but fully owned and operated by one company. Finally, we have the hybrid cloud, a mix of both public and private clouds. You use the public cloud for general workloads and the private cloud for sensitive ones. This is flexible, cost-effective, and popular among growing companies. So there you have it. You just understood the different types of deployment models and how cloud computing is better than an on-premise setup. You discovered how it's cost-effective and easy to access and manage. You won't wonder now why Ravi chose cloud computing. By the way, if you're interested in pursuing cloud computing or DevOps as a career option, we can help you learn more with our strong curriculum-driven course. You can check out the link in the description to know more. So what is cloud computing? Is it
something like computing over the clouds? Well, not exactly. Cloud computing simply means that you're using computing resources, be it storage, servers, or applications, that are already present over the internet, and you're accessing them all without owning any physical hardware. You don't need separate hardware to access Netflix or YouTube; you can access them from anywhere and from any device. You just need your login ID and password. That is the cloud in action. Now let's understand it with a simple real-life example. Suppose you have thousands of photos and videos to store somewhere. You could purchase a hard drive and store them there. But what if the hard drive crashes or gets lost? And you'd have to carry the hard drive everywhere with you to access those photos and videos. Instead, you can just put all those photos and videos into Google Drive, which might already be on your phone. That makes everything so simple: you can access them from anywhere, with no worries about losing them or needing another device. So that is the cloud in action. Cloud computing simply means that you're accessing computing resources, be it storage, server databases, or apps, without owning physical hardware; as long as you have the internet, you can access all those things on your laptop or whatever device you're using. Let's take a quick review of where you are already using the cloud in your day-to-day life without even realizing it. You might be using Google Drive to store your photos online, or Gmail to send and receive emails; all of that runs on Google's hosted servers. Or you're using Netflix or YouTube to stream videos or films; they are all already sitting on servers, and you just access them from your devices. Or you're using Instagram to share photos or scroll reels; that is all done using cloud computing technology. Similarly, Zoom video calls are handled via cloud services. Now that we have understood what the cloud actually is, with real-life examples too, let's just
understand why we actually need the cloud. Suppose you're going to a new city and you want to roam around and travel. You have two options: you can rent a car or you can own one. But what are the disadvantages of owning a car? If you buy a car, you have to pay a lot of money upfront, of course, and then you have to pay for insurance, maintenance, fuel, and repairs; that is the second disadvantage. And, very importantly, you might not even use it on a daily basis, but you're still paying for it when it's not in use. From this real-life example we can see that if you're not using it regularly, you can just take an Uber instead; you can rent the service. With this, you have no maintenance and no upfront cost, you only pay when you use it, and you can upgrade to a better version whenever you want.
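The own-versus-rent trade-off can be framed as a simple break-even question: below how many hours of use per month is renting cheaper than owning? A small Python sketch with hypothetical numbers (not real car or cloud prices):

```python
# Hypothetical numbers purely for illustration of the break-even idea.

OWN_MONTHLY_COST = 15_000   # EMI + insurance + maintenance, paid regardless of use
RENT_COST_PER_HOUR = 300    # pay only when you actually ride

def cheaper_option(hours_per_month: int) -> str:
    """Return which option costs less for a given usage level."""
    rent_total = RENT_COST_PER_HOUR * hours_per_month
    return "rent" if rent_total < OWN_MONTHLY_COST else "own"

print(cheaper_option(10))   # light user: 10 * 300 = 3,000 < 15,000, so renting wins
print(cheaper_option(100))  # heavy user: 100 * 300 = 30,000 > 15,000, so owning wins
```

The same arithmetic applies to servers: light or bursty usage favors renting from the cloud, while constant, saturated usage can tilt the other way.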
The cloud gives you the same advantages: you don't need to buy expensive servers and manage them. You just rent computing resources whenever you need them and stop paying for what you don't. So it's just like Uber: flexible, cost-efficient, and stress-free. Now that we know how much easier our lives are made by cloud technology, I'd like to tell you that back in the day we didn't have it. We used to sit in rooms filled with physical servers, where we had to manage each physical server even when it wasn't in use, with all the disadvantages that come with that. But we didn't jump straight from traditional IT to cloud computing. There was one more important step in between, called virtualization, which is a very important topic in our cloud computing syllabus. So let's understand each one of them
step by step, and how we arrived at cloud computing, which is today's world scenario. To explain traditional IT, or the physical servers we had back in the day, just imagine yourself sitting in a room filled with physical servers, having to manage each one of them individually. So what were the problems back then? First, a high upfront cost: with so many physical servers in your room, you of course had to pay a very hefty amount. The second disadvantage was wasted resources: when you're running so much hardware and some of it isn't even in use, that's a complete waste of resources. Then there's manual maintenance: you had to manage and maintain each one of the servers, which again meant hefty charges. And finally, it was hard to scale: if you wanted to move those servers to another room or another place, you had to carry each one of them individually. So it was very hard to scale such a setup. To eradicate these problems of traditional IT, we came up with something called virtualization. Now let's understand what virtualization is. See,
virtualization is essentially smarter hardware use. Why? Because back in the day you had physical servers, each with its own OS, and you needed one individual server per operating system. With virtualization, you can split one physical machine into many virtual ones. Let's understand what that means using a simple diagram. Looking at the diagram, you can see what actually happens inside virtualization. First of all, there is the physical hardware: your CPU, memory, storage, and network. Just above that sits a virtualization layer containing the hypervisor. This is the component that splits the physical hardware into virtual machines, each of which you can operate individually, just as shown here. You can understand it with a simple example. If you work in a tech domain, you may have one laptop running the Windows operating system, but you can also run another operating system, like Linux, on the same machine. That is virtualization at work: two different operating systems on one device, and you can access both of them. So that is a real-life example of virtualization. To summarize: virtualization is the process of creating a virtual version of a server, desktop, storage device, or even an entire operating system. It allows one physical system to run multiple virtual machines, each behaving as if it were a separate system.
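As a toy illustration of the hypervisor idea, splitting one physical machine's resources among virtual machines, here's a hedged Python sketch; real hypervisors like KVM or VMware are of course far more involved:

```python
# Toy model: a "hypervisor" handing out slices of one physical host.

class Host:
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.vms = []

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> bool:
        """Allocate a VM only if the physical host still has capacity."""
        if cpus <= self.free_cpus and ram_gb <= self.free_ram_gb:
            self.free_cpus -= cpus
            self.free_ram_gb -= ram_gb
            self.vms.append(name)
            return True
        return False  # host is full: the very limit cloud providers solve at scale

host = Host(cpus=16, ram_gb=64)
print(host.create_vm("windows-vm", cpus=8, ram_gb=32))  # → True
print(host.create_vm("linux-vm", cpus=8, ram_gb=32))    # → True
print(host.create_vm("extra-vm", cpus=4, ram_gb=16))    # → False (no capacity left)
```

That final `False` is exactly the limitation discussed next: virtualization shares one machine well, but you still own (and can exhaust) that machine.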
When you're using both Windows and Linux on a single device, what's happening? You can use each of them individually, as they have different software dependencies, yet the one piece of hardware runs them both, all because of virtualization. Now, the benefits: of course there is better resource utilization, because you need just one physical machine to run many operating systems. It is easily manageable, as simple as that, and the setup is faster than with physical servers; instead of filling a room with different physical servers, each with its own maintenance and operating system, you can have one physical server, and maintenance is quite low. But even with virtualization, there was still one problem: you still needed to own physical hardware to run those virtual machines. So the next step in the technology was cloud computing, which is what we work with today. Cloud computing simply means anywhere, anytime, and pay as you go. Why is that? Because, for example,
you might be using AWS, Azure, or GCP; some of you may be well aware of them. When you use those platforms, you are just renting servers, renting computing power, over the internet for your own purposes. In AWS, when you create instances, which we will be doing later in these videos, you just launch an instance and pay charges only for as long as you use it. So that is the cloud: pay as you go. Whatever your budget, you can use computing resources accordingly, anywhere and anytime; you just need an ID and password, and any device will work, any device is compatible. Now that we have looked into the picture deeply, let's
just compare each stage step by step: how we went from owning all the physical servers, to sharing virtual servers when we added virtualization, and then to the complete rental system of the cloud. Back in the day, with traditional IT, you had to buy physical servers, which, as we've discussed, was expensive and hard to scale at the same time. To eradicate this problem, we came up with virtualization: one device split into multiple virtual ones, running multiple virtual machines on a single piece of hardware. It was quite flexible for its day, but the limitation was that you still needed the hardware; one physical machine was still required. So we came up with something even simpler: you don't need physical hardware at all. You can just rent things and access them as much as you want. So what is the cloud? You just rent services from the cloud providers and use them as much as you want, according to your budget. It is the most efficient, and the modern approach
is cloud only. Now that we have understood what cloud computing is, why it's so important, and how we moved from owning the complete infrastructure, to sharing through virtualization, to renting through cloud computing, let's dig into the core characteristics of the cloud, what makes it so powerful and so popular among us. You can use a simple acronym to remember the characteristics and capabilities we have in cloud computing. First of all, we have O, which
stands for on-demand self-service. What does on-demand service mean? Think of something like Zomato: whenever you feel like ordering something, you just launch the app from your phone and order food. It's as simple as that. Likewise, in the cloud you can launch a server whenever you want; that is the on-demand self-service characteristic. Next we have broad network access. What does broad network access actually mean? When you're using Instagram, you don't need to carry one particular device. You can access it from any device you want; you just need your email ID and password. That's what I've been saying from the beginning: broad network access means you can upload files, access files, or scroll reels, whatever you want, from any device. You just need your identity, your username and password. The next thing we have is called resource pooling. What does it mean? The cloud provider pools and shares storage, memory, and CPU among many users. You can understand it like electricity or broadband being shared among a lot of people in a colony or a building. That is what resource pooling means. Next we have elasticity. Elasticity simply means that capacity can expand or shrink, like a balloon, on its own. Suppose many of us are hitting a single server and it gets a lot of traffic. The cloud automatically adds the necessary servers, and once the traffic subsides, it automatically removes the servers that are no longer in use. This is a very important and very impressive quality of the cloud. Next we have pay as you go. Suppose you are using just 10 GB of storage; then you're paying for that 10 GB only. It's not like you're paying for 100 GB while using 10 GB. That's what pay as you go means: you pay only for the resources you use and for how long you use them. Now
that we have understood this, let's map each characteristic to a simple real-life analogy showing how we use it in our day-to-day lives. First, on-demand self-service: as I just said, it's like ordering from an app like Zomato or Swiggy; whenever you have the demand, the service is in front of you. So the real-life analogy is ordering food from an app. For broad network access, just like Instagram, take Netflix: everyone streams on the Netflix platform, and you can access it from your TV or from any other device, wherever you're comfortable. That is what broad network access means. For elasticity, just picture a balloon shrinking or expanding on its own according to demand; that's what elasticity means. For resource pooling, it's like sharing electricity across a colony, or like a hotel's rooms: lots of people use the same pool of resources. It's just like carpooling, many people heading in the same direction sharing one resource. And the next characteristic, measured service, is simply pay as you go: you pay your electricity or water bill for exactly as much as you have used. So when we create an EC2 instance later in the video, we pay only for the duration we use it. That is another example of this characteristic. Now that we have understood the core characteristics and all the benefits that come with cloud computing, let's understand the cloud service models, then the cloud deployment models, and the key service providers in cloud computing. Now, what are
the cloud service models? They're categorized simply by what they offer to the customers, the people using them. Cloud deployment models, meanwhile, categorize how the cloud is deployed over the internet. And the service providers are the very famous AWS, Azure, and GCP, which are quite popular among all of us. Let's start with the next section, cloud service models. As I said, cloud service models are categorized by what they offer to you, to me, and to whoever is using the technology. There are three main types of cloud service models: IaaS, PaaS, and SaaS. What do they mean? IaaS is Infrastructure as a Service, PaaS is Platform as a Service, and SaaS is Software as a Service. Let's understand all these service models with the help of a simple example.
We have a popular way of understanding these three terms, called the pizza analogy. First of all, we have IaaS, Infrastructure as a Service. Suppose you want a pizza and you want to customize it yourself. With IaaS, you rent the entire infrastructure: an entire kitchen, all the utensils, everything in the pantry, all the things you'll use, just to make yourself a pizza. With PaaS, you're saying: I don't need the entire infrastructure, I just need a platform. Think of popular pizza outlets like Domino's or Pizza Hut. You go inside and say you want to make a pizza yourself; you just need the dough and whatever toppings you want. The platform is completely ready for you, and you just use it to make your pizza or, in the computing world, to build whatever services you need. And SaaS, which is very popular and which all of us already use, means the entire software is ready for you: the pizza is already made, and you just order it with a call or through Zomato or Swiggy, whatever you like. So Software as a Service means the app, the pizza, is entirely ready. Okay. So these were the
things we covered for the cloud service models. Now let's understand each one of them step by step: what you get in a particular model, what you handle yourself, and the real-life examples. Talking about IaaS, Infrastructure as a Service, it's the basic building block, we can say: you have the servers, storage, and networking right in your hands. What do you get? You get virtual machines, like EC2 or Azure VMs, which we'll discuss later in the video; storage, with popular examples like AWS S3 and Azure Blob; and networking. These are the things you get when renting the entire infrastructure. Now, what do you handle? Since you have the entire infrastructure, you have full ownership, but you also have a lot of work to do. First, you handle the entire OS installation and setup, then the app setup and the security patches. You have full access and full control of the entire infrastructure, but at the same time you have to handle a lot of things yourself. And the real-life examples? We have EC2 instances from AWS, a very popular cloud provider; then Microsoft Azure VMs; then Google Cloud Compute Engine. All of these are real-world examples you'll deal with as you dive deeper into cloud technology. Okay. So the next thing
that we have to discuss in detail is Platform as a Service. As I told you, Platform as a Service means the entire platform is ready for you; you don't do the system setup, but you do write the code and deploy the code; you make the pizza yourself, correct? That is what Platform as a Service is. It is made directly for developers who want to focus on their app, not on system setup. What do you get? A fully installed operating system, pre-installed frameworks and libraries, a runtime environment, and autoscaling. What does all this mean? The platform is already ready. In IaaS you have to do the OS installation and set up the environment yourself; not here. Here the platform is ready, and what you handle is just writing and deploying your code, like making the pizza and choosing the toppings you want. The real-life examples are Google App Engine, Heroku, Azure App Service, and AWS Elastic Beanstalk, all real examples from the technical domain. If you're working with the key cloud providers you'll recognize these, and if not, you'll come to understand them as we go deeper into this conversation. Next, we have
Software as a Service, the last of the three key service models. What do we have in SaaS? A complete, ready-to-use piece of software. You don't have to do anything: no installation and no maintenance needed. Think of Gmail, Zoom, Canva, or Instagram, whatever apps you're using; they all come under the SaaS model. Why? Because they're ready-to-use software. You just access it; you only need an internet connection, and then you can use it anywhere. What do you handle? Of course, just using the app via the browser or the application itself. The real-life examples, again, are Gmail, Zoom, Canva, and similar apps already available on your device that you use directly; those apps are Software as a Service. So those were the features and examples of the cloud service models. Now let's compare each one of them individually.
Right here, when we talk about the features of the three service models, you can see that the control you have is highest in IaaS, because the entire infrastructure is in your control. In PaaS it's medium, and it's lowest in SaaS, because the entire software is already prepared for you. Suppose you're ordering a pizza: you cannot customize it to your preference, because the pizzas are already listed on the menu and you have to order from there. So almost nothing is in your control. When we talk about setup time, IaaS of course takes a long time, because you do everything manually yourself; you are setting up an entire infrastructure. So the setup time is quite long in IaaS, medium in PaaS, and in SaaS you don't need any setup at all; you just need a device to access the software. Now, user type: IaaS is mainly aimed at system admins and DevOps engineers, PaaS at developers, because you are writing and deploying code there, and SaaS is used by everyone, the end users, meaning you, me, anyone using a globally recognized app. For examples, again the same ones: AWS EC2 instances for IaaS, Heroku and App Engine for PaaS, and Gmail, Zoom, Canva, Instagram, Netflix and so on for SaaS. So the basic difference is: SaaS is ready-to-use software, just like your Instagram or Netflix; PaaS is like a developer's playground, where you can write and deploy code on a platform that is provided to you; and IaaS is your own cloud infrastructure, where you have the whole infrastructure in your control and can do with it whatever you want. Now
that we have understood the key service models, let's dig into the cloud deployment models: how is cloud technology actually made available to its users? Let's understand it with the three basic models we have. One is called public cloud, then we have private cloud, and then we have something called hybrid cloud. Let's take a brief example. What is public cloud? Public cloud is like renting an apartment: the owner is not you, you're just renting it, and it is quite public because anyone can come and go in that building. Suppose you're living in a housing society and renting an apartment in it: it's shared space for everyone. Now what is private? Private cloud means your own personal space, like a house you have purchased. It's your private property, and no one can come and go without your permission. That is what private cloud is. Now, hybrid cloud is the interesting one among these deployment models. Why? Because hybrid cloud uses the best of both worlds: you can keep your sensitive details in your private cloud, and whatever you want to make accessible to other people you can keep in the public cloud. That is how hybrid cloud combines the two, public and private. Now let us understand each of them in detail using real-life analogies, along with their features and examples. You can see right here on the screen that in the public
cloud, services are provided over the internet by companies like AWS from Amazon, Azure from Microsoft, or Google Cloud. Anyone can sign up and use these resources. If you talk about AWS in particular, anyone can access it, correct? Because it's globally recognized and globally distributed, you can access it and I can access it on the very same platform, on the same login page. So what are the features? It is shared infrastructure, because you are working on the same platform that I'm working on, and it is cost-effective, because whenever you use an EC2 instance (which we will create later in the video), you only pay for the duration you are actually using it. Next, it is fully managed by the provider: AWS is not managed by you and me; we are just the end users. The provider, Amazon, is the owner of the service, so it is completely managed by Amazon and the AWS admins.
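The pay-for-the-duration billing just described is simple arithmetic. A minimal sketch, assuming a made-up hourly on-demand rate (real EC2 prices vary by instance type and region):

```shell
# Hypothetical on-demand rate for a small instance (USD per hour);
# real prices vary by instance type and region.
rate_per_hour=0.0104
hours_used=6

# Pay-as-you-go: cost = rate x hours actually used
awk -v r="$rate_per_hour" -v h="$hours_used" \
    'BEGIN { printf "cost: $%.4f\n", r * h }'
```

Stop the instance and the meter stops; there is no charge for the hours you are not running it.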
Next, it is easy to scale globally, because one provider has built everything and it is shared among all the people keen to use the technology. Now, the examples are hosting websites, storing files (like Google Drive, which is used by you, me and everyone; it's the same app used by all of us), and running applications like Netflix on AWS. What happens with Netflix on AWS? Take Netflix as a common real-life analogy: you have the same Netflix I have, and we both get recommendations, but your interests and my interests are quite different. What we are doing is sharing a public platform for our own interests. That is what public cloud is. Now let's move
towards the private cloud. Private cloud is like having your own gated data center, either on premises or hosted by a third party, but dedicated to your particular organization or to you. Now, who uses private clouds? Basically banks, governments or hospitals, which run services that are quite confidential. So what are the features? It is more secure and customizable, because a private cloud is used by a single organization that holds all of its own data, so it is both secure and customizable. When I say it is used by highly regulated industries, that means the key users are banks and governments, which hold private details of entire citizenries. Okay. Next, we have high setup cost and maintenance. Of course, if you are keeping something private to yourself or to an organization, it will cost a bit more, because you have more gateways and more security than in a public or hybrid cloud. And next, it is not shared with others: it won't be shared until and unless the provider has your permission to give access to others. Next, the examples: a bank running its core financial systems, a government managing citizenship data, or hospitals storing medical records. All of these organizations keep their citizens' or customers' data very private.
So that is how private cloud is used among organizations. Now we have the very interesting section called the hybrid cloud, and you will be amazed to find that you are already quite familiar with this model. What does it do? It is a blend of both private and public: it allows data to move between the private and the public cloud, back and forth. So what are the features? It uses the best of both worlds: sensitive information you store in the private part, and anything that can be accessed by everyone you keep in the public part. Next we have flexibility and scalability: using the best of both worlds gives you the flexibility and scalability to use both systems as needed, and of course it is used for disaster recovery and traffic spikes. What do we understand by this? Suppose a hospital has a lot of patient data. What do we do? We store the sensitive information in the private cloud, while the part that can be accessed by everyone, like the appointment page, we keep in the public cloud. Now take the example of an e-commerce website: it keeps customers' private details, like their names and the orders they have purchased, in its private cloud, but product browsing is the same for you and me. Suppose we are browsing Myntra or Amazon; the products displayed will be the same for you and for me, but the information that is private to each of us stays in the private cloud. Similarly, healthcare providers use a private cloud for patient data and a public cloud for appointment scheduling. That is how simple it is. So these were the cloud deployment models we have in the cloud: public,
private, and again the interesting one, the hybrid cloud that uses the best of both worlds. Now that we have understood them through real-life analogies, let's recap: in public cloud you are renting an apartment; in private cloud you own your house, so you can keep your private data to yourself; and hybrid cloud is like owning a house and also doing an Airbnb or booking a hotel whenever it's needed. Okay. So who uses the public cloud? Startups and public apps. Correct. Then we have the private cloud, used by banks, governments and enterprises, whoever needs to keep their data to themselves. And again, the hybrid cloud can be used similarly by healthcare services or e-commerce websites, keeping the patient data or customer data separate and the browsing or appointment-scheduling data in the public cloud. So these were the things we have discussed so far, and I hope cloud isn't feeling so cloudy anymore to you guys. After this we are now moving to a very important section: the key cloud providers. What does a key cloud provider actually mean? It is a third-party company that allows all of us to use cloud services, be it AWS from Amazon, Azure from Microsoft, or GCP from Google. These are the very important cloud providers we have, and we will
be discussing each one of them in detail, step by step. A key cloud provider is simply a company that delivers cloud-based infrastructure, platforms or software to people who are willing to pay as they go, okay, and that lets organizations scale their resources without maintaining physical hardware, because that's the key feature of the cloud. From Netflix to NASA, people are using AWS, Azure and GCP for all sorts of purposes, so let's discuss them one by one, starting with the very popular service called AWS, Amazon Web Services. Depending on what the user needs, it provides IaaS, PaaS and SaaS all in one. It's a comprehensive, ever-evolving cloud platform from Amazon that offers IaaS, PaaS and SaaS according to the individual's or the organization's need. Now, the very popular service we'll be working with first is called EC2, which stands for Elastic Compute Cloud. What does it mean? Just imagine it as a virtual server provided by Amazon. You rent those virtual servers, IaaS-style: you get the whole infrastructure, a hosted virtual server where you can perform all your tasks, whether DevOps or cloud-related things. So the very first service we have is EC2. Similarly, we often need to store files, or to run code without managing servers, so we have S3 to store data files in the cloud and Lambda to run code serverless; of course, you don't need to manage any server for that. Then we have RDS to manage relational databases and CloudFront for fast global content delivery. All these services are used by many organizations and individuals; you can see here that AWS is popularly used by Netflix, NASA, Twitch and Airbnb. So these were the customers that we have
for the AWS services. Similarly, just like AWS, we have another platform called Azure, provided by Microsoft. Again, it is a similar kind of cloud computing platform that offers solutions for building, testing and deploying the websites or apps you are creating. The popular service here is Azure Virtual Machines; we will see a demo of this later in the class. Next we have Azure Blob Storage, to store large amounts of data, mainly unstructured data; Azure Functions for event-driven, again serverless, computing; Azure SQL Database for fully managed SQL databases; and Azure DevOps for CI/CD pipelines and project management, whatever you need related to DevOps. And the use cases we have are BMW, HSBC, Adobe and the US government; it's written here for illustration that these are customers and organizations that use Microsoft Azure as their primary cloud provider. Now the next, very important and very interesting platform we have is GCP, provided by Google. What does GCP do? It gives its customers the same kind of infrastructure Google uses for itself, for deploying its own products. The popular services are Compute Engine, to launch virtual servers (the IaaS side); Cloud Storage, a highly durable and scalable storage service; and then BigQuery, Firestore and Vertex AI for different kinds of workloads: a petabyte-scale analytics engine, a real-time NoSQL database for web and mobile, and, for building and deploying machine learning models, Vertex AI. If you're getting confused by all these terms right now, don't panic, because this is just a fundamentals video; when you learn this cloud technology in depth you'll understand each of them very clearly. For now we'll demonstrate just two services: AWS EC2 and virtual machines in Azure. The organizations using the GCP platform include Spotify, PayPal, Twitter and Snapchat. These are some of the customers GCP has, its loyal customers. So these were the key service providers
of the cloud. So here we have a quick revision, a quick comparison between all of them. When we categorize them, we have AWS, Azure and GCP. The compute engine we have for AWS is EC2, Elastic Compute Cloud, which is used to make virtual servers; it's inside an EC2 instance, which we create there, that we perform all the tasks we need on a virtual server. Similar to EC2 in AWS, we have virtual machines, called Azure VMs, in Azure, and Compute Engine in GCP. Talking about storage, AWS uses S3, Azure uses Blob Storage, and GCP uses Cloud Storage. For databases we have RDS and DynamoDB for AWS, Azure SQL and Cosmos DB for Azure, and BigQuery and Firestore for GCP. Again, I'm repeating the same thing: if you have any issues understanding these concepts, kindly stay tuned to the rest of the video, because the trainer will cover each of these terms in depth. Coming to the serverless side, if you want to go serverless on any of the cloud providers, we have Lambda for AWS, Azure Functions for Azure, and Cloud Functions for GCP. Serverless means there are times when you don't need to provision or manage any server at all: you just hand over your code, and the provider runs it for you on demand. For DevOps we have CodePipeline for AWS, Azure DevOps for Azure, and Cloud Build and Cloud Run for GCP. These, AWS, Azure and GCP, are the leading forces among the cloud providers we have today, and knowing all the services we have discussed so far will help you pick the right tools and options to build scalable solutions. So as we move further in the video, there's a basic
demonstration for you guys. We have AWS EC2 as the first instance, and second we have a virtual machine in Azure. So let's take a quick demonstration of each. First of all, let's start with AWS. As soon as I open it, we have a login page where you enter your credentials, or, if you don't have any, you can create a new AWS account. For now, I'm using my personal account. As soon as you log in with your credentials, this default page appears in front of you; it shows EC2 because we are trying to create a virtual server here, we are trying to create an instance. So what do I do? I just click on EC2, and here I can either search for instances or create one, whatever I want to do. For now I can see the instances that are running; it's showing zero because we haven't created one yet. You can see that every instance that is running, terminated, or has been started will show up right here. So let's create one by clicking Launch instance, and this default page should come up on your screen. First of all, we have the number of instances: you can create one, two, three, as many instances as you want, but for demonstration purposes I'm creating just one. Now we have to give a name and tags for our virtual server, describing what kind of server it is; I'm calling mine "my first EC2". Just give it a name, okay. Next, you have to select the application and operating system image you'll be working on. We have different operating systems here: Red Hat, Windows, Ubuntu, macOS and Amazon Linux. You can use any one of them, but I'm selecting Ubuntu because it's quite beginner-friendly for all of us. It will take a moment to load. Then we have many types of VM images, whatever you want to use, but for free-tier eligibility I'm using the first one, the Ubuntu Server image. A confirmation dialog appears, but I don't need to change anything because the right option was already selected. Then we choose the architecture, whatever kind you want; I'm selecting 64-bit, since we are creating this just for demonstration purposes. So here's the instance type.
Now this is very important when you're working with EC2, because we have a lot of instance types, be it nano, micro, small, medium, whatever size you want for your instance. Because of the free-tier availability, I'm using t3.micro. And here's the interesting part: the key pair for login. You can create your own key pair, whatever is convenient for you. For now I can see the key pairs that have already been created here, and I can use one of them. These existing key pairs belong to this account; a key pair isn't something shared publicly with you and me, it's shared only within the account or group that chooses to share it. Then we have the network settings: you can allow SSH traffic, and you can allow HTTP or HTTPS as you wish. After that, you just launch the instance. After launching, if any issue occurs it will show up here; otherwise the instance will be created. You can see we got a success message along with the ID of the instance that has been created. So we go back to Instances, and here we can see that our instance is ready. Just trying to wait for it. Yeah.
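For reference, the console clicks above also have a command-line equivalent. A hedged sketch with the AWS CLI, assuming the CLI is installed and configured with credentials; the AMI ID, key-pair name and security-group name are placeholders, not values from this demo:

```shell
# Launch one t3.micro instance from an Ubuntu AMI
# (ami-XXXXXXXX, my-key and my-sg are illustrative placeholders)
aws ec2 run-instances \
    --image-id ami-XXXXXXXX \
    --instance-type t3.micro \
    --count 1 \
    --key-name my-key \
    --security-groups my-sg \
    --tag-specifications \
        'ResourceType=instance,Tags=[{Key=Name,Value=my-first-ec2}]'

# List instance IDs and their current states
aws ec2 describe-instances \
    --query 'Reservations[].Instances[].[InstanceId,State.Name]'
```

Everything chosen in the console wizard (image, instance type, key pair, security group, name tag) maps to one flag here.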
So here I have my first instance. Now it's not enough just to create an instance; we also have to connect to it. We have created an instance, but we're not inside it yet: the console shows the instance is running, but we're not connected. To get access to the virtual server, to the Ubuntu machine we'll be working on, we have to connect to it by selecting it and clicking Connect. You can connect using either a public IP or a private IP. This relates to the deployment models we were talking about: the private IP is reachable only from inside the network, while the public IP is reachable from outside, for example from my own local machine, from Bash or the command prompt. Since I'm just using this for demonstration purposes, I'll connect over the public IP, but both IP addresses are shown on the instance I created. So let's connect and see what happens. As soon as the connection is established, my EC2 instance is ready to use. Here you can see, while it's establishing the connection, that I have both the private and the public IPs. So this is it. Now you can perform whatever operations you want here; whether you're coming from DevOps or cloud technology, the instance that has been created is free for you to use. But one more thing to remember: once it's started, it's billable, okay? For whatever time you use it, it carries a cost. For now we're not doing much, but you can run anything. Suppose I'm working on DevOps and want to check whether Docker is already installed here or not: that's the command I run, just like in a Linux shell or the Windows command prompt. So that's all for the setup and the demonstration part.
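Connecting from your own terminal looks roughly like this. A hedged sketch, where the key file name and the public IP are placeholders (Ubuntu AMIs use the `ubuntu` login user):

```shell
# SSH into the instance using the private key downloaded
# when the key pair was created (key file and IP are placeholders)
ssh -i my-key.pem ubuntu@203.0.113.10

# Once inside, check whether Docker is installed,
# as done in the demo
docker --version
```

If SSH refuses the connection, the usual culprits are a security group that doesn't allow port 22 or a key file with loose permissions (`chmod 400 my-key.pem`).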
So we can disconnect from here directly. But you have to remember to terminate this instance. Why? Because if it keeps running; as you know, cloud computing is a pay-as-you-go system, right? So for whatever time it keeps running, you will be charged. You have to delete it after you have created it and used it for your purposes, once whatever demonstration you wanted to do is done. Just select it, go to Instance state, and terminate (delete) the instance. So yes, the termination has been successfully initiated; you can see in the instance state that it was shutting down, and now the instance is gone. So this was the part where we used an AWS EC2 instance to create a virtual server and see what we can do in it. Now we have one more key provider, called Azure, which is again very popular. I have skipped the login page part; you just have a login page where you sign up or log in with your credentials. So this is the default page that opens right here.
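As with AWS, the Azure portal steps that follow also have a command-line equivalent. A hedged sketch with the Azure CLI (`az`), assuming you are already logged in; the resource-group name, VM name and region are placeholders:

```shell
# Create a resource group to hold the VM (names are placeholders)
az group create --name my-demo-rg --location eastus

# Create an Ubuntu VM; Azure generates an SSH key pair for you
az vm create \
    --resource-group my-demo-rg \
    --name my-demo-vm \
    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys

# Delete the whole resource group when done, so nothing keeps billing
az group delete --name my-demo-rg --yes
```

Deleting the resource group is the Azure counterpart of terminating the EC2 instance: it removes the VM and everything created alongside it.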
Now, in AWS we were creating an EC2 (Elastic Compute Cloud) instance to get a virtual server; here we have a virtual machine. You can see here all the services that are provided, but to work on a cloud server, on a different operating system, we just create another resource called a virtual machine. You can either click it here, or click Create a resource, or click on Virtual machines. So just as we had EC2 instances in AWS, here we have virtual machines, a similar kind of thing. As you saw with the EC2 instance, everything happened in one place; similarly, the whole virtual machine will be created here, and the steps are quite simple. You just create one virtual machine and you have all the options; since we are working on a hosted platform, we'll create one VM here with steps similar to what we had before. Now you have the resource group, whatever you want to use; the name you want to give the virtual machine; the security type; and the architecture (there we were using 64-bit, and here you can pick the same). Then we have the administrator account: you can use either an SSH public key or a password, meaning your username and a password, whatever you prefer. After this you just click Review + create. As soon as you do, Azure reviews it automatically; if there is some problem or you haven't filled something in correctly, it will display that right on your screen and take you back to the form. When the review passes, you create an entire virtual machine, similar to the EC2 instance we created in AWS. So this was it for the demonstration part we had in AWS and Azure, because these were
quite popular platforms that we had in mind. So that was the demonstration part. Now, as we come to the last section of our video, we have something interesting for everyone: the benefits, the challenges, and the career paths we have after learning cloud computing. So let's dig into it. First of all, let's talk about the benefits. We already know the cloud is ready to use, very beginner-friendly, and the kind of technology everyone likes to work with. Okay. So coming to the benefits: first of all, cost saving. As we saw with EC2 and Azure virtual machines, you pay only for the duration you use or access a resource. It's like having 10 GB of storage and paying for the 10 GB only, not for 100 GB, like you used to back in the days of physical IT servers. That's the idea: you pay only for the time you use, and there is no upfront hardware investment. The real-life analogy is simply taking an Uber instead of owning a car when you are new to a city. Now, talking about speed and agility: you saw how easily we were able to launch an instance or a virtual machine. It's that easy to launch a resource whenever you want to work with cloud technology; it's just like ordering food from an app. Now, scalability: suppose you and I are both using AWS EC2 instances and a lot of instances get created there; the AWS infrastructure automatically grows, adding servers to handle that much traffic, and when the traffic is low, with only one or two instances around, it automatically scales back down. So this is a very good benefit of a cloud service: it can expand or shrink like a balloon according to demand, and you don't have to worry about any of it. Next we have global access. Of course, you can use it from anywhere: you can open your EC2 instance right there, or, if you want a clearer picture, you can open Instagram anywhere; if you are travelling to some other place, there too you can open an EC2 instance or Instagram just the same. It's a ready-to-go system you can use anywhere, anytime, on any device, just like Netflix, Gmail or your popular app Instagram. So next we have
security and backup. See, cloud providers offer advanced security. With EC2 you have the key-pair login to keep things private and, you can say, secured; similarly, with virtual machines we have the username and password that you create, which keep the things in your virtual machine secured and private to you. That is the important thing the cloud provides, and it's just like keeping your valuables in the bank. So the major benefit the cloud comes with is not only that it makes things faster; it makes innovation cheaper, more scalable and global. These are the benefits that come along with the cloud. Now, you know the cloud is powerful and has lots of benefits, but it's not magic, okay? So we have some challenges here too. Talking about the challenges: even though security is provided at a very high level, there are still security risks, because misconfigured services can expose the data of one organization, or one bank, to another, and that is quite a challenge to guard against. An example is public S3 buckets being leaked: S3 buckets in AWS, which are used to store data and files, can leak at times if not configured properly and not kept under watch. So this is one of the major challenges we have in cloud
computing. Now cost averance a very simple example can be an EC2 instance like if you have created an EC2 instance
and you have kept it running like you have done your time but even if you kept it running for a day or two it will be
like causing you a hefty amount to pay because that's the thing like if you're overusing it and if you're not making it
available at the time when it is needed. So you can just overuse your spike bill this so it can spike your bills and the
lack of monitoring can make you pay hefty amounts at times. So again the example is same that leaving your EC2
instance at running. Now compliance and legal it's just a simple thing that some data must stay within the certain
countries like financial systems or the healthcare or the government data should be kept under one government like the
Indian government is having some sort of data that shouldn't be get leaked to the American or Chinese government. So
that's all about it. So it's the compilers and legal terms that we need to take care about when we are dealing
with cloud computing especially if you're working as a key cloud provider. Talking about vendor lockin, see
migrating from one cloud to another can be a struggle. Some services and apps are built AWS-first, and you will run into issues moving them to Azure. Why? Because AWS and Azure are run by different companies, with somewhat different features and technologies, so an app built around AWS can be difficult to run on Azure.

Next, downtime and internet dependence. The entire cloud is hardware-less from your side — everything you need lives on servers reached over the internet — so the one thing you depend on heavily is connectivity. If your connection is slow, or the cloud provider's uptime dips, it impacts real workloads. For example, with a poor connection to an EC2 instance or other virtual machine, the session can break and commands you ran may not take effect. These are the things that can hurt real-time apps, whether you work with Azure or AWS.

So again: the cloud is very powerful, but you have to handle it smartly, with a good setup for cost control and security. Those are the main challenges in cloud computing.

Now the last part, and an interesting one for all of us: what roles open up when you learn cloud computing? If you are into cloud engineering — you love to build and manage cloud infrastructure, to build something new on top of it — you can become a cloud engineer. The skills to learn are AWS, Azure, Linux, and Terraform. Terraform is a DevOps tool, Linux is an operating system in its own right, and AWS and Azure we have already worked with. Next, if you like to
automate deployments with CI/CD pipelines — CI/CD stands for continuous integration and continuous deployment — you can become a DevOps engineer. For that, learn Jenkins, Docker, and Kubernetes, and learn them well. You can become a cloud architect if you like designing end-to-end cloud systems for a particular provider or customer; for that you need architecture skills and deep cloud knowledge. Then there is the data engineer, who handles data pipelines and storage; learn BigQuery and Azure Data Factory for that. You can go into anything and everything in depth if you are keen on the cloud and your fundamentals are clear from the beginning.

Then there is the very popular role everyone talks about: AI/ML engineer, in its cloud-heavy form — training and deploying models on the cloud. For that you need Vertex AI, SageMaker, and the most important language in data science and ML, Python.

Similarly, there are two more familiar roles: cloud support engineer and cloud security analyst. As a cloud support engineer you manage cloud issues and billing for a setup; for that you need basic cloud skills and customer handling. As a cloud security analyst you monitor and secure cloud environments; for that you need IAM, firewalls, and threat detection.

Those are the job roles. If you are intrigued and want to learn cloud in more depth, you have to be patient and skillful with all the skills mentioned here for whichever of these careers you want. And if you are serious about cloud, learn it in real depth — what we have covered so far are just the fundamentals. So that's a wrap
up for this video. I hope the cloud doesn't feel cloudy anymore, because we have covered all the essential fundamentals. You know now that cloud is not just a trend — it's the backbone of today's digital world and the engine of tomorrow's innovation. So if you are learning cloud, give it all the time and attention it needs, and learn as much as you can. From this point onwards, our professional trainers and industry experts will carry this video forward.

Just a quick info, guys: Intellipaat brings you an executive post-graduate certification in cloud computing and DevOps in collaboration with iHub DivyaSampark, IIT Roorkee. Through this program you will gain in-demand skills like AWS, DevOps, Kubernetes, Terraform, and Azure, and even cutting-edge topics like generative AI for cloud computing. This 9-month online boot camp features 100+ live sessions from IIT faculty and top industry mentors, 50+ real-world projects, and a 2-day campus immersion at IIT Roorkee. You also get guaranteed placement assistance with three job interviews after entering the placement pool. And that's not all: the program offers Microsoft certification, a free exam voucher, and even a chance to pitch your startup idea for incubation support of up to 50 lakh rupees from iHub DivyaSampark. If you are serious about building a future in cloud and DevOps, visit the course page linked in the description and take your first step toward an exciting career in cloud
technology.

We discussed data centers: an organization needs data centers, and those data centers hold the servers — servers are deployed within data centers. Now the question is: what about the data centers owned by Amazon Web Services? Say you work for a company like Accenture, Wipro, HCL, Infosys, JP Morgan, AIG, or Apple. When these companies start using cloud vendors like AWS, Microsoft Azure, GCP, or Salesforce, they become those vendors' customers. For example, if Apple says "we want to use Microsoft Azure to deploy our workloads and applications," then Apple is a customer of Microsoft Azure. So think of yourself as a customer, or as an employee of an organization that is a customer of one of these providers.

The next question is how exactly you leverage the infrastructure owned by your cloud vendor. Suppose you work for Netflix. Netflix is a customer of AWS, and it runs its various applications and workloads on AWS. Being part of Netflix, you have to understand how Amazon Web Services has scattered its data centers, and how it manages them, in different parts of the world — because Netflix does not offer its service only in the US; it serves many countries, including India. So you need to understand AWS's global infrastructure: how they place and cluster their data centers across different countries and continents — in Africa, India, the US, Europe, and so on. That is where the concept of global infrastructure comes into play. Let's try to understand it. This is really the first proper AWS topic we are covering. The name of the topic is AWS global infrastructure.
It consists of different components: regions, availability zones, edge locations, and local zones. I will discuss two of them with you right now — regions and availability zones; edge locations and local zones we will discuss afterwards. Let's get into the AWS global infrastructure.

The very first component of this infrastructure is the region. What is an AWS region? A region is simply a geographical location — a piece of land — which consists of a cluster of data centers owned and managed by AWS.
What's a cluster? Some people ask me this. A cluster simply means a group. So a region is a geographical location — essentially a piece of land — and most regions are named after cities or countries. For example, India has two regions, Hyderabad and Mumbai; in Australia, Sydney and Melbourne are two regions; the UAE has a separate region; for South Africa there is Cape Town; Singapore is a separate region; Japan is a separate region. A region is a geographical location containing a group of data centers owned and managed by Amazon Web Services, the cloud provider. We are the customers — say we work for Netflix or JP Morgan and use AWS to deploy our resources — so we use those data centers, but the cloud provider owns and manages them. Amazon Web Services says, in effect: "In Cape Town I have three data centers running, so I mark Cape Town as a region." Now, these clusters of data centers are denoted by a special term. What is that special term? This cluster of data centers is called availability zones.
Availability zones — AZs for short — are the data centers themselves: the facilities that house racks of servers. One availability zone equals one or more data centers; a single AZ can consist of one or more data centers. And each region has at least three AZs, to make your data more redundant and your applications more available. So these are the two terms to remember and learn: a region is a geographical location consisting of a cluster of data centers; those clusters are called availability zones; an AZ is one or more data centers (facilities full of server racks); and every region has at least three AZs — there can be more than three, but you will find a minimum of three. Fine.
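As a quick sketch of the two terms above, here is a tiny Python model — sample data only, not the full AWS catalog — showing regions as named clusters of availability zones, each with at least three AZs:

```python
# Illustrative sample only: a handful of real region/AZ names, not the
# complete AWS list. A region is a named geographic location whose
# cluster of data centers is exposed as availability zones (AZs).
REGIONS = {
    "ap-south-1":   ["ap-south-1a", "ap-south-1b", "ap-south-1c"],
    "eu-central-1": ["eu-central-1a", "eu-central-1b", "eu-central-1c"],
    "us-east-1":    ["us-east-1a", "us-east-1b", "us-east-1c",
                     "us-east-1d", "us-east-1e", "us-east-1f"],
}

# Every region exposes at least three AZs for redundancy.
assert all(len(azs) >= 3 for azs in REGIONS.values())
```

Note that us-east-1 shows more than the minimum — the "at least three" rule is a floor, not a fixed count.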
Let me give a few examples so you understand this concept inside and out. I have just logged into my AWS console. If I go to the top right-hand corner — where you can see my cursor — you will see the listed regions: N. Virginia, Ohio, California, Oregon, Hyderabad, Melbourne, Mumbai, Seoul, Singapore, Sydney, Tokyo, Canada, Frankfurt, Ireland, London, Paris, Stockholm, Cape Town, Hong Kong, Jakarta, Milan, Spain, Zurich. These are the existing regions; the console shows the regions along with the availability zones owned by AWS.

If I go back and show you the map: as of now, when I am speaking, AWS has 31 regions running 24/7. On this map, the green dots are the existing regions — which is why we say a region is a physical, geographical location, a piece of land. For example, this green dot is Sydney; this one Melbourne; then Jakarta, Hyderabad, Mumbai, UAE, Bahrain, Spain, North Virginia, Milan, Ireland. Each region is a physical location consisting of a group of data centers, and those groups of data centers are called availability zones. The red dots are regions coming soon — they will be set up over the next few months; it is work in progress, because launching a chain of data centers takes a lot of time, and soon you will see them show up.

As of now the exact figures are: 99 availability zones across 31 geographic regions. Each availability zone can be one or more data centers — generally we assume one — so say 99 data centers, possibly more. There are announced plans for 15 or more availability zones (a few AZs consist of two data centers, so the count can be higher) across five more regions: Canada, Israel, Malaysia, New Zealand, and Thailand. I think by the end of this year you will see most of these deployed — they announced a Canada region a few months back, so let's see. So those are the regions that exist, and some regions are
still in plan and will be deployed very soon. Fine. Now, I can also show you the definition from AWS itself. If I open this other link, you can see: AWS has the concept of a region, which is a physical location around the world where they cluster data centers, and each group of logical data centers is called an availability zone. That is why I said an availability zone can be one or more data centers, and there are at least three availability zones in a region. Each region consists of a minimum of three isolated and physically separate AZs within a geographic area — so that a hardware failure or power outage in one is isolated from the others. I believe the minimum distance between AZs is around 50 to 60 km. So if one AZ goes down — a power outage, floods, some natural disaster — the other availability zones can keep serving your workloads and applications. And what is an availability zone? An availability zone, AZ for short, is one or more discrete data centers with redundant power and networking connectivity in an AWS region. So an AZ equals one or more data centers, each with its own power and network connectivity, inside that region.

Okay, let me give you two or three examples, and then I will take questions. First, we
need to understand that each region is given a special code name. For example, the one labeled us-east-1 is North Virginia. AWS doesn't use the geographical names directly; it assigns its own codes. How did they arrive at us-east-1? North Virginia is on the east coast of the US, and it was the very first region on that coast — in fact, us-east-1 is the very first region Amazon set up when it started as a cloud provider; it is Amazon Web Services' first region. So it is labeled us-east-1. Ohio is the second region on the east coast, so it is us-east-2. On the west coast, the first region is California, us-west-1; and us-west-2 is Oregon, the second region on the west coast.
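The naming convention just described can be captured in a small lookup table; a sketch with a few real region codes (sample entries only, not the full list):

```python
# Region codes follow <area>-<direction>-<number>, numbered in the order
# the regions were launched within that coast or area.
REGION_NAMES = {
    "us-east-1": "North Virginia",  # first region on the US east coast
    "us-east-2": "Ohio",            # second on the east coast
    "us-west-1": "California",      # first on the west coast
    "us-west-2": "Oregon",          # second on the west coast
}

print(REGION_NAMES["us-east-1"])  # North Virginia
```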
Take India: we have ap-south-1, where AP stands for Asia Pacific. In the south, the very first region was Mumbai, ap-south-1, and the second was Hyderabad, ap-south-2. You don't have to memorize these names; AWS has simply assigned abbreviations — special names — for the different regions. Let's go with Mumbai. Mumbai was the very first region in India, launched a few years back — 2017 or 2018, I'm not sure of the exact date.

Anyhow, every region is a separate infrastructure on its own; it is independent of other regions. If you deploy resources in Mumbai, they stay in Mumbai only — they will not be visible in, or part of, Hyderabad or California or Virginia. Assume that every region is a complete infrastructure by itself. If I deploy my resources in Mumbai, they stay in Mumbai; of course I can access them remotely from another location, but they remain part of the Mumbai region. That is why you have to switch dashboards between regions. For example, right now I am on the Mumbai dashboard, so I can access my resources in Mumbai; if I switch to California, I will be accessing my resources in California. Understood? Each region is its own complete infrastructure, independent of the others — which is why you switch back and forth between regions to reach their individual dashboards. If I deploy resources in California, they stay in California. If I need my resources in Sydney, I switch to Sydney's dashboard — that is also why the URL at the top changes when I switch regions. Every region is isolated, kept separate from the others; it is a separate entity with its own identity. To deploy resources in Sydney, I have to explicitly switch to the Sydney dashboard and launch them there; I can't deploy resources in Sydney from the Melbourne dashboard. That is the thing you have to understand. Okay, let's take the example of Mumbai — I was trying to give you some examples.
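The region-isolation idea can be sketched as a toy Python model — all names here are hypothetical, and this is not an AWS API — where a resource launched into one region's inventory simply does not appear in another's:

```python
# Toy model of region isolation, purely for illustration.
inventory = {"ap-south-1": [], "us-west-1": []}  # per-region resource lists

def launch(region, resource):
    """Deploy a resource into exactly one region's inventory."""
    inventory[region].append(resource)

launch("ap-south-1", "my-vm")
print("my-vm" in inventory["ap-south-1"])  # True  -- visible from Mumbai
print("my-vm" in inventory["us-west-1"])   # False -- invisible from California
```

Switching "dashboards" in the real console is essentially switching which region's inventory you are looking at.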
Mumbai is labeled ap-south-1 — that is the special name assigned to it. Now suppose I want to see the availability zones inside Mumbai. I go to EC2. EC2 stands for Elastic Compute Cloud, which lets me deploy virtual machines in Amazon's cloud — if I want to deploy virtual machines, or instances, or EC2 instances, I use the EC2 dashboard. Very well. Now let me show you the Mumbai region. This is the Mumbai region, labeled ap-south-1. Inside ap-south-1 there are three availability zones: ap-south-1a, ap-south-1b, and ap-south-1c. So what is ap-south-1? The region of Mumbai. And ap-south-1a, 1b, and 1c are its three availability zones, interconnected with high-speed fiber-optic cables — three clustered, interconnected availability zones operating inside Mumbai. Fine. I'll give you two or three more examples for more clarity. Let's go to Sydney. Sydney is labeled ap-southeast-2.
Let's go there. Which region is ap-southeast-2? It is Sydney. So ap-southeast-2a, 2b, and 2c — these are its three interconnected availability zones (not regions, sorry — three AZs), connected with high-speed fiber-optic cables and launched inside the Sydney region, ap-southeast-2. So ap-southeast-2 is the region of Sydney, consisting of three interconnected availability zones: ap-southeast-2a, ap-southeast-2b, and ap-southeast-2c. Let's take one more example and jump to the next region, Frankfurt. Frankfurt is eu-central-1. If you click on it, the page changes immediately. The details will change automatically.
So Frankfurt is represented by the code eu-central-1 — the central part of Europe, Frankfurt being in Germany. It consists of three interconnected availability zones, labeled eu-central-1a, eu-central-1b, and eu-central-1c, located inside the eu-central-1 (Frankfurt) region. So a region is a geographical location consisting of at least three AZs, and those availability zones represent groups of data centers. At least three per region — and North Virginia has the most. If I go to North Virginia, the very first region set up by Amazon Web Services, it is labeled us-east-1, and it consists of six availability zones: 1a, 1b, 1c, 1d, 1e, and 1f. Six availability zones inside the North Virginia region, us-east-1.

Okay, so that is the concept of regions and availability zones. Now, I know this question will come up: how do we achieve high availability across them — can we use all the regions, or all the availability zones? The answer is yes. We use tools like load balancing, Global Accelerator, Route 53, and auto scaling to disperse and scatter resources across these AZs and regions and use them as parts of a single application. And how do they cluster the data centers? Basically, the data centers are interconnected with high-speed fiber-optic cables. That's how they cluster them.
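To make the high-availability idea concrete, here is a sketch of balancing capacity across a region's AZs round-robin — roughly the way an Auto Scaling group spreads instances. The helper is hypothetical, not an AWS API:

```python
def spread_across_azs(instance_count, azs):
    """Place instances across AZs round-robin, so no single AZ failure
    takes out more than its share of the fleet."""
    placement = {az: 0 for az in azs}
    for i in range(instance_count):
        placement[azs[i % len(azs)]] += 1
    return placement

mumbai_azs = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]
print(spread_across_azs(7, mumbai_azs))
# {'ap-south-1a': 3, 'ap-south-1b': 2, 'ap-south-1c': 2}
```

If ap-south-1a fails here, only 3 of 7 instances are lost and the other two AZs keep serving — that is the redundancy the three-AZ minimum buys you.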
Okay, I hope this is clear now. As for how to use all or some of them: we will cover that service by service as we discuss each one. Right now we are only looking at the list of regions and the list of availability zones; how to actually use them we will discuss once we start launching services. For example, when I deploy servers or virtual machines, I can explicitly choose the region of my choice and the availability zone of my choice — both. Once we start the launch process, we will get into that. And to view the availability zones, go to the EC2 dashboard: the service health dashboard there shows the list of availability zones inside the region. All right, that's about it.
Very well. So guys, let's get started with the next topic, which is quite important for us: EC2. EC2 as a topic is quite extensive — there is a lot to discuss — so we will go through everything one by one, including basics like instance types, AMIs, and so on. So what is EC2, and what is the concept all about? When you start launching your services and applications, it is important to work with the concept called EC2, or Elastic Compute Cloud. Let's try to understand it.

Elastic Compute Cloud — what does that mean? It means you get resizable compute capacity in the cloud. "Elastic" means it can be resized; it is flexible to use. Elastic Compute Cloud is a service, and I will explain the definition in depth so that you don't face any trouble while working with it. We'll start with the basics, including AMIs and instance types, because those are the first two things to discuss. So: Elastic Compute Cloud provides resizable compute capacity in the cloud. What do we mean by "resizable"? I'll come to that — I will explain every single word of this definition so that you understand exactly what it means. Okay, let's take one example.
Imagine this is the AWS cloud, and inside it I am running a virtual machine. This virtual machine running inside Amazon's cloud is labeled an EC2 instance. What's an EC2 instance? It's a virtual machine — a VM — running in the AWS cloud. That's it: an EC2 instance is a VM running inside Amazon's cloud. Now, this EC2 instance has some compute capacity. What do we mean by compute capacity? Imagine you want to buy a smartphone, a tablet, a laptop, or a desktop. You look for its physical specifications.
What do you mean by physical specifications? For example, the amount of storage, RAM, the processor it has,
um the graphics. For example, I want to buy an Apple laptop that's MacBook. So, I will see whether it's a Intel chip or
silicon chip. uh if it's a Windows laptop I'll see whether it's a it's the uh for example uh Intel laptop or it is
based on uh for example there are multiple chipsets which are available in the market uh what's the amount of RAM
I'm getting uh what's the the processors it's giving to me uh the storage whether it has dedicated graphics or not right
how many what's the amount of cores it has right nowadays we uh go for eight core 12 core chip. So we go with a
compute capacity. Uh that's called compute capacity because we uh that compute capacity offers at how powerful
that machine is. Right? If I go with an 8 core 32 gig 8 core CPU with 32 gigs of RAM, 2 TB of of SSD storage, it's a
decent performance I'm getting from a laptop. So this is what you do when you when you
go and buy a computer for yourself. But now now you're not going going to buy a computer. You're going to go for a
virtual machine. See whether it's a virtual machine or it's a computer. Virtual machine is a computer only. It's
a it's a server only. Server computer virtual machine. It's the same thing. It's only the the only thing is that
because you can't see it physically in front of you. It's virtual in nature. So a virtual machine has some compute
capacity. Similarly as if you see the computer capacity of your mobile phone of your laptop or the desktop a server
anything you want to buy. So in terms of so this here compute compute refers to the compute capacity refers to
Here, compute capacity refers to — let me just correct it over here using the full term — the processor you're using, the amount of RAM, cores, storage, graphics, network performance, and so on. What's the amount of RAM, cores, CPU, storage, and network performance that computer, that virtual machine, offers? This is called compute capacity. I've already given you a very common, layman example: tomorrow you go to a store and try to buy a laptop. You check what RAM you're getting, what processor you're getting, how many cores there are, what graphics it gives you, what storage. Similarly, in terms of an instance, we talk about the amount of RAM, cores, storage, graphics, network performance, etc. We look at all these same things.
AWS — basically Amazon — uses special names for everything. In terms of Amazon Web Services, this compute capacity is represented by a special term. Amazon likes using special terms for everything. That special term, that special concept, is called the instance type. What is an instance type? It's the virtual representation of the underlying compute capacity offered by an instance.
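To make the idea concrete, here is a minimal sketch in plain Python — purely illustrative, not an AWS API — that models compute capacity as a small record of the same fields we just listed:

```python
from dataclasses import dataclass

@dataclass
class ComputeCapacity:
    """The knobs we compare when sizing a machine, physical or virtual."""
    vcpus: int        # number of (virtual) cores
    ram_gb: float     # memory in GB
    storage_gb: int   # attached disk size in GB
    network: str      # e.g. "low to moderate"

# The laptop example from above: 8 cores, 32 GB RAM, 2 TB SSD
laptop = ComputeCapacity(vcpus=8, ram_gb=32, storage_gb=2000, network="n/a")
print(laptop.vcpus, laptop.ram_gb)  # → 8 32
```

An EC2 instance type is essentially a named, fixed bundle of exactly these fields.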
What is the compute capacity that a specific instance offers to us? That is represented by the special term instance type. Okay, let me give you some examples. For example, there's an instance type in AWS called T2.micro. I'll explain every single thing, don't worry.
T2.micro is an instance type which has the following compute capacity: a 2.5 GHz Intel Xeon processor; 1 GB RAM (we say 1 gig or 1 GB, same thing); one virtual core — it's a one-core CPU, and we call it one vCPU; you can attach from 8 GB up to 16 TB of solid-state or hard-disk-drive storage; and it offers you low to moderate network performance. So these are the specifications of the instance type T2.micro. This one is free of cost for us; we can use it in a free tier account.
So let me come to the same concept one more time. An instance is nothing but a virtual machine running in the AWS cloud. It offers you some compute capacity in the form of an amount of RAM, cores, storage, graphics, network performance, and so on. That is depicted by something called the instance type, and an instance type is nothing but a virtual representation of the amount of compute capacity a specific instance offers to you. We use special names, for example T2.micro, which has the following compute capacity: a 2.5 GHz Intel processor, 1 GB RAM, one vCPU, 8 GB to 16 TB of attachable SSD storage, and low to moderate network performance. Is it decent for deploying a small application? Yes, basically it is. Now, let me show you the list of them. So, for example,
I go to the EC2 dashboard, and on the left-hand side I go to Instance Types. Why is it not loading? Let me just go to a different region. So I go to EC2 right now, then to Instance Types. It says "loading instance types," right? As of now, let's suppose in North Virginia, I have 64 instance types showing. An instance type is kind of a make and model for every kind of instance you can deploy. Let's start with the first example, T1.micro. It gives me one core — it's a one-core CPU — this is the architecture we can use for it, and this is the memory: it gives me 0.6 GB only. That's it. And very low network performance. T2.micro is a one-core CPU with only 1 GB RAM and low to moderate network performance. Let's take some more examples out there.
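Choosing among instance types like these can be sketched as a lookup over a small catalog. The entries below are a hypothetical mini-catalog loosely based on the examples above — not authoritative AWS specs:

```python
# Hypothetical mini-catalog of instance types (numbers are illustrative,
# taken from the discussion above -- check the EC2 console for real specs).
CATALOG = {
    "t1.micro":  {"vcpus": 1, "ram_gb": 0.6, "network": "very low"},
    "t2.micro":  {"vcpus": 1, "ram_gb": 1.0, "network": "low to moderate"},
    "t2.medium": {"vcpus": 2, "ram_gb": 4.0, "network": "low to moderate"},
}

def pick_instance_type(min_ram_gb, min_vcpus=1):
    """Return the smallest catalog entry meeting a vendor's minimum specs."""
    candidates = [
        (specs["ram_gb"], name)
        for name, specs in CATALOG.items()
        if specs["ram_gb"] >= min_ram_gb and specs["vcpus"] >= min_vcpus
    ]
    return min(candidates)[1] if candidates else None

print(pick_instance_type(1.0))  # → t2.micro
print(pick_instance_type(4.0))  # → t2.medium
```

This is the same reasoning you apply manually: take the vendor's minimum requirement and pick the closest matching type.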
Just give me a few seconds. Now, how do you decide the instance type? It depends on a number of factors — the types of applications and services you'll be running. For example, I want to run a MySQL database application on the instance. In that case, let's suppose the MySQL vendor says at least one gig of RAM is required to run a MySQL database on a virtual machine, and at least 4 gigs of RAM if you want to deploy a MySQL enterprise-level database. So based on the type of applications and services you'll be running upon these instances, you have a list of instance types you can choose from, and you choose whichever one most closely matches that specification.
The second thing is that once you start launching services and applications — because on these instances you'll be running software — the software vendor or the software-as-a-service provider will give you a recommendation: this is the best instance type to use so that the application can run properly; these are the bare minimum specs you need. Forget about instance types for a moment: whenever you download any free, open-source software from the internet, you see that, say, at least 1 GB of storage and at least 1 GB of RAM are required for it to run properly. Every vendor gives you a specification for its software to run properly, whether it's a MySQL database, an Oracle database, an Apache proxy server, or, let's suppose, a Python web application you have to deploy. So when you start deploying applications and services, based on your past experience and also on the vendor's recommendation, you go for a specific instance type. There's no option of customized instance types; there are more than 64 instance types, and Amazon has already created the complete list — you choose one of them. An instance type is nothing; it's the make and model of an instance.
Don't mix it up with Elastic Compute Cloud. Now, why is it called Elastic Compute Cloud? I'll come to that. It provides resizable compute capacity. Resizable, or elastic — what do you mean by that? This means that whenever you deploy an instance with a specific instance type, it can be changed afterwards. We'll discuss the resizing options later, because otherwise it will confuse you — you'd be merging two concepts together. In terms of resizing options:
We have two options to resize our instances. One is vertical scaling. Vertical scaling means that you can change the instance type. For example, today you deploy the instance with the instance type T2.micro, which offers you only 1 GB RAM. Now you say 1 GB RAM is not sufficient for my needs; I need 4 GB. So what you can do is change the instance type of the instance. Of course, you need to stop it first; we'll discuss the process afterwards. Let's suppose I want 4 GB RAM: I go with T2.medium, which gives me 4 GB RAM. So in this case I can change the instance type of the virtual machine and say that I want to change it to T2.medium, which gives me 4 GB RAM.
This is one of the concepts of elasticity. Elastic means that if you choose a make and model — an instance type — you're not fixed to it. It's malleable, it's flexible. That's why it's called resizable. Don't confuse that with the definition of the instance type: the instance type is the make and model of the instance. Because you can change the instance type of an instance later on, these are called Elastic Compute Cloud instances — they're elastic, they're flexible. You can change the instance type, the make and model, and get more RAM and storage afterwards. Okay, this is called vertical scaling.
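The stop-then-resize flow can be sketched as a toy model. This is purely illustrative — with the real AWS API you would stop the instance, change its instance-type attribute, and start it again; the class and RAM numbers below are assumptions for the sketch:

```python
# Toy model of vertical scaling (illustrative only, not the AWS API).
RAM_BY_TYPE = {"t2.micro": 1, "t2.medium": 4}  # GB, illustrative figures

class Instance:
    def __init__(self, instance_type):
        self.instance_type = instance_type
        self.state = "running"

    def resize(self, new_type):
        """Vertical scaling: change the make and model of the same instance."""
        if self.state != "stopped":
            raise RuntimeError("stop the instance before changing its type")
        self.instance_type = new_type

vm = Instance("t2.micro")
vm.state = "stopped"      # must stop first, as noted above
vm.resize("t2.medium")    # same server, bigger make and model
vm.state = "running"
print(RAM_BY_TYPE[vm.instance_type])  # → 4
```

The point of the guard in `resize` is the same as in the lecture: the type of a running instance can't be swapped on the fly; you stop, resize, and start.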
There's a second type of scaling, which is called horizontal scaling. What do you mean by horizontal scaling? Horizontal scaling means: go wide. You widen up. This is the recommended approach — it's what the cloud providers recommend, and it's more beneficial. What do you mean by horizontal scaling? With vertical scaling, you have the same server; you're just changing its instance type, going from a smaller instance type with 1 GB RAM to a bigger instance type with 4 GB RAM — you're going up. Horizontal scaling means you go wide. Overall you still want 4 GB of compute capacity — it's all about compute capacity; we're just playing around with the compute capacity. I repeat: what's compute capacity? RAM, cores, storage, graphics, network performance. Now, I want to have the same 4 GB of RAM, but I don't want to be dependent on just one instance.
Let's imagine on your instance you're running a Python web application — a single application running upon it. The problem with the vertical approach is that you are adding more resources to a single server. What if this single server goes down at any given point in time? Your entire application is running upon this instance, so your entire application is down. You have increased the capacity, you're paying extra for 4 gigs of RAM, but because you're depending on just a single server, if that single server goes down, your complete application is down and you can't do anything about it. So horizontal scaling is the more modern approach; vertical scaling is the more conventional, traditional, and by now outdated one. Horizontal scaling says: you need more RAM — why not have that 4 GB of RAM across four different instances? So instead of having 4 GB RAM in one instance, I have four instances with 1 GB RAM each. Okay.
I distribute the requests coming in — from, for example, a load balancer; we'll discuss that — or the data that has to be processed, between these different instances. Basically, the traffic distribution, the request distribution, is done between them.
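A simple round-robin distribution can stand in for the load balancer here. This sketch is a toy (plain Python, hypothetical server names), just to show how 8 requests spread evenly across four 1 GB instances:

```python
from itertools import cycle

# Toy round-robin distribution, standing in for a load balancer
# spreading requests across four 1 GB instances (illustrative only).
instances = ["server-A", "server-B", "server-C", "server-D"]

def distribute(requests, targets):
    """Assign each incoming request to the next instance in turn."""
    assignment = {t: [] for t in targets}
    for req, target in zip(requests, cycle(targets)):
        assignment[target].append(req)
    return assignment

result = distribute(range(8), instances)
print({t: len(reqs) for t, reqs in result.items()})
# → each of the four instances handles 2 of the 8 requests
```

If one instance goes down, the remaining three keep serving — that is exactly the availability advantage over a single, bigger server.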
Each of these instances processes requests. So basically you're paying the same amount: instead of getting 4 GB in one server, you're getting a total of 4 GB across different servers. Now, the advantage is that you can place these servers, these virtual machines, in different regions or availability zones. For example, let's take this example. The first two servers you can see — server A and server B — are in the region of North Virginia, which is us-east-1, and inside North Virginia they're in different availability zones: one is running in us-east-1a and the other in us-east-1b. The other two instances are running in, for example, eu-central-1, which is Frankfurt in Europe — one in the eu-central-1a availability zone and the other in eu-central-1b. So what I'm saying is, I'm intentionally putting my instances in different regions, and in different availability zones inside them, so they are less prone to failures. I'm intentionally putting them into different locations, and the distribution of the traffic, of the requests, is done between these different instances.
So this is a more modern, smarter approach compared to vertical scaling. You're paying for the same 4 GB of RAM — the compute capacity you have to pay for is the same — but instead of putting it all into just one instance, you're putting it into four different instances. This whole concept is called resizable, or elastic. This is elasticity: you can change the instance type and go to a bigger server, or you can have more servers with the same instance type. That's why it's called Elastic Compute Cloud — it provides resizable compute capacity. What's the compute capacity? The amount of RAM, storage, cores, and graphics you get. You can resize in two different ways: either you go with vertical scaling, where you change to a higher instance type, or you go with horizontal scaling, where you go for more instances. Okay, what's the next thing? The next thing we're going to discuss is called the AMI, or Amazon Machine Image.
Basically, the instance type that I discussed with you is nothing but a physical specification. Now I'm talking about the software. When you use a laptop or a computer, it's hardware plus software, right? The instance type is nothing but the hardware of the instance — even though it's in virtual form, it's hardware. Who decides the software? The Amazon Machine Image. The Amazon Machine Image decides the operating system of the instance — it can be Linux, it can be macOS, it can be Windows — and it also decides the applications you want to run upon it: services, scripts, additional software, and so on. All these things are part of the AMI, and an AMI is nothing but a preconfigured software template which lists all the software components of your virtual machine. It's a preconfigured template — a software template — which consists of the operating system, applications, and services: the resources in terms of software. So it defines the software. Right now, most of you are using a Windows laptop: your Windows operating system, plus the Zoom application running upon it, maybe Outlook, Word, Excel — additional software, some applications you're running on it. If you bundle all of that together, that's what we call an AMI in terms of Amazon Web Services. Okay. Now,
let's do one thing. Let's go back. On the left-hand side you will see something called Images, and under it, AMI Catalog. These are the different categories of AMIs that we can use. Quick Start AMIs are the commonly used AMIs — most companies, most organizations, use them. For example, I can show you the list of them: we have Amazon Linux, macOS Ventura and Monterey, Red Hat Enterprise Linux, SUSE Linux, Ubuntu Server, Microsoft Windows Server. These are different packages — special images with a particular built-in operating system and some additional applications running upon them. These are the commonly used AMIs. You can filter them by "free tier only" — free tier only means they can be used with a free tier account without paying any charges — you can browse by Linux/Unix, and you can also browse by the type of architecture.
My AMIs are the ones you create privately. I'll show you how to create a private AMI, one customized by the customer. We can create our own customized AMIs; those are called private AMIs.
Then there are some AMIs which you can buy from the Marketplace. There are software vendors like Microsoft, SAP, Zend, Cisco, and Juniper, and you can buy AMIs from them. For example, suppose I want to deploy a firewall: the Palo Alto VM-Series virtual next-generation firewall. Firewalls are used to secure infrastructure. Let's suppose I want to deploy a Palo Alto firewall on Amazon Web Services. Palo Alto gives me some details. If I go to the product details — or let's go to the pricing — you will see that the vendor, which in this case is Palo Alto, gives me a recommendation: please use this instance type, C5n.xlarge. Palo Alto is in this case my software provider, so being the vendor, it gives me suggestions. It's saying: please use the C5n.xlarge instance; it has the amount of RAM and CPU on which this firewall can run properly. Okay. Then, if I see the list, it shows me the entire list of supported instance types. You'll see that I can choose any of these instance types, but C5n.xlarge is the recommended one. Fine. This is called a Marketplace AMI. If I want to deploy any such software, I can purchase it from the vendor; I can pay on a monthly basis, or I can go for a software subscription with a one-year or three-year plan.
Community AMIs come from the developer community. They're open-source AMIs, which means that anyone — you, me, any AWS account owner — can publish an AMI to the community. It's an open-source, community-based catalog. Community AMIs are public; therefore anyone can publish an AMI and it will show up in this catalog. Most of them are used for hands-on and research purposes. So these are the different categories of software configurations that you can choose from. You can see the formal definition: an AMI is a template that contains a software configuration, including the operating system, application server, and applications required to launch an instance. So it's the software part of the instance — the operating system, services, applications, and scripts that you would like to deploy upon the instance. Okay.
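The AMI-plus-instance-type split can be sketched as data. Everything below is illustrative — the AMI ID and package names are made up, not real catalog entries:

```python
# Sketch: an AMI as a preconfigured software template, combined with an
# instance type (the hardware side) to launch an instance.
AMI = {
    "id": "ami-example-ubuntu",           # hypothetical ID, not a real one
    "os": "Ubuntu Server 22.04",
    "applications": ["apache2", "mysql-server"],
    "scripts": ["bootstrap.sh"],
}

def launch_instance(ami, instance_type):
    """An instance = software template (AMI) + compute capacity (type)."""
    return {"ami": ami["id"], "os": ami["os"], "instance_type": instance_type}

inst = launch_instance(AMI, "t2.micro")
print(inst["os"], inst["instance_type"])  # → Ubuntu Server 22.04 t2.micro
```

The takeaway: the instance type never says anything about software, and the AMI never says anything about RAM or cores — every launch needs one of each.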
The next thing that we have to understand is how you connect to these instances that you're going to be running. Okay, let's imagine you want to connect to these instances. How do you connect to them? So: connecting to your EC2 instances. Now, when you connect to your instances, you use certain protocols and ports. These are industry standards — they were not invented or made by Amazon Web Services.
So, in terms of connecting: let's imagine this is the AWS cloud, and let's suppose this is a Linux instance. Now you want to connect to it. For example, you're the administrator. You want to connect to the instance from your computer; you want to gain access to it. Now, why would you do that? Because as an administrator, you need to get to the root, to the operating system, of the Linux instance. You want root access, operating system access, shell access. So as an admin, you need to get root, shell, or OS access — you want to get to the core of the server. Why would you do that? Because as an administrator, you have to start with the installation of software: you have to install some business applications upon the instance. You need to configure things. You need to do something called patching — patching means running security updates — or deploy additional software; maintenance. You want to do a lot of things, and for that you need to connect to the server remotely. Why remotely? Because this instance may be running, for example, in California, while you're in Hyderabad in India, or maybe in Dubai. So basically you're trying to connect over the internet, from a far, distant location, and you want to gain root access, shell access, or OS access.
In this case, if it's a Linux instance, you're going to use a particular protocol. These are industry-standard protocols and ports, designed and invented a few decades back. If you go to Google and type in "protocols and ports," you can see the list of them; they're used for communication. So the default protocol that you're going to use is SSH, and SSH uses a dedicated port number, which is 22.
It's a dedicated port number. SSH stands for Secure Shell. What is this? It stands for Secure Shell. So using the Secure Shell (SSH) protocol, and port 22 which it uses, you gain access to the instance — you connect to it, you access the root, the operating system, of the server, so that you can perform your day-to-day job: installation of software, configuration, patching, maintenance. So as an administrator, you'll be getting access to the instance. You're the internal administrator, the IT administrator — I would say an employee in one of the IT verticals in your organization. You could be an architect, a Linux administrator, a database administrator, a developer, a quality analyst. You need to get access to the instance from your end.
Okay. Now, for authentication purposes we use a certain — okay, I think this part is understood: we use SSH for Linux. Now, what if you run a Windows instance instead of a Linux instance? The operating system of this one is different. Instead of running a Linux instance, you tend to run a Windows instance. This is the AWS cloud that you have, and this is the Windows instance. Now, you want to gain access to the Windows instance from your end. Fine. For this type of instance, the protocol that we use is the Remote Desktop Protocol — it is called RDP in short form — and it uses a dedicated port number, which is 3389.
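The mapping just described — OS family to default remote-access protocol and port — can be written down directly. These are the industry-standard defaults from the lecture, wrapped in a small illustrative lookup:

```python
# Industry-standard remote-access defaults, keyed by instance OS family.
ADMIN_ACCESS = {
    "linux":   {"protocol": "SSH", "port": 22},
    "windows": {"protocol": "RDP", "port": 3389},
}

def connection_target(os_family):
    """Return the protocol and port an admin would use for this OS."""
    entry = ADMIN_ACCESS[os_family.lower()]
    return entry["protocol"], entry["port"]

print(connection_target("Linux"))    # → ('SSH', 22)
print(connection_target("Windows"))  # → ('RDP', 3389)
```

These port numbers are also what you'll open later in the security group rules for the instance.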
Okay. So as an administrator, you want to get to the Windows operating system, to the root of the Windows instance — to get server access. In that case, for Windows, we use a different protocol: the Remote Desktop Protocol, RDP, which uses port 3389. Fine. So, depending on the type of instance that you're going to be running, as an administrator you'll be using different protocols and port numbers behind the scenes. These are industry standards, and they have been in use for many decades. These are the standard protocols and ports we use to connect. Fine. So this is what you do.
Now, the next thing that comes into this entire conversation is authentication. Authentication means: I want to confirm, I want to ensure, that this administrator is the right person who is trying to connect to my instance — that it's not some hacker or unauthorized person who's trying to get into my instance and gain access to it. I want to ensure that only the authorized person can get access to my instance. So you need to authenticate. Whenever you use your PINs, OTPs, IDs, and passwords — why do we use them for banking transactions, or to log into our smartphones and devices? Why do we use user IDs and passwords? For the reason that only the authorized person is able to unlock the device, and only the authorized person can make a banking transaction from a specific account. Right? So authentication is quite important, because I want to make sure that no external user who is not authorized can get access to my server — only my known people can access my instances, my servers. So what's the methodology we use in this case? The method that we use is called a key pair. Okay, it's called a key pair.
In a key pair, there are two keys. One key is stored on the admin's computer; it's called the private key. It is saved on the administrator's computer — the administrator owns a computer, and this private key is saved over there. The Linux instance over here will have the public key. Fine. So the private key is stored on the computer owned by the administrator, and the public key is stored inside your Linux instance — or your Windows instance; the mechanism, the concept, is the same in terms of authentication. So my instances will have the public keys stored inside them. Okay. Now, when the administrator tries to connect to any of these instances, at the time of connecting the administrator has to provide the private key. The administrator says: okay, I am providing the private key; please match it with the public key. So the administrator has to provide it. Fine. At the time of connectivity, the administrator provides the private key, which is matched against the public key. If the match is perfect, then the administrator is able to get inside the instance.
You must have seen this kind of thing when you operate lockers in a bank. For example, you have a locker in Indian Bank, Bank of Baroda, Andhra Bank, or Punjab National Bank. People generally own lockers in these banks to store their valuable items, and there are two keys needed to open a locker: one key is with the bank official, the other key is with you. So, for example, you want to open your locker: you go to the bank, you sign the register, and you're escorted to the locker room along with the bank official. You use your key to unlock your lock, and the second lock is unlocked by the bank official, who has his or her own key. So there are two keys used to open the locker: one key is with you, the other is with the bank. The same concept applies over here, to make sure that only the authorized person is able to unlock the instance and get inside it. The private key is stored on the administrator's computer, and the other key is stored inside the instance. At the time of connectivity, the private key has to be provided, and if it matches the public key perfectly — if their contents align with each other — then the administrator is able to successfully get remote access to the instance.
Fine. So this is the mechanism we use for authentication. The concept is: we try to access the instance remotely, and only the authorized person is allowed to get in. The administrator stores the private key on the PC, the local computer; the public key is stored inside the Linux or Windows instance. The administrator is able to get access to the instance by providing the private key at the time of connectivity.
Okay, let me just put it over here so that there's no confusion. This is the same AWS cloud we have, and this is the EC2 instance. It can be Windows or Linux; it doesn't matter. This instance has the public key stored in it. You are the admin. You have a local machine, a PC, and you have a private key stored inside it — you have the copy of the private key. Now, at the time of connecting — because you want to connect — for authentication reasons, you have to provide this key. Once you provide the key, the contents of the private key are matched with the public key. If they align with each other, then you're able to successfully connect to the instance. Let's suppose you're using the wrong private key: in that case, it disconnects you, it stops you from connecting to the instance, because you're using the wrong private key. So you have to provide the right key at the time of connectivity, to establish that you're the valid user, the administrator, who's trying to connect. Okay. So this is the mechanism used to authenticate — authenticate means proving that you're the right person to connect.
All right. So we discussed the protocols and ports we're going to use to access the instances, the servers, and for authentication we use the key pair methodology. What's that called? Key pair. Why is it called a key pair? One key is with you, the other key is in the instance. That's why it's called a key pair. All right, this is what we have to use.
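The matching step can be illustrated with a toy sketch. A big caveat up front: real SSH key pairs use asymmetric cryptography (the private key signs a challenge that the public key verifies); the hash comparison below merely stands in for "the provided private key corresponds to the stored public key," and all the key strings are made up:

```python
import hashlib

# Toy illustration of key-pair authentication -- NOT how SSH actually
# works internally; a hash stands in for the public/private relationship.
def derive_public(private_key: str) -> str:
    return hashlib.sha256(private_key.encode()).hexdigest()

stored_public_key = derive_public("admin-private-key")  # kept on the instance

def try_connect(provided_private_key: str) -> bool:
    """Connection succeeds only if the provided private key matches."""
    return derive_public(provided_private_key) == stored_public_key

print(try_connect("admin-private-key"))  # → True  (authorized admin)
print(try_connect("guessed-key"))        # → False (wrong key, rejected)
```

Just like the bank locker: only when both sides of the pair correspond does the lock open.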
Okay. So those are exactly the concepts. All right. We can get started now with the hands-on, and this is the AWS console that you'll be using to perform all your deployments — every single deployment has to be done using the AWS account. Now, to get started, first things first: if by mistake you have gone to some other location or service, you can just click on the AWS icon on the left-hand side and you'll be back on the homepage.
Okay. And once we're back on the homepage — I'm just turning off the chat box so that we can start now — we need to start with the basics, which is getting started with Amazon EC2 Linux instances. The very first thing is that we have to go to the EC2 dashboard. Now, how do you go to the EC2 dashboard? You will see a list of services on your homepage. Go to the search menu at the top; you can also see the list of services in the recently visited section. Okay. Now, if you don't see the list of services, you can click on "View all services," and you can see more than 100 services — these are the services available on AWS. On the left-hand side there is also a kind of grid: you can go to Services and see the list of services grouped into different categories. But it's very tough to find your specific service in that list, so use the search menu at the top: you can search for the exact service you're looking for, and it will show up. So, for example, I click on EC2 right now, and this just takes me to the EC2 dashboard.
Okay. So we are on the EC2 dashboard. That was step number one; now let's do step number two. Once you reach the EC2 dashboard, you land on a page from which you can deploy your instances — and also other resources like load balancers, Auto Scaling groups, EBS volumes, and so forth. There are a lot of things we can deploy, but right now we need to deploy an instance, so we'll focus on that. One thing you need to understand, though: once you go to the EC2 dashboard (or any service) — you can do this before or after — you need to switch to the region of your choice. On the right-hand side you can see a list of regions. These are the regions you have access to for deploying your instances, and our main purpose right now is to deploy an instance. You can choose any region you like, but I strongly suggest you use North Virginia. People ask me why North Virginia specifically: because a few regions have issues with free-tier resources. Right now you're using a free-tier account, which consists of certain free resources — for example, the t2.micro instance type. Mumbai, for example, doesn't always have sufficient t2.micro free-tier capacity, so sometimes you may bump into an issue where you try to deploy a free instance in Mumbai and get an "insufficient capacity" error message. A few regions are problematic in that they don't give you enough free resources when you want to launch them. Using North Virginia practically guarantees you'll never face a shortage or scarcity of resources. So use North Virginia for your hands-on. Reason number two: sticking to a single region ensures you don't scatter your resources unnecessarily across all the regions.
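As an aside, if you ever use the AWS CLI alongside the console, you can pin it to North Virginia the same way. A sketch, assuming the CLI is installed and credentials are already configured:

```shell
# Make North Virginia (us-east-1) the default region for all CLI commands
aws configure set region us-east-1

# Verify which region the CLI will use
aws configure get region
```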
See, what happens is that when you start launching resources in multiple regions, at the time of removing them you sometimes get confused. If you have all the resources in one single location, tracking them and removing them becomes much easier. We're not working in a business environment here. In a business environment everything is well documented: how many instances are running, how many load balancers are running, how many buckets are in each region, how many databases there are — everything is very well documented. Of course, we're not documenting anything in this case; we're just doing the hands-on. So, to make sure you keep proper track of your resources and can remove them when the time comes, stick to only one region. In this case, put all your eggs in one basket — that way, when it's time to break them, you know you only have to drop the one basket. So choose only one region, preferably North Virginia, and you'll never face any issues. Once you choose North Virginia, any instance you launch is guaranteed to be launched inside North Virginia. So: switch to the North Virginia region — that's step number two based on the document — and let me know once you have done this. Now, once you start
launching your resources — for example load balancers, Auto Scaling groups, buckets — then to create one cohesive, robust infrastructure you have to use that region exclusively and deploy all the resources inside it. Fine. So use North Virginia for all your hands-on: this makes sure you don't face a shortage of free-tier resources, and sticking to one region guarantees you don't unnecessarily put resources in other regions, which makes deleting them easier when the time comes. Fine. Okay, so that has been done.

The next thing, so that you become familiar with the entire page: I'll share steps three and four. Steps three and four are based on the documentation — you can refer to the same screenshots in the documentation; it's very easy. In the middle of the page it says "Launch instance: to get started, launch an Amazon EC2 instance, which is a virtual server in the cloud." Click Launch instance. It says "Name and tags." First things first: you can put any name here — let's suppose you call it demo-web-server. Even though it's not mandatory to put a tag — it's purely optional — it's a best practice to tag your servers, because later on tagging helps you name servers, recognize them, drive automation, and extract billing reports. So initiate the launch and put any name there — demo-server, my-server, whatever you want to type in. It's a simple name, just for the sake of naming the instance. Start by launching the instance and tag it now.
Again, tagging is not mandatory, but based on business best practices you should always tag your resources. Okay, what's the next thing? Steps five and six: application and OS images. Once you've put a tag — once you've named the server — the next thing is the application and OS image, in the form of an AMI. You have to decide the software configuration of this instance. In this case we're choosing an operating system: under Quick Start you have Amazon Linux, macOS, Ubuntu, Windows, Red Hat, Debian — those are the few choices offered. You can click "Browse more AMIs" to look for more, or open the drop-down menu to see the entire list. (By mistake I cancelled that screen just now.) What you have to do is simply go with the default, which is Amazon Linux: Amazon's own proprietary Linux-based operating system — its own Linux distribution, built and developed by AWS.
It says "Amazon Linux 2023 AMI," and it's free tier eligible. Whenever you see the label "free tier eligible," it means it's free of cost — you don't have to pay a single penny for it. There are some details you can see; most importantly, look at the AMI ID. What is the AMI ID? It's nothing but the auto-generated identification code for this image. It can be the same for all of us, or different based on region — so there's a high probability that if you're all using North Virginia, the Amazon Linux AMI ID is the same for everyone. The AMI ID is an auto-generated value, and when we start using templates — for example, CloudFormation to automate infrastructure — we have to provide these AMI IDs. If you don't use the GUI (the graphical console we're using right now) but instead use automation or the command-line interface, you have to specify the AMI by its AMI ID. So right now the AMI ID isn't critical for us, but for future reference it's important to understand that every image has a unique AMI ID assigned to it: when you start automating processes and creating infrastructure templates, these AMI IDs have to be provided. What's next? You don't have to change the architecture: we'll go with 64-bit (x86), which supports a lot of applications. (You could also go with the Arm architecture, but we'll stick with 64-bit x86.)
Instance type, as we discussed earlier, is the hardware specification of the instance — the RAM, CPU, storage. You can see a bunch of them, but we're going with t2.micro, which is free of cost: you can see it's a free tier eligible instance type, so we don't have to pay for it. There are other micro instance types (t1.micro, t3.micro), but we'll accept t2.micro as the default. So basically, just look over the details: don't change the AMI — go with the Amazon Linux 2023 AMI — and choose the t2.micro instance type.
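For reference, the same choices map onto the AWS CLI like this. A sketch only — the AMI ID below is a placeholder, since the real ID is region-specific and should be copied from the console:

```shell
# Launch one t2.micro instance from an Amazon Linux 2023 AMI.
# ami-xxxxxxxxxxxxxxxxx is a placeholder; copy the real AMI ID
# for your region (us-east-1 here) from the EC2 console.
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.micro \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo-web-server}]'
```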
What's the next thing? The key pair login — I'm talking about step number seven based on the documentation. As per step seven, what you're going to do next is click "Create key pair." This key pair is the same concept we discussed. Which concept? This one: the private key. It's stored on your computer and has to be provided at the time of gaining a connection to the instance, while the instance has the public key embedded — it stores your public key. At connection time you provide this private key to your instance, and if it matches — if it aligns with the contents of the public key — then you're able to get access to the instance. So, to create a key pair — you can see it says you can use a key pair to securely connect to your instance — click "Create new key pair" on the right-hand side. It asks for a key pair name; you can put any name, it doesn't matter what you choose — for example demo, or let's suppose wp-kp. In my case I'll name it something like intel-server-kp; I have a lot of keys stored, so I'm using a slightly complicated name just to avoid duplication. You can put any name in the key pair field. Now, the thing is:
there are two standards we use for the key pair type: RSA and ED25519. We always use RSA, because RSA can be used for both Linux and Windows. As for the private key file format, we won't change it — this is something I'll discuss with you tomorrow, because the format changes based on the type of client software you'll be using. By "format" I mean the file extension of the key: just as images have formats like JPEG or PNG, documents have .docx, spreadsheets have .xlsx, and presentations have .pptx, the key comes in a format too — .pem or .ppk. So basically, all you have to do is click "Create new key pair" and name the key pair; that's it. We never change the key pair type — the industry standard is RSA, which can be used for both Linux and Windows instances — and for now, don't change the private key file format. What's next? When you create the key pair, it will be saved on your computer: if you go to your Downloads folder, you'll see the key pair saved there. This is the private key, in PEM format — the .pem key you've downloaded. You'll need to provide this key later on when you want to connect to the instance. Fine.
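The console download can also be reproduced from the CLI. A sketch — the key name here is just the example from above, not something the course requires:

```shell
# Create an RSA key pair and save the private half as a .pem file
aws ec2 create-key-pair \
  --key-name demo-kp \
  --key-type rsa \
  --key-format pem \
  --query 'KeyMaterial' --output text > demo-kp.pem

# SSH clients refuse keys that other users can read, so lock it down
chmod 400 demo-kp.pem
```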
Okay. What's next? Network settings — steps eight and nine. In steps eight and nine, what we're doing next is going to Network settings. Under Network settings you don't have to change anything at the top. The concept here is called VPC, the Virtual Private Cloud. What is it? We'll discuss this in depth later, because there's a complete section dedicated to VPC. A VPC is your own virtual data center on the cloud: in this case, whatever you launch goes inside your own virtual private cloud — that's what VPC means. AWS says: we give you your own virtual data center, your own private cloud, and when you deploy resources into that cloud, those resources are not visible to other customers. So if I deploy my resources in this VPC, you can't see them, because the VPC is owned by me — basically, it protects my resources. VPC stands for Virtual Private Cloud; it's the virtual data center we get, inside which all our servers are deployed, and it gives us security and privacy. I'm not touching the networking components for now, because there's a complete section dedicated to that. The important thing you have to do is go to the security group settings. It says we're trying to connect to a Linux instance — we're going to launch a Linux server — and for
Linux, we use SSH to connect. If I go back to the diagram — the drawing I created — you're going to be using SSH as the protocol to connect. That's what you'll use: SSH. Now, you'll be trying to connect from your machine, from your PC, which stores your private key. Your machine is on the internet, or in your data center, or in your company's network, and this PC is getting an IP address. IP addresses are unique in nature: no two public IPs are the same. The IP address is basically what identifies your machine on the internet; your machine gets one IP, my machine gets a different IP, and no two public IPs can be the same. (I'm talking about public IPs — we'll discuss public IPs properly afterwards.) What I want to say is that only from one specific, unique IP address should someone be able to initiate SSH connectivity. Now, in this case the console proposes "Allow SSH traffic from anywhere," and "anywhere" is 0.0.0.0/0 — this is a security threat. It means any machine on the internet can get SSH access to my instance. You can even see the warning that rules with a source of 0.0.0.0/0 allow all IP addresses to access your instance. So for security reasons, we choose "My IP."
Right now we're choosing My IP. This will pick up the IP address your computer is currently getting, so that only the computer you're working on right now becomes eligible to connect to the instance. You could also choose a custom IP range, but I'll choose My IP. This automatically ensures that only your computer's IP is eligible to initiate SSH connectivity to the instance. Then what about launching a website on this instance? For accessing websites we use two protocols: HTTP and HTTPS. When you go to intellipaat.com, or google.com, or facebook.com, you use the HTTP or HTTPS protocol. So if you're trying to host a website, you can check both of them — HTTPS is the secure version of HTTP, but it needs certain extra configuration. We choose both, because we're going to deploy a website on this instance. So, to summarize: you don't have to change any of the network settings at the top. You have to configure the firewall: you allow SSH only from your computer's IP, because you are the administrator — it's only from your computer that you can access the instance's operating system. And, since you want to deploy a website on the instance, you allow HTTP and HTTPS traffic coming from the internet. What's the next thing? You don't have to do step number 10; go straight to step number 11.
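What the console wizard builds here is really just three ingress rules. Expressed with the CLI — a sketch; the security group ID and the admin IP below are placeholders, not values from the course:

```shell
SG_ID=sg-0123456789abcdef0   # placeholder: your security group's ID
MY_IP=203.0.113.10/32        # placeholder: your computer's public IP

# SSH only from the administrator's machine
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$MY_IP"

# Website traffic from anywhere on the internet
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 443 --cidr 0.0.0.0/0
```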
What I've done is give you three documents, one of which is a script — the name of the document is this one. We're going to use this script to install the website, an Apache web application, on the instance; the script automates the web application deployment. It's a bash script that runs some updates on the instance, installs httpd (httpd is the Apache package), starts the service, changes directory to the HTML folder, and configures the index.html page inside it.
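Based on the steps just described, the handout script looks roughly like this — a sketch, since the exact contents (in particular the HTML line) may differ in the document you received:

```shell
#!/bin/bash
# Runs once at first boot when pasted into the EC2 User data field.
yum update -y                 # apply pending package updates
yum install -y httpd          # install the Apache package
systemctl start httpd         # start the web server now
cd /var/www/html              # Apache's document root (the HTML folder)
# Write a simple index page; the real handout's page will differ
echo "<h1>Hello from my EC2 instance</h1>" > index.html
```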
Right now, you have to copy this entire script as it is, go back to the last category, called Advanced details, and under Advanced details go to the last field, which is User data. You need to paste it under User data. Okay. What's the next thing? Go straight to step number 12: go back, and on the right-hand side you can see a summary table. Click "Launch instance," and the instance will get launched. What's next? The next thing is testing — steps 13, 14, and 15. You need to go back and click on the instance ID.
Now, this is the instance you just deployed. Your instance gets a unique instance ID. There's a checkbox on the left-hand side — select it. Once you make the selection, you'll see a tab called Details. Under Details you have the public IPv4 address (and a public IPv4 DNS name). This is basically what's used to access your instance over the internet — if you're using the internet as the medium and, let's suppose, you want to access the website on it, or maybe you want to SSH to the instance, you'll be using the public IP address. Public means internet-facing. What to do next: go ahead and copy the public IP — there's a copy icon; click it — paste it into the browser, and you'll get the page displayed. It's the simple HTML page we deployed, and it shows us the result. If you're not able to browse the page, check whether the browser switched to HTTPS by mistake: if it says the page can't be displayed, make sure you're not trying to browse the https:// version — remove the "s" at the end of https — because configuring HTTPS would also require configuring an SSL certificate, and we haven't done that. By default it uses http, but sometimes it flips to https based on browser settings. Okay, so this is a simple website you've just deployed on the instance using the script I gave you, and this is the simple procedure you have to follow along to deploy your instance in the AWS cloud.
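If you prefer the terminal over the browser, the same checks can be done like this — a sketch; replace the placeholder IP with your instance's public IPv4 address and the key file with the one you downloaded:

```shell
PUBLIC_IP=203.0.113.25   # placeholder: copy yours from the Details tab

# Fetch the page over plain HTTP (no SSL certificate is configured)
curl http://"$PUBLIC_IP"/

# Or open a shell on the instance itself with the downloaded key
# (ec2-user is the default login for Amazon Linux)
ssh -i demo-kp.pem ec2-user@"$PUBLIC_IP"
```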
Okay, great. Now, guys, once you're done with this, before you log out, terminate this instance. Terminate means you don't need it anymore — you want to get rid of it. How do you terminate the instance? Either right-click on the instance — you get options to stop, reboot, or terminate it — or select the web server and go to Instance state. Stop means you can come back and restart it; reboot means you reboot its operating system; terminate means you're killing it — you can't get it back. So either right-click and choose the Terminate option, or go to Instance state and click Terminate. It asks "Terminate instance — are you sure you want to do that?"; click Terminate, and this will lead to the termination of the instance. This procedure is laid out in steps 16 and 17 of the documentation.
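The same cleanup can be done from the CLI — a sketch; the instance ID below is a placeholder for the one shown in your console:

```shell
# Terminate the instance so free-tier capacity isn't consumed while idle
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# Confirm the state has moved to shutting-down / terminated
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].State.Name'
```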
Fine. We discussed security groups briefly; now I'll dive deep into that concept and make you understand exactly what the purpose of a security group is. All right. So basically, what we want to understand is that the security of our resources is a priority. When we go to the cloud and start launching our resources — our applications, websites, databases, backend servers, processing servers, whatever we want to launch in the cloud — one of the priorities we have is security. By "priorities" I mean we want to make sure we achieve the same sense of reliability, security, and high availability that we would expect in our own data centers. You're shifting your entire application and services and putting them on the cloud, and now you want the same standard of security, reliability, and performance — you have expectations in those areas. The question that comes into this entire picture is: what security mechanisms can we employ or engage so that our instances and data are well protected? The very first thing to understand, in terms of resource provisioning and security, is that one of the first things we can use is the keys: the private key
and public key. The concept behind the private and public key is that you have the private key saved on a computer, so only the individual who has a copy of that private key is authorized to connect to the instance. "Connect" means getting remote operating-system access: you can get into the machine, get to the shell, get to the operating system, and perform installation, configuration, patching, maintenance — server management with root shell or OS access. If you have the exact copy of the private key saved on your computer and you provide that private key at the time of gaining the connection, then you're authorized to gain that access. That's one of the security mechanisms we use. Fine. The second thing we use is the protocol and port — the second level at which we apply security. The first level is keys; at the second level, we apply security on two protocols, SSH and RDP. Remember: if you want admin-type access to a server or remote instance, for Linux we use SSH, which stands for Secure Shell and uses port 22 — so to get remote access to a Linux server, an SSH (Secure Shell) request on port 22 is sent towards the instance. For Windows we make use of the Remote Desktop Protocol, or RDP, which uses port 3389. So, protocol and port: these are the two protocols and ports we'll use to gain access to Linux and Windows instances respectively — SSH on port 22, and RDP on port 3389. Fine. So you also need to make sure,
because these two protocols give you administrator access — core server access, shell access, root access to your server — that the traffic comes from known IPs. It shouldn't be possible for just anyone to send SSH traffic to the Linux instance and get access to it; these two types of requests, or protocols, need to be restricted. That means only a handful of people I'm aware of should be able to initiate these connections towards my instances. Let's suppose there's a hacker who wants to get inside my server, and he or she tries to use SSH or RDP: because their IP address is not in the list of IP addresses I've allowed, they'll be blocked immediately. So, apart from the private key, I also make sure that the SSH and RDP traffic comes from known sources, from known people. That's where the security group comes into this entire conversation.
What is a security group? Let's try to understand — let's dive deep into this concept. A security group is nothing but a virtual firewall: a virtual firewall that restricts the incoming and outgoing traffic. In other words, a security group is a virtual firewall through which you can impose restrictions on the traffic that's coming into, or going out of, the instance. What do I mean by that? So, for example, this is my AWS cloud,
right? This is the Amazon Web Services cloud, and this is an EC2 instance that we're running — the instance we're working on. Fine. Now, I want to impose restrictions on any traffic that's hitting my instance. If traffic is coming from outside and reaching the instance, then from the instance's perspective, what type of traffic are we dealing with? It's incoming; it could be called inbound; or it could be called ingress. Three terms, one meaning: incoming traffic, inbound traffic, ingress traffic — the traffic that's coming into the instance. The traffic comes from outside and hits the instance. Think from the instance's perspective: if you are the instance and traffic is coming and reaching you, then from your perspective that traffic is the incoming, inbound, or ingress traffic. These are three words with the same meaning. And what if the traffic is leaving the instance — the instance is giving a response back to the user, and the traffic is being sent out from the instance? That's the outgoing traffic: outgoing, outbound, or what's called egress. The instance is responding, and the traffic is going out. So these are the two directions of traffic we want to restrict. A security group, then, is a virtual firewall that performs restrictions at two levels: number one, incoming; number two, outgoing.
Fine. We need to perform restrictions, and to perform restrictions we use three filters. What are those filters? Think of the filtration experiments from our school days — sedimentation, filtration. We'd have a glass beaker and a piece of filter paper; the water would have a lot of mud and sand particles in it, and if you poured it through the filter paper, the sand would be filtered out and the pure water would go into the beaker. We apply the same kind of filtration process here: we filter through only the traffic we want. There are three filters we apply. Number one, the very first filter, is the protocol; the second filter is the port; and the third filter is the IP address. These are the three filters we make use of. Based on these three filters, we restrict the traffic that's coming in or the traffic that's going out.
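One concrete rule shows all three filters at once. In CLI terms — a sketch; the group ID and source address are placeholders — each flag corresponds to one filter:

```shell
# filter 1: protocol  -> tcp
# filter 2: port      -> 22 (SSH)
# filter 3: IP source -> one single /32 address
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.10/32
```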
Okay: protocol, port, and IP address — those are the three filters I'll make use of. I'll give you some examples; try to understand them. I'll give you three scenarios. The first two scenarios are simple; then I'll merge the first two and put them into a bigger picture, a bigger perspective, in a third scenario. Let's start with scenario number one. Try to understand it — it's very simple, and even though it's a simple concept, it's equally important.
Scenario number one: let's suppose I want users to access my web application — you want to allow your end users to access your web application. That's scenario number one. What do I mean by that? Let's put it into a scenario. Yesterday we deployed a website on an instance, and that website was accessed using its public IP. If you had given that public IP to any other person — say, posted it in the chat box — the other person would have been able to gain the same access to your website. Okay.
So what is happening behind the scenes? We'll take the same example. Behind the scenes we have the AWS cloud, and in it an EC2 instance on which we'd like to deploy an Apache/PHP web application — that's the application running on the instance. Now, I have customers who want to access my website; these are my end users, and they need to reach my site. What they'll be doing is sending two types of requests. The first type is the HTTP request: HTTP uses port 80. These are standard ports, already set for us — we just use them. Now, if you want secure access to the instance, we use HTTPS — the S stands for secure; it gives you secure HTTP access — which uses port 443. So if you want to allow the front-end users — the customers who go to your website, see your products and services, buy items, stream videos — these are the two protocols and ports you want to allow. And once the instance receives the incoming request, it sends a response back to the end user, in the form of HTTP or HTTPS, outbound — that response is sent back to the customers. Fine. So these are the two levels at which we apply the protocols and ports. What about the IP addresses? See, I don't want to restrict anyone from
accessing my website. Today, for example, intellipaat.com has its website running; you have facebook.com, google.com, instagram.com, microsoft.com. Anyone, from any country and any location, can access these websites. How? Because in that case we define the source IP — where the traffic is coming from — as 0.0.0.0/0. This means any IP, from anywhere: any customer, whatever device they're using. Your smartphones, tablets, computers, and laptops all get an IP address. (For the people from a non-technical background: either run ipconfig in a command prompt, or just go to Google and type "what's my IP" — it will show your computer's IP address, exactly the one your machine is getting.) Whether you use your smartphone to access the internet, or a computer, a laptop, or a tablet — whatever device you run — if it has internet connectivity, it gets an IP address. Fine. Now, when I put in 0.0.0.0/0, it means: no matter which IP the request is coming from, allow those users to access my website by sending HTTP or HTTPS requests. So anyone is welcome; you don't want to restrict anyone. (If it's an intranet site — your company's internal website used only by internal employees — that's another matter.) But websites that are public in nature — amazon.com, ebay.com, flipkart.com, facebook.com, microsoft.com, intellipaat.com —
they have to allow the incoming http and https request from all the sources. It doesn't matter which IP address your
computer is getting from your computer. You should be able to send these type of request to a certain uh web application.
Okay, that's the incoming request. What about the outgoing? As outgoing, we send back the HTTP or HTTPS response. For outgoing we're saying the destination IP is any IP — 0.0.0.0/0 again. Send it back to anyone; there's no filter I'm applying. So basically the request can come from anywhere, and the response can be sent out anywhere. In terms of IP addresses I'm not applying any restrictions, and that is exactly my requirement.
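To make this concrete, here is a rough sketch in Python — not an AWS API call, just illustrative dictionaries modeling the open inbound rules of a public website. The example IPs are documentation addresses, not real customers.

```python
import ipaddress

# Illustrative only: security-group-style inbound rules for a public website.
# Any source IP (0.0.0.0/0) may send HTTP (80) or HTTPS (443) requests.
inbound_rules = [
    {"protocol": "tcp", "port": 80,  "source": "0.0.0.0/0"},   # HTTP from anywhere
    {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},   # HTTPS from anywhere
]

def is_allowed(src_ip: str, port: int) -> bool:
    """Return True if a request from src_ip to this port matches a rule."""
    ip = ipaddress.ip_address(src_ip)
    return any(
        rule["port"] == port and ip in ipaddress.ip_network(rule["source"])
        for rule in inbound_rules
    )

# 0.0.0.0/0 matches every IPv4 address, so any customer can reach the site.
print(is_allowed("203.0.113.7", 443))   # True
print(is_allowed("198.51.100.9", 22))   # False: no rule opens SSH
```

Notice that a port with no rule at all, like SSH on 22 here, is denied by default — that is exactly the behavior the next scenario builds on.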
If I only allowed certain IP addresses to reach my website, the rest would be blocked automatically, which is not a favorable thing for public websites. Fine. So these are the three filters we have applied here. Let's go with scenario number two.
Scenario number two. In scenario two we want to make sure that we perform restrictions for admin access, right? We want only authorized people to get access via SSH or RDP. So in this case, for example, here is the same AWS cloud that is operating right now, and let's imagine I have two instances in it: one Linux instance and one Windows instance. Now I want my internal IT administrators to be able to SSH in and get access to the operating systems — the root access, the admin access. See, right now, suppose I go to intellipaat.com. I send an HTTPS request to intellipaat.com and I see the website — but that doesn't mean Intellipaat is giving me access to the server on which this website is deployed. The website must be deployed on a Linux server, maybe a Windows server; that server has a shell, an operating system, everything. So intellipaat.com is giving me access to the website, but not to the server or the operating system on which this website is running as an application. Fine. I can only see the front end; I can't see the back end. You must have heard of this in full-stack web development or website deployment: there's front-end and back-end development, right? The front end is what we get access to; at the back end, the server is running the application. So the front end is given to the users. I am a customer of Intellipaat, and intellipaat.com says: we give you the front-end access, but the back-end access stays with our internal IT people, the software architects. Right? So we get the front-end access. What about the administrator access to the server — access to the operating system shell, server management — so that you can perform day-to-day patching, upgrades, deployments and so forth? Having said that, let's imagine this
is my office building — my on-premise branch office. Inside this branch office I have a bunch of employees who fall into one of these categories: solutions architects, administrators, developers. They need access to the server on a daily basis, frequently, whenever they want. Let's talk about the Linux server first. They will initiate access to the Linux server with an SSH request, which uses port 22. They should be able to send the SSH request to the Linux server and the connection will be established. The question that arises here is: from which IP addresses will they initiate it? From a business point of view, you should never allow SSH requests from any IP. Instead, you should allow them only from known IP addresses. For example, my branch office is assigned the range 54.78.1.0/24. The /24 means it's a block of 256 IPs: 54.78.1.0, 1.1, 1.2, 1.3 and so on. So it's a block of 256 known IPs, and I want that only these 256 IPs, which belong to my branch office, can access my Linux server. Any IP falling in this range — 54.78.1.0 up to 54.78.1.255 — can initiate SSH access to my Linux server; the rest of the IP addresses will be blocked automatically. So any machine in my branch office that falls in this range of IPs will be able to initiate a connectivity request to the Linux server via SSH.
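The /24 arithmetic just described can be checked with Python's standard ipaddress module, using the example range from above:

```python
import ipaddress

# The branch-office range from the example: 54.78.1.0/24.
office = ipaddress.ip_network("54.78.1.0/24")

# /24 leaves 8 host bits, so the block holds 2**8 = 256 addresses:
# 54.78.1.0, 54.78.1.1, ... up to 54.78.1.255.
print(office.num_addresses)          # 256
print(office[0], office[-1])         # 54.78.1.0 54.78.1.255

# Only machines inside this block may initiate SSH (port 22):
print(ipaddress.ip_address("54.78.1.17") in office)   # True  -> allowed
print(ipaddress.ip_address("54.79.1.17") in office)   # False -> blocked
```

This is the same membership test a security group effectively performs when it decides whether an SSH request's source IP falls inside the allowed CIDR block.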
I can also apply the same thing for my administrators. These are the admins in the same branch office; they can send RDP requests, which use port 3389, towards the Windows instances. So in this case what we do is restrict these two protocols and ports: SSH uses port 22, RDP uses port 3389. These are restricted protocols and ports which should be allowed only from a bunch of known IPs — you can't allow access from any IP for these two. That's how you perform the restrictions, so that SSH and RDP are allowed only from known IPs, while for HTTP and HTTPS you can allow the traffic from anywhere. Okay, let's merge these two scenarios together.
So, building on those two scenarios, let's go with the third scenario. Scenario number three merges admin access and website access: it allows requests to reach my instance at two different levels. You get the admin access and the website access both. That's what we're going to discuss. Okay, having said that: this is the AWS cloud that we have, and inside this AWS cloud we have an EC2 instance running — a Linux instance. On this Linux instance I have an Apache PHP web application. Now, I'll have two categories of users accessing this instance. Category number one: my end users, who will be sending either HTTP requests over port 80 or HTTPS requests over port 443 — from anywhere, from any IP. The second category of users would be my Linux administrators in my branch office. This is, let's suppose, my corporate office, and it is assigned the IP address range 32.15.1.0/24. I have a bunch of Linux administrators sitting in this office, and they need to get access by SSH. So they'll be sending SSH requests over port 22 from this IP address range.
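Scenario three can be written down as data. As a sketch, here are the same three rules in the IpPermissions shape the EC2 API accepts (for example via `aws ec2 authorize-security-group-ingress --ip-permissions ...`); the CIDR values are the ones from the example above:

```python
import json

# Scenario 3 expressed in the IpPermissions shape used by the EC2 API.
# End users reach the front end from anywhere; only the corporate office
# range 32.15.1.0/24 may open SSH to the back end.
ip_permissions = [
    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
     "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP for end users"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS for end users"}]},
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "32.15.1.0/24",
                   "Description": "Restricted SSH from the corporate office"}]},
]

print(json.dumps(ip_permissions, indent=2))
```

The point of the structure: each rule pairs a protocol and port with a source range, so the same instance can be wide open on 80/443 and locked down on 22 at the same time.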
Okay. So I merged the two scenarios together. In the first one, we give HTTP and HTTPS access to the end users; in the second one, we give the SSH and RDP access to the admins. This is a Linux instance on which I'm running a simple web application, so I want to make sure these two different categories of users can access my instance at two different levels: the front-end access is given to the end users, and the back-end access is given to the Linux administrators in my office. Through this method I apply security for my servers, my instances. And of course, these administrators will be using private keys for further security — they will have a copy of the exact private key so they can authenticate that they are authorized users to gain connectivity by SSH. Having said that, this is exactly what the security group concept is all about. Fine. Okay, I'll also show you
the hands-on of this as well. Let me show you the implementation. You can create security groups in two different ways. The first: apply it when you deploy the instance. For example, I go to EC2 and click Launch instance — the step we performed yesterday. If I go straight to the section called "Firewall (security groups)", you can see that a security group is a set of firewall rules that control the traffic for your instance. You can click "Create security group", or select an existing one and see the existing security groups in the list. I click create, and it assigns a default name, something like launch-wizard-56. Here I can say "allow SSH traffic from anywhere" — which is not recommended. For a hands-on that's okay, but from a business point of view it is not. So I can choose a custom source and define, for example, the IP address range of my corporate or branch office; any IP falling in that range would be eligible to send SSH traffic to my instance. Or I can choose "My IP", and it automatically picks the IP address of my computer — see, 103.215..., the same IP address my machine is getting if I check a what's-my-IP site. That means only from this machine can I send SSH requests to my server, my instance. And if I deploy an application, I can allow HTTP traffic from the internet, and HTTPS as well. Okay. So that's the first of the two ways to configure a security group — at the time of instance deployment. The second way: go to the EC2 dashboard and you'll see a bunch of options laid out. Under the section called Resources there's a separate option for security groups; or, in the navigation menu on the left-hand side, you can also see Security Groups. Either of these two places lands you on the same Security Groups page. So I go ahead and click on Security Groups. Fine. I already have a bunch of them. Let's
suppose I want to create a new one from scratch. Even when I create it separately like this, I can still apply it at launch time by choosing "Select an existing security group" and picking it from the list. So I click "Create security group". The very first thing it asks for is the name; I can put in any name here — let's type "Intellipaat Linux web server". This gives me the detailed configuration of the security group rules, which I'll show you. I can put something in the description — for example "allows SSH, MySQL, HTTP and HTTPS access". The description is basically just for documentation purposes: you type in exactly what you're going to achieve through this configuration, right? So you put a name, you put a description. Don't touch the VPC for now; we go with the default value. The VPC concept we'll discuss later on — in short, a VPC is a small, dedicated network for you. We head straight to the inbound rules. I click Add. So, there are two
categories: inbound, for the incoming or ingress requests, and outbound, for the outgoing or egress. I go to the inbound rules and click "Add rule". Once I do, I can add multiple rules there. Let's add the first one. If I go to Type, I can choose from the list — can you see that? It's a complete list of protocols. When I deploy an instance directly, I can only choose SSH, RDP, HTTP, HTTPS; but creating a security group separately like this, I can choose from a lot of protocols in the list. For example, I choose HTTP. Once you choose HTTP, it automatically picks port 80, and in the source it asks from which IPs you want to allow the HTTP access. Is it my IP? You never do that — you never restrict HTTP to a known IP. You choose either Anywhere-IPv4 or Anywhere-IPv6; generally we go with Anywhere-IPv4, which is 0.0.0.0/0. In the description: "allows the end users to access my website". You put a description so the configuration stays presentable and properly documented. When you start configuring things in a production environment, you want everything well documented: if you're adding a rule, you should write down exactly what that rule means and what you're going to achieve with that specific configuration. I can add another rule — for example HTTPS from Anywhere-IPv4, because I want the end users to access my website securely; HTTPS is the secure version of HTTP. Then, for example, I choose SSH from a custom source. I could choose my IP, for example.
Instead, let's suppose I choose a custom range: 54.67.1.0/24. Description: "restricted SSH access from my branch office". So only the people in my office will be able to access the back-end server — the operating system, the root access. Now, I don't think I need to allow MySQL... actually, let's imagine you're running a MySQL database package on this same instance. So I look for MySQL in the list, choose it, and I also want to restrict it, so I use the same range, 54.67.1.0/24. Description: "restricted MySQL access for my database admins". I'm saying that, supposing I'm running a MySQL database on this Linux instance, I want only my database administrators to gain access to it, so I restrict those IPs — MySQL is also a restricted kind of access; you don't allow MySQL from anywhere. That one's just an add-on. So in this setup, HTTP and HTTPS are the open protocols — those ports you unlock from anywhere. Most of the rest are restricted: SSH for Linux, RDP for Windows, even MySQL for databases — these come under the restricted category. Fine. Once I've done that, what
about outbound? Outbound we basically don't change, because the outgoing traffic poses no security threat for us — it's the incoming that poses the security threat, not the outgoing. So the default outbound rule says all traffic: all protocols and ports, to all destinations. If you want, you can apply restrictions there too. Okay.
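For reference, that default egress rule can be sketched in the same data shape as the inbound rules — in EC2 terms, the protocol value "-1" stands for all protocols:

```python
# The default outbound (egress) rule a new security group ships with:
# all protocols, all ports, to every destination.
default_egress = {
    "IpProtocol": "-1",                      # "-1" = all protocols in the EC2 API
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],   # every IPv4 destination
}

print(default_egress)
```

Leaving this rule in place is what lets the instance send its HTTP/HTTPS responses back to whichever customer asked.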
I'll not do that here, though. You can also attach tags to the group, but tagging is not that important in terms of security management.
So you understand what we're doing. This is my Linux web server security group, which allows SSH, MySQL, HTTP and HTTPS access. I have opened HTTP and HTTPS to all IPs — anyone can access my website. But I want to restrict the other two protocols, because they give special, privileged access. The SSH access is there so my Linux administrators can get in and do their work. And suppose I'm running a database — say a Debian Linux instance with a MySQL package on it — my database administrators need access to that database, and for that I open the MySQL port, restricted: "restricted MySQL access for my database administrators". Outbound I haven't touched. All right, that's the configuration I'm applying. I click "Create security group" and it creates my security group, "Intellipaat Linux web server". Now, at the time of instance deployment, I go ahead and apply that same group. For example, under network settings I click "Select existing security group" and look for the one I just created — this one here — and apply it. This is exactly what you need to do. You can create a new group inline at launch, but you don't get many options there — you just get those three quick rules. If you create a security group separately, you can choose from the full set of protocols, and then at deployment time you choose "Select existing security group", search for it in the list, and apply the one you created beforehand.
Fine. I'm going to give you a text document — actually, the exercise is already there in the Google Drive. This is the DIY exercise; DIY stands for "do it yourself". It's a scenario. I've already explained how a security group needs to be configured; now I've laid a scenario down in front of you, and you have to go through it, open your AWS console, and configure a new security group on your own based on that scenario. All right, so let's go through the statement of this do-it-yourself exercise.
Based on what you already know, you have to apply these steps. It says: create a security group based on the following requirements. The first requirement concerns the inbound rules: you want to allow users to access the website running on this instance in such a way that they can access it safely and the communication is encrypted and, quote unquote, secure. It can be a Python website, a Node.js website, an Apache PHP web application — whatever you want to deploy, you want to give them safe access. Okay. So I go to Security Groups: on the EC2 dashboard, click Security Groups under the Resources menu, or find Security Groups on the left-hand side under Network & Security. Either of these two options lands you on the same page. Now create a security group and put any name there — for example "demo-intellipaat-web-sg", where sg stands for security group. Put a simple description for documentation purposes: "allows access for my end users, customers and internal employees". Okay. Let's start with the inbound rules.
Let's start with the inbound configuration. The requirement says: allow users to access the website on the instance in such a way that they can access it safely and the communication is encrypted and secure. Whenever you want to give safer website access, you always use HTTPS — right now, if you go to the Intellipaat site, it's HTTPS only. So you choose HTTPS, source Anywhere-IPv4, and write "allows secure access to my website". What's the next thing? It says: as this is a Windows server, your admins should access it from the branch office — the IP address range is already pasted there in the exercise. So I add RDP — RDP is for the Windows server — and paste that IP range as the source. Description: "restricted RDP access for my internal employees" — the people working for me, the Windows administrators in the branch office.
Okay, so you restrict it. Now the third requirement says this Windows server is also running a MySQL database, and you want to allow access internally from the same branch office plus a bunch of internal private IPs. This can be a bit confusing. The server is running a MySQL database, so obviously you add a rule and allow MySQL access — which uses port 3306 — internally from the branch office. I put the branch office IP range there with a custom source and the description "restricted MySQL access from my branch office". Okay. The next part says: and a bunch of internal private IPs. You must have some internal servers that need to communicate with the MySQL database — maybe some web application servers, some processing servers, which need to send transactions, some data, to the MySQL database. So you copy that second IP range and add a fourth rule: MySQL from a custom source with that range. Description: "restricted MySQL access from some of my internal servers."
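One way to picture the exercise's rule set: the same protocol and port, TCP 3306, allowed from two different source sets. The CIDR values below are placeholders — the real ones come from the exercise document, which only says "paste the IP address here":

```python
import ipaddress

# MySQL (TCP 3306) allowed from two different source sets.
# Placeholder CIDRs -- substitute the ranges from the exercise document.
mysql_sources = [
    "203.0.113.0/24",   # branch office (placeholder range)
    "10.0.5.0/28",      # internal application servers (placeholder range)
]

def mysql_allowed(src_ip: str) -> bool:
    """True if src_ip falls inside any of the allowed MySQL source ranges."""
    ip = ipaddress.ip_address(src_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in mysql_sources)

print(mysql_allowed("203.0.113.44"))  # True: branch-office machine
print(mysql_allowed("10.0.5.3"))      # True: internal server
print(mysql_allowed("8.8.8.8"))       # False: everything else stays blocked
```

This is the mechanics behind "same protocol, multiple rules": each rule contributes one more source range, and a request is allowed if it matches any of them.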
So the point is: you can allow the same protocol multiple times for different sets of IP addresses. If you have different categories of sources that can be grouped together, you put them across two or more rules — the same protocol can appear in multiple rules with different sources. Those are the four inbound rules we have to apply. For outbound, keep the default — don't change anything — then go ahead and click Create. That's how you have to build it; that's the concept. Fine. But there's one important thing I
need to discuss with you before we move forward: the launch template, or launch configuration. I discussed with you the concept of the load balancer and autoscaling. The purpose of having a load balancer is that when you go with horizontal scaling and deploy more instances supporting the same application, it's obvious you need some component — virtual or physical — sitting in the middle, diverting and routing the traffic to the instances and dividing it between them equally. For that we use the load balancer. We discussed the health-check capabilities it has, and its downside: the load balancer has no capability to deploy more instances if there's a scarcity of them. If you lose, say, one or two instances and only one or two are left, the load balancer will divert the entire traffic to the leftover instances — you'll be putting them to exhaustion by sending all that traffic their way. To compensate for that, autoscaling can deploy more instances automatically: it sees the difference between the desired capacity and the actual capacity, and based on that it launches additional instances to compensate. Whatever number of instances the infrastructure is short, it deploys that same number for you. Then we discussed that autoscaling can also increase or decrease the number of instances based on the principle of elasticity — on time-of-day patterns of the incoming traffic demand. It makes use of scaling policies as well, based on metrics like CPU utilization, the number of requests hitting the load balancer, packets coming in, bytes in, bytes out — these can be the parameters on which autoscaling increases or decreases the instance count. Autoscaling can launch instances, which comes under the category of a scale-out action, and it can terminate instances, which comes under scale-in. In both cases, based on the metrics we've defined, or the time of day, or certain dates, it automatically scales out — launches instances, adds them — or scales in — terminates instances, subtracts them from your infrastructure. Now, all these operations happen
within the autoscaling group, which is nothing but a logical collection of identical targets, identical instances. You group these instances, these targets, together in a single group. Right? I gave the example: the load balancer is getting a lot of traffic, and you choose the metric "request count per target" — say, more than 100 requests. If on average the load balancer is sending more than 100 requests to every instance in the group, autoscaling can add more instances; and once the request rate drops below a certain level, it can terminate the additional instances that are no longer required. So based on such metrics, autoscaling functions for you and automatically increases or decreases the number of instances. We discussed all that. The thing we still need to discuss is this: suppose autoscaling has to initiate a scale-out event — it has to launch instances; say it has to launch 10 instances for you. The question that comes into this entire conversation is: if autoscaling has to initiate a scale-out event and launch instances, how will it know the software and hardware configuration of those instances? There must be something, right? There are certain parameters, certain properties, we need in order to deploy an instance. So autoscaling has decided, as a service, that it needs to initiate a scale-out action and, based on that event, launch instances inside this autoscaling group.
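A toy version of the scale-out decisions described above, with illustrative numbers: autoscaling compares the desired capacity with what is actually running and launches the difference, and a metric like "requests per target" is what pushes the desired capacity up in the first place.

```python
def instances_to_launch(desired_capacity: int, running: int) -> int:
    """How many instances a scale-out event must add (never negative)."""
    return max(desired_capacity - running, 0)

# Two instances failed out of a desired four -> launch two replacements.
print(instances_to_launch(4, 2))   # 2

# Metric-driven example: the load balancer averages more than
# 100 requests per target, so the scaling policy triggers a scale-out.
requests, targets, per_target_limit = 900, 6, 100
need_scale_out = requests / targets > per_target_limit
print(need_scale_out)              # True: 150 requests per target
```

The real service evaluates this continuously against CloudWatch metrics; the arithmetic it acts on is essentially the desired-minus-actual difference shown here.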
So here, for example, is my autoscaling group — a collection of identical instances — and autoscaling has to launch some additional instances inside it. But the question really is: who decides the AMI, the Amazon Machine Image? Who decides the instance type, the key pair, the security group, and so on? You also have storage settings — EBS or EFS — that need to be configured. These are the basic properties, the basic parameters, that autoscaling needs to refer to in order to deploy identical instances inside the autoscaling group. So what you do is put this entire list of properties into a template, a document. That document is called a launch configuration or a launch template. You can use either of the two; they serve the same purpose, although the launch template gives you some more advanced options. These are the documents listing the parameters — the properties an instance requires — which autoscaling needs in order to deploy identical instances into the group. We want every instance in the autoscaling group to have the same AMI, the same instance type, the same key pair, the same security group, the same storage settings. So we list exactly those parameters, and autoscaling refers to them: it uses the document — the launch configuration or launch template — and based on it initiates the scale-out event and deploys identical instances with exactly the AMI, instance type, key pair, security group, whatever properties you defined. Fine. So the launch template or launch configuration needs to be configured first, before you can put autoscaling into action. In the hands-on we're doing, we'll be using a launch configuration; launch templates we'll discuss afterwards. When you start configuring a launch configuration, AWS gives you a notification saying "use launch templates, they are recommended" — and yes, launch templates are recommended over launch configurations, but you can use both, and both work perfectly fine. In launch templates you can additionally create versions and revisions; that's the advantage you get. We'll discuss launch templates as a separate topic. So in this hands-on we'll use a launch configuration for autoscaling.
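As a sketch, here are the properties such a document captures, laid out in the LaunchTemplateData shape the EC2 API uses. Every value below is a placeholder, not a real resource — the point is only which parameters autoscaling reads when it launches identical instances:

```python
# The properties a launch template / launch configuration captures.
# All values are placeholders; autoscaling refers to one document like
# this so that every launched instance comes out identical.
launch_template_data = {
    "ImageId": "ami-0123456789abcdef0",        # which AMI (placeholder id)
    "InstanceType": "t2.micro",                # hardware configuration
    "KeyName": "my-key-pair",                  # key pair for SSH access
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    "BlockDeviceMappings": [                   # storage settings (EBS)
        {"DeviceName": "/dev/xvda",
         "Ebs": {"VolumeSize": 8, "VolumeType": "gp3"}}
    ],
}

print(sorted(launch_template_data))
```

With launch templates, this document can additionally be versioned, which is the advantage mentioned above over the older launch configuration.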
Fine. Public image. Okay, what's a public image? A public image is one which is already available to all of us; it's common to us. For example, if I go to the EC2 dashboard and open the AMI catalog: quick start AMIs, marketplace AMIs, community AMIs. These are public images, which means that if you go to your own EC2 dashboard and access the AMI catalog on the left-hand side, these AMIs are exactly the same. It's a public offering, I would say; everyone can use it. If you go to the quick start AMIs where you deploy the instance, the Amazon Linux AMI is the same for all of us. There's no difference, right? You can use the Amazon Linux 2023 AMI and I can also use it, so it's public in nature, which means that AWS offers these AMIs to all account owners. Now, public AMIs are fine for hands-on work, development, and testing, but they're not a great fit for business applications or business use cases, because businesses will have their own customized images with their own software built in, or, I would say, their own packages. I'll give you one example. Let's imagine you take an EC2 instance, right? This EC2 instance is launched using a public AMI. For example (I'm sorry, let me just remove the annotation), let's imagine I use one of the public AMIs in the list. Let's suppose I pick Ubuntu 22; yeah, this one, Ubuntu Server 22.04. Now, if you look in your quick start AMIs, this is the same AMI; you can see there is no difference. This is a basic, plain AMI which is visible on all accounts. So, for example, I launch this instance using the Ubuntu 22.04 public AMI. It has not been customized based on business requirements; it's a plain image, a plain operating system. Okay. Now what you do is SSH to the instance, or run a script, and say: all right, let me install, for example, WordPress 5.1.1, MySQL 6.0, and an Apache PHP application. You install these packages and you customize them. Basically, you're trying to launch a web application, so you customize these packages based on your business: you customize the WordPress site and the MySQL database, you have your entire data in that database, you run the Apache PHP application. So you install all these bundles on the instance, where they will live within the instance; you customize all the software and your apps based on your business requirements; and then, as a final product, you fetch a private image out of it.
Now, once you get this private image, where does it show up? Let me show you. It shows up under this category, My AMIs. It says owned by me: an image which a business creates for its own needs, because a business will have its own AMI. The advantage you get is that once you have this AMI ready, you don't have to manually log into every instance, every Ubuntu instance, and install the same software again and again. This private image is stored in your My AMIs space, and from this private image you can launch identical instances, right? And these instances can be of any instance type. So you can deploy identical instances, thousands of them, of any instance type, and all these instances will have all these applications pre-installed: WordPress, MySQL, Apache, PHP, already customized. So you don't have to repeat the process again. You understand the point? You take the public image, the basic common image, deploy the instance, install all the customized software, business apps, and services on the instance, and once you're ready, take an image from that instance. That image will then help you launch identical instances with exactly the same software. All of this is carried over, encapsulated, packaged inside the private image. It's packaged automatically. Okay, this is what you have to understand.
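The console flow described above maps onto a few AWS CLI calls. This is only a rough sketch: the AMI IDs, instance ID, and names below are placeholders, not values from the course.

```shell
# 1. Launch a builder instance from a public AMI (all IDs are placeholders).
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t2.micro \
  --user-data file://bootstrap.sh      # installs and customizes the apps

# 2. After customizing the software, bake a private AMI from that instance.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-business-image" \
  --description "OS plus pre-installed, customized application stack"

# 3. From now on, launch identical instances straight from the private AMI;
#    each one comes up with the software already installed.
aws ec2 run-instances \
  --image-id ami-0privateimage123456 \
  --instance-type t2.micro \
  --count 3
```

These commands need AWS credentials and real resource IDs, so treat them as a reference, not something to paste verbatim.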
Enough of talk. Let's start with the hands-on, guys. I'm showing the steps from 1 to 11. Okay, 1 to 11, or 1 to 14. Watch what I'm doing. This script is the same script that we have used earlier; it's the bootstrap script. It will install a kind of customized software for us on the instance, which is launched using the public AMI. Okay, so we're going to use this script. I launch the instance first: I go to my dashboard and click on Launch instance. I can put any name on this instance or just leave it blank; if you want you can tag it or not, it's purely your choice. Now, under Application and OS Images, we're using the Amazon Linux 2023 AMI. This is a public image. If you look carefully on your screen, on your AWS Management Console, this is the same AMI available to you; if you match the AMI ID, and your region is North Virginia (because AMIs are region-specific), even the AMI ID will be the same. Which means this public image is common to all of us; there's nothing private about it. Fine. So use this public Amazon Linux AMI for now to deploy our first instance. You don't have to change the instance type. Under key pair, choose Proceed without a key pair; you don't need a key here because we don't have to log into this instance. Now, under security group, you have to choose Allow HTTP traffic from the internet, because you have to access the website. And under Advanced details, you have to copy and paste the script. This script will install the Apache PHP web application on the instance. It's our own customized software; AWS didn't give this script to us, we're running it on our own to install the Apache PHP website on the instance, with a customized index.html page configured. Fine. So we just copy the script and put it under User data.
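The bootstrap script itself isn't reproduced in the transcript, so here is a minimal stand-in of the same idea for Amazon Linux 2023 (an assumption; the course's real script will differ in detail): install Apache and PHP, start the web server, and drop a custom index.html.

```shell
#!/bin/bash
# User data runs as root at first boot. Amazon Linux 2023 uses dnf
# (on Amazon Linux 2 you would use yum instead).
dnf install -y httpd php

# Start Apache now and enable it on every boot.
systemctl enable --now httpd

# The customized homepage; this is also the page the health checks
# will ping later in this hands-on.
echo "<h1>Served from $(hostname -f)</h1>" > /var/www/html/index.html
```

Because this runs as root inside the instance at boot, it isn't something to run on your own machine.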
Fine. And that's it. We just go ahead and click on Launch instance. This is the same concept that we've discussed earlier: you deploy the instance, click on the instance ID, wait for the instance to launch, and then, once the instance gets into the running state, try to browse its public IPv4 address; you should get a result back as a web page. So once you perform the first 14 steps, or, I would say, once you deploy the instance, copy the public IPv4 address and paste it into the browser, and you will get this result. Now, after launching the instance, give it, say, 30 or 60 seconds; let's give it a one-minute buffer. Within about a minute the instance should respond with this page, and at that point you have deployed your customized business application on the instance. Fine. Now we'll be taking the private AMI from the instance: we deployed the instance using a public image and installed this Apache PHP application on it, and now we will take an image out of it, which we'll store for future reference. It's kind of templatizing things, right? Amazon offers you free public templates which are not customized to business requirements; onto that free template you add your own customization and make a private template out of it. That's what we're trying to do. So in the previous step we just launched the instance, we executed that step successfully, and we got this output; we installed the Apache application on the instance using a script. Okay, what's the next thing? We need to get the image from the instance now,
because the instance has just been launched. Okay, in this case we want to go ahead and take the image out of the instance. So we are going to follow step 15 only. Okay, step 15 only. So in step number 15, what are we going to do? Let me show you. You need to choose this instance and go to Actions. You have to select the instance first; check it. Once your instance is selected, you go to Actions, then Image and templates, and there's an option called Create image. You can see the definition of the AMI: it defines the program and settings that are applied when you launch your instance, and you can create an image from the configuration of an existing instance. Can you see that? We already configured Apache PHP on the existing instance, and now we're taking the image out of it, so that the image will have exactly the same packages included. So, for example, I name this image "HTTPD image", and I put a description, for example: "this image will be used with Auto Scaling to launch identical instances." Right? I put some description in there. Now it says No reboot; we leave that unchecked, because whenever we take an image from an instance, the instance is rebooted automatically. And when we take the image from an instance, it takes a backup of the image files in the form of a snapshot. You can see it says that during the image creation process, EC2 creates a snapshot of the above volumes. Basically, this is the data storage linked with the instance: we back up the image files, and we want to back them up in the form of a snapshot. Fine. So in this case we back up all the image files in a storage service called a snapshot, and the size of that backup is 8 GB. Okay, this will cost us 10 to 15 cents, because, as I already told you, for the image you have to pay 10 to 15 cents: when you back up the image files, they're backed up as a snapshot. S-N-A-P-S-H-O-T. A snapshot is a backup of your data, so the backup of the 8 GB of image files will cost me 10 to 15 cents. I click on Create image and my image starts getting created. On the left-hand side I go to AMIs, and there is my private image showing up. Can you see the visibility? It's private; the status is pending as of now. If I go to the AMI catalog and open My AMIs, this private image shows up there, and you will see that your AMI ID will be different. This is the private image; it is not public. It's private to you, not visible to other account owners. So you can customize images: install any type of application on your instances, customize those applications, services, or pages, then create an image from the instance, and as a result, as a byproduct, you get the private image. Okay. What's the next thing?
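As a sketch, the same check and cleanup can be done from the AWS CLI (the instance ID is a placeholder): list only the AMIs owned by your account, confirm the new image's state, and, once it is "available", terminate the builder instance.

```shell
# Private images only: AMIs owned by this account, with their state.
aws ec2 describe-images --owners self \
  --query 'Images[].{Id:ImageId,Name:Name,State:State}'

# The builder instance was only needed to bake the image; once the
# AMI state is "available", it can safely be terminated.
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```

Note the ordering: terminating the instance before the image reaches "available" can interrupt the image creation.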
The next thing is that we have to start with the target group and the application load balancer. Okay, so we'll start with the load balancer deployment. What we need to understand here is that, if you remember, when we started discussing the concept of the load balancer we used a very basic diagram, and basically what we discussed was that the load balancer forwards requests to a group of instances, or targets. Now, what are these targets? Targets can be anything. They can be EC2 instances. They can be containers, Docker containers for example. They can also be your on-premises servers, the servers running in your own data center; you can link those with the load balancer too. So targets can be instances, containers, on-premises servers, and there's also something called Lambda functions. Lambda is an advanced concept that I will discuss afterwards; it's part of serverless infrastructure deployment. Lambda functions are basically used to deploy serverless websites, serverless applications. So the load balancer will divert traffic to targets, and targets can be instances, containers, on-premises servers, or Lambda functions. Okay. In this case we are using instances as the targets. Now, the load balancer groups these targets together and labels them, and that is called a target group. A target group is a group of similar targets, similar instances in our case. And the main thing is that we configure the health check at the target group level, right? We discussed the health check: it is sent to every target individually, but it's configured at the target group level. So before we can start with the load balancer deployment, we have to create a target group first and configure the health check for its targets. How do you go about doing that? On the left-hand side you will see Load balancing and Auto Scaling. Fine. Under Load balancing we have Load balancers and Target groups. You have to go to Target groups first and click on Create target group. Okay. Now, the step numbers I'm going to put in front of you are 16 to 20. Okay, what's next? Well, once you start the process to create a target
group, it says: choose a target type. Is it instances, IP addresses, Lambda functions, or an Application Load Balancer? What are these? For example, we are using instances. IP addresses can belong to containers or on-premises servers. Lambda functions are used to deploy serverless applications. You can also link one load balancer behind another; we'll discuss that afterwards, it's an advanced concept. But right now we'll just go with Instances: in our case the targets are the instances we want to deploy. Okay. And we're going to be using Auto Scaling to manage and scale our capacity. What's capacity? The number of instances, or the number of targets. Now, once I choose this... okay, hang on one second, I'll
just walk through it one more time. Create target group: I go with Instances. I put the target group name as, for example, "HTTPD-target-group"; just a simple name. Okay. And I choose the protocol and port: HTTP, port 80, because my target group will be receiving this traffic from the load balancer. I don't have to change the VPC or the protocol version. Now, this is quite important: health checks. It says the associated load balancer periodically sends requests, per the settings below, to registered targets to test their status. What I told you earlier was that the load balancer sends an HTTP request, by default, to the targets at the back end, and the targets respond to the load balancer with a 200 OK status message. Now, the health check path means: do you want to ping a particular document, a configured page of your application? If you look carefully at the script, we have already configured the index.html page of our website. So we can use that as the path, and I'm saying I'll send my health check messages to the index.html page of my website; if my website is running properly, there should be a response in the form of a 200 OK message. Right? So basically I'm saying: please try to reach the home page, the index.html page, of my website. So the health check ping path we're going to put in is /index.html. Okay, this is case-sensitive, which means don't use an uppercase letter anywhere; it should be lowercase only. I've seen people sometimes put a capital I, so: lowercase only, a forward slash followed by index.html. Now, I can go to advanced health check settings. For now, let's focus on the interval: the approximate amount of time between health checks of an individual target. The default is 30 seconds, which means the load balancer will send the health check request to every instance every 30 seconds.
Okay. And the success code is 200, which means that if the instance is healthy, it responds with a 200 OK acknowledgement message; that's the success status code. Right? So the interval is 30 seconds; it's the time frame between consecutive health checks. Every 30 seconds, the load balancer performs a health check against every instance. You don't have to change these values; this is just for your information. So basically, you have to create a target group. What's a target group? A group of targets. For the target type I choose Instances. For the target group name, put any name you like. The protocol and port will be HTTP, port 80. And you have to configure /index.html as the health check path. Then click on Next, don't do anything there, and click on Create target group; your target group gets created. This target group will be attached to the
load balancer in the next step. Guys, make sure the configuration is exactly the same. What I mean is, in terms of the details I've included: the protocol and port should be HTTP, port 80. This is quite important, because this is the traffic our targets will receive, and the protocol is HTTP. And you go with the lowercase /index.html. Okay, so the settings should be the same as shown; don't change them, because these settings match our website configuration. I'll show you one more time. So if you, for example, click on Create target group, you configure everything on the first page, click on Next, and do nothing under Register targets; you don't have to configure anything there. Okay, then click on Create target group at the bottom right-hand corner straight away. That's what you have to do, because the targets will be registered afterwards; we will not add any targets or instances to this target group right now. So, what's a target group? It's a group of targets, and those targets will be added afterwards, because we'll be using Auto Scaling to deploy the targets, and they'll be added automatically based on the configuration. At the advanced stage, once we start configuring Auto Scaling, we'll merge the target group with the Auto Scaling group, so whatever Auto Scaling deploys will show up in this target group afterwards. There will be an overlap; you'll get to know this concept. Okay. Now guys, once you're
done with this, one thing you need to do: go to AMIs and please confirm that the status of your AMI is "available". Okay? That means the image was created successfully, and you can go ahead and terminate the instance that you deployed just for the sake of getting this private image. The instance you deployed just to launch the app on it and take the private image out of it is no longer needed. You can right-click and terminate the instance, or go to Instance state and terminate it. You don't need this instance anymore; its whole purpose was to be launched from a public image (we used the Amazon Linux 2023 AMI), have the Apache PHP app installed on it, and have the private image taken from it. That's it. From now on, this image will install the same operating system and the same application onto the next instances we deploy. So our mission with this instance is complete; you can go ahead and terminate it. We don't need the instance anymore. Go to the instance dashboard; on the left-hand side we have Instances. Choose the one you launched previously in the first 14 steps, go to Instance state, and click on Terminate. Right. So if I go to AMIs, my AMI is over
here: when I issued the Create image command, I got this image, the private image. Create template refers to a launch template, or the launch configuration. On the left-hand side you will see something called Launch templates. This is what we're going to be discussing; the simpler, older version of a launch template is the launch configuration. Basically, a launch template is just a document: when you create a launch template from an instance, all the instance settings are captured in a document, a template, and afterwards, when you have to deploy the same instances again and again with the same settings, same AMI, same instance type, you can use that template. Okay, so that is also one of the options out there. The difference: when we create an image, we capture the software side of it, the operating system plus apps and services bundled together in a package. A launch template captures all the details of the instance, what type of operating system you'd like to have, the security groups, the key pair, in a single document, and you can use that document again and again to deploy similar instances. One thing to understand: unlike a private image or private AMI, a template doesn't use EBS storage; templates are just a document. In the next step we'll be able to create that document; you can see that it says Create launch template. On the fifth step you'll be able to understand what a launch template is. So, just to reiterate what I said: it's a simple document which encapsulates all the configuration parameters of the instance, including the AMI, instance type, key pair, security group, everything.
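As an illustration of "a template is just a document": in the AWS CLI, a launch template really is a JSON document handed to one API call. The names and IDs here are placeholders, not values from the course.

```shell
# A launch template is a stored document of instance parameters:
# AMI, instance type, key pair, security groups, and so on.
aws ec2 create-launch-template \
  --launch-template-name my-web-template \
  --version-description "v1" \
  --launch-template-data '{
    "ImageId": "ami-0privateimage123456",
    "InstanceType": "t2.micro",
    "KeyName": "my-key-pair",
    "SecurityGroupIds": ["sg-0123456789abcdef0"]
  }'

# Versions ("revisions") can be added later without touching the original,
# which is the advantage templates have over launch configurations.
aws ec2 create-launch-template-version \
  --launch-template-name my-web-template \
  --version-description "v2: bigger instance" \
  --source-version 1 \
  --launch-template-data '{"InstanceType": "t3.small"}'
```

An Auto Scaling group can then reference the template by name and version.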
An AMI is nothing but a software configuration; that's it. An image is a software configuration, a packaged, bundled piece of software that you use to deploy the instance. A template consists of the parameters, listed in a document, that you refer to in order to deploy your instance. On the fifth step you'll be able to see exactly the difference between them. On the previous day we just went ahead and started; now, I think they have deprecated the launch configuration. Give me one second. Okay, I think launch configuration was showing up over here; it's gone now, they've removed it from the list. No worries: we're going to be using the launch template anyhow, which is the advanced version of it. So, the last component that we deployed was the target group. In the
target group, we configured the health check; that's the most important thing. Okay. And basically a target group is nothing but a group of targets, a group of instances, under the load balancer. Now this target group needs to be included in the load balancer. So the next step is that we have to deploy a load balancer, and the load balancer should be sending all its traffic to this target group. Fine. Now I'll take you to the next step, the next level, and we'll create, or deploy, a load balancer. If I go to Launch templates... yeah, I think launch configuration has been deprecated. There were a lot of updates going around saying the launch configuration would be removed, and I think they've removed it. That's okay; we'll be using a launch template. So I'll show the steps for the load balancer deployment. I'll go a bit slow on the load balancer configuration, because I have to explain a lot of theory: how you configure the load balancer, what the purpose of every step is, the types of load balancers. I'll be discussing those things with you as well. Right now I'll be going from step 21, and I'll take you up to step 23 only; that's it. Fine. So I'll be going
now. In terms of the load balancer, you have to understand that to get started with the load balancer deployment, the very first thing is to be on the EC2 dashboard. On the left-hand side, among the menu options, you have Load balancers and Target groups. So I go to Load balancers and click on Create. Now I have three main load balancers in front of me, and I can deploy any of the three. Whichever of the three I choose, I'll still be able to leverage the basic, common functionality of a load balancer. Right? But why do we have these three classes, three different types of load balancers, if we get the basic functionality out of all of them? Because at a certain level they are used in different situations, different scenarios. For example, the Application Load Balancer is mainly fit for HTTP and HTTPS websites. The Application Load Balancer is the most basic and most common form of load balancer you're going to use: in 95% of cases you'll be using this one, because it fits the requirements. It takes care of HTTP and HTTPS websites, and it can work with microservices, containers, and on-premises servers. It's quite an advanced load balancer, and it is suitable for processing two types of requests: HTTP and HTTPS. Fine. So in 95% of cases the Application Load Balancer is more than sufficient. Now, why is it called an Application Load Balancer? Because it works at the application layer, which includes the HTTP and HTTPS protocols.
Then we have the Network Load Balancer. The Network Load Balancer is, you could say, a more advanced kind of load balancer: it can take care of any type of traffic, HTTP, HTTPS, SMTP, DNS, IMAP, POP3. So let's suppose you want to handle SMTP traffic, which is for email servers; it works perfectly in those situations. But that's not the main advantage of the Network Load Balancer. The NLB is suitable in those cases when you have more than a million requests coming in each second. You can see the clear definition over here: operating at the connection level, Network Load Balancers are designed to handle millions of requests per second while maintaining ultra-low latencies. Right? So, for example, suppose you have a requirement like an OTT streaming platform, say Hotstar, and you expect that because of certain events, an IPL or FIFA World Cup match, millions of requests will be hitting your load balancer, and you also have to maintain ultra-low latency, which means you have to bring down the lag. You must have seen this: you have the best internet connectivity, but there's still buffering going on. One possibility is that so much traffic is hitting that OTT platform that it can't handle that amount of load; that buffering is the lag we experience. Network Load Balancers are designed to handle millions of requests per second while maintaining ultra-low latencies, which means users will not face any lag or latency at their end. So if you have that type of requirement, you'd use the Network Load Balancer.
Again, I'm saying that in 95% of cases the Application Load Balancer is suitable, but in a few situations, like the examples I gave you, Amazon Prime Day, e-commerce websites, OTT platforms, they use Network Load Balancers. Or, for example, food delivery apps: let's suppose Zomato has published an advertisement that they're giving a 25% discount on every order for 5 days. They know a lot of users will be coming in and accessing the Zomato platform to place orders and get the 25% discount. So e-commerce sites and OTT platforms generally use this type of load balancer when they expect millions of customers to come in and access their platform. The Gateway Load Balancer is not used that much; it's used for third-party virtual appliances. Let's suppose you're running VMware virtual machines on AWS; in that case you can make use of the Gateway Load Balancer. That's the rarest of cases. In 95% of cases you'll use the Application Load Balancer, and in the remaining 5%, the Network Load Balancer; that's it. In my experience, the Gateway Load Balancer was introduced less than 18 months ago, and in those 18 months I haven't seen more than one or two deployments of it. Why? Because it's used for third-party virtual appliances running on the AWS platform, like VMware virtual machines on AWS, and very few companies do that: generally, when companies start running virtual machines on the AWS cloud, they go for AWS's own EC2 instances instead of VMware virtual machines. There can be some exceptions: sometimes EC2 instances don't support certain applications, while VMware virtual machines do, and in those few exceptions you can make use of a Gateway Load Balancer. So it's used to deploy and manage a fleet of third-party virtual appliances, which is the rarest of rare cases. So of these, the two load balancers you'll realistically use are Application and Network, and in this hands-on we'll be using the Application Load Balancer. Why? Because it fits our requirement. First things first, we're going to deploy an HTTP site, and we don't expect millions of requests coming in; it's only a few clicks that we have to make. That's it. Even if you had a requirement of tens of thousands of requests coming in per second, the Application Load Balancer is sufficient to cater to it. Okay. So I'll go with the Application Load Balancer. Click on Create.
Now the very first thing I have to do is put a name in there. For example, name it http-load-balancer. You can put any simple name; it doesn't make any difference even if you just put your first name. It's a simple name that you can easily change afterwards.
Next is the Scheme: is it internet-facing or internal? In this case we will always use internet-facing, because if you're using the load balancer to deploy a website, it should be receiving traffic from the internet. Internal means the load balancer will not receive traffic from the internet; it only routes traffic for internal operations, perhaps receiving traffic from internal servers. There could be situations where you use a load balancer just to receive traffic from internal components, and that's fine, but in this case we always use internet-facing: for a website deployment the load balancer should be internet-facing so that it can receive requests for your website over the internet and forward those requests to the instances at the back end. For IP address type we'll go with IPv4, because the instances we are using right now have IPv4 addresses, which are 32 bits.
So that's what you have to do to start the load balancer deployment. In the first two or three steps, put a name, make sure the scheme is internet-facing (don't go with internal, because we're going to deploy a public website), and leave the IP address type as IPv4, which is the default. Please go ahead and do that, and let us know. Why are we using internet-facing? Because we want the load balancer to receive traffic from public users, internet users. If you don't use internet-facing and choose internal by mistake, what happens? The load balancer loses the capability to receive traffic over the internet; it will only receive traffic from internal servers. So in this case we should be using internet-facing. No answers? What's next?
The next thing is step number 24, the availability zone mappings. What are we trying to achieve here? Based on the best practices recommended by Amazon Web Services, you should choose at least two Availability Zones whose instances can receive traffic from the load balancer. Instances should be spread across at least two AZs to make sure you have a highly available application. Why? Because if one AZ goes down, the instances in that AZ go down with it, but you still have the second AZ fully functional with its instances running, which guarantees that your application has no downtime. So it is always recommended to choose at least two AZs in the load balancer's network mapping.
In my case, under Mappings I have a list of six AZs; if you're using a different region, you'll see a different list. Whatever AZs are showing up, check all the boxes. We are saying that whenever Auto Scaling launches instances, it can launch them in any of these AZs, and the load balancer can route traffic to instances in any of them. This makes my application highly available. At least two AZs must be selected; you can see the note over here: "Select at least two Availability Zones. The load balancer routes traffic to targets in these Availability Zones only." I will be choosing all the availability zones, so that all the instances inside them become eligible to receive traffic from the load balancer. Select all of them, please go ahead and do that, and let me know once it's done.
Yes, choose both AZs in that case. Whatever AZs you have, whether you're using California, Mumbai, or any other region, you have to select all of them. Ravi, whatever is showing up, check all the boxes. In Mumbai, for example, there are three, so you make three ticks. In my case I'm using North Virginia, which has six AZs, so I'm making six ticks. All the AZs should be checked, because I don't want to compromise on making my application highly available. Imagine I have 25 or 100 instances: those 100 instances will be scattered across the six availability zones. Auto Scaling tries to maintain an equal number of instances between them. Of course, 100 divided by 6 isn't a whole number, but it balances things out, deploying roughly the same number of instances in each AZ. So the network mapping guarantees that we can deploy instances in all the AZs within the region. And understand that right now everything is happening in the same region: the deployments we're doing, the AMI, the load balancer, the Auto Scaling configuration, everything is done inside one region.
What's the next thing? Step number 25. It is about the security of the load balancer: assigning a security group to it so that it can receive traffic from authorized sources and forward it to authorized, known targets. We won't go too deep into the security group configuration. We could create a tight trust relationship between the load balancer and the instances, but right now we'll keep it simple and create a basic security group for the load balancer. I thought this step could be skipped, since by default the load balancer receives HTTP traffic from anywhere, which is what we want, but hang on one second: some people will start facing issues if we don't create a new security group, and then we'd have to do troubleshooting afterwards. To save ourselves from troubleshooting, we'll create a new security group from the beginning and make sure the right protocol is included in the inbound rules. So let me show you steps 25 to 29.
See what I'm doing now. I want an inbound rule, because I want the load balancer to receive HTTP traffic from the users. What are we going to launch? An HTTP website. So I want the load balancer to receive exactly that traffic, and I have to open that port at the security group level. What is a security group? A security group is nothing but a firewall which restricts the traffic that comes into or goes out of a specific component. The one that's there now, the one that says "default", I don't want, because I don't have the surety that it allows HTTP inbound from all sources. To be on the safer side, and to avoid troubleshooting later, we create our own.
There's a line at the top which says a security group is a set of firewall rules that control the traffic to your load balancer; select an existing security group, or create a new one. Once you click on "create new security group", it opens a new tab for you. Don't close the previous tab; people get confused when switching back and forth between these two tabs. Click on "create new security group"; it opens a new tab, a new page, where you can start creating the security group. Just type "done" once you're on this page.
Great. Now it asks for a security group name. You can put anything here; for example, I'll name it app-load-balancer-sg, and I'll copy and paste the same into the description. So I've put a name and a description. That's it. Now, under Inbound, I add a rule allowing HTTP traffic from Anywhere-IPv4. That's it. Perform these three things: name the security group, copy and paste the same name into the description (don't change it), and include an inbound rule allowing HTTP from Anywhere-IPv4. Please do this. You have to choose Anywhere-IPv4. Once that's done, simply scroll down, don't change the outbound rules, and click on "Create security group" at the bottom right-hand corner. Let me know once you've done this.
Now you have to go back to the tab on the left-hand side, the load balancer page. There are two browser tabs open; go to the one on the left. Cancel the default security group: there's a small cross on it, click the cross and it goes away. Then there's a refresh button; hit refresh, and you'll be able to search for the security group you just created. Basically, hit refresh, remove the default one, and apply the security group you just created. This is where people get confused: they're on the new tab wondering how to get back to the load balancer, but there's another tab open on the left-hand side. Go over there, hit refresh, remove the default one, and apply the one you just created. And yes, that's fine: you can apply your security group first and then cancel the default one. If the default security group keeps coming up, that's okay too; look for the application security group you created, select it, you'll have two boxes, and cancel the default box. That's it.
So what we have done is ensure that the load balancer allows only one type of request in: HTTP requests, from all sources. We haven't included any other protocol, for example HTTPS, which is only for a secure website and would also require us to obtain and upload an SSL/TLS certificate, which we haven't done. So no other protocol is allowed, whether it's HTTPS, MySQL, or IMAP; only HTTP requests come in. That's it. Done.
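As a side note, the same security group setup can be sketched with the AWS CLI. This is just an illustration, not part of the hands-on steps, and the VPC and group IDs below are hypothetical placeholders:

```shell
# Create a security group for the load balancer (the VPC ID is a placeholder).
aws ec2 create-security-group \
  --group-name app-load-balancer-sg \
  --description "app-load-balancer-sg" \
  --vpc-id vpc-0123456789abcdef0

# Allow inbound HTTP (port 80) from Anywhere-IPv4. No HTTPS, MySQL, or IMAP
# rules are added, matching what we configured in the console.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0ccc3333 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
```

The outbound rules are left at their default (allow all), just as in the console.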
Okay, what's next? The next thing is steps 31 to 33. When you scroll the page down, it says "Listeners and routing". A listener is a process that listens for requests. I want the load balancer to listen for HTTP traffic on port 80 and forward the requests to a target group. Which target group? The one we created in the previous step. If you go to the drop-down list, it shows the target group; this is the same target group we created earlier. So I'm telling the load balancer: please listen for HTTP port 80 traffic and forward it to this target group, which will eventually consist of my targets, the instances.
Once you've done that, simply click on "Create load balancer", and your load balancer gets created. Let me know. Based on the document, you have to go with steps 31 to 33. And if you click on "View load balancer", you'll see your load balancer is now in the Provisioning state.
Not able to see the target group? There's a refresh button on the right-hand side of it; can you hit refresh and see if it shows up? Still not showing up? That's a weird case. First things first, make sure you are in the same region where your target group was created; your region should be the same. If you have already created the target group and it's still not showing up, the only thing left is to refresh the complete page. You're in the middle of configuring the load balancer, so once you refresh the complete page you will lose the configuration you have done so far and will have to start from the beginning. But before you start from the beginning, make sure the target group shows up. Scroll down and look for the target group. If it's still not showing up, that means it's not there.
"Okay, it's still showing active. Should I create a new one?" There's a troubleshooting step I asked you to do; just give me one response: once you completely refresh the page and scroll down, can you see the target group now, yes or no?
"Can you show the target group step again?" If you're talking about the load balancer one, I'll come to you. If the target group isn't showing up, either it was not created, or you have switched to a different region by mistake. In that case, create a new target group.
So you're asking me to show the target group step again. Basically, this is the listeners and routing step. Under "Listeners and routing", keep the protocol and port as HTTP : 80, just as they are. Then it says "Default action: Forward to, select a target group". Go to the drop-down menu, the target group shows up, and select it. So in the listener routing configuration, make sure the HTTP protocol is selected and that it forwards requests to the target group. Select the target group in the list, and once that's done, click straight away on "Create load balancer". Okay.
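For those who prefer the command line, the console steps we just finished correspond roughly to the calls below. Every ID and ARN here is a placeholder, not a real value from this session:

```shell
# Create an internet-facing Application Load Balancer with IPv4 addressing,
# mapped to subnets in at least two Availability Zones (placeholder IDs).
aws elbv2 create-load-balancer \
  --name http-load-balancer \
  --type application \
  --scheme internet-facing \
  --ip-address-type ipv4 \
  --subnets subnet-0aaa1111 subnet-0bbb2222 \
  --security-groups sg-0ccc3333

# Add an HTTP:80 listener that forwards requests to the target group
# created earlier (ARNs shortened to placeholders).
aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol HTTP \
  --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# Check the state; it shows "provisioning" until it becomes "active".
aws elbv2 describe-load-balancers \
  --names http-load-balancer \
  --query 'LoadBalancers[0].State.Code'
```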
We're all set as of now; everything is going perfectly fine based on what we planned. The next thing we have to focus on is Auto Scaling. Hang on one second.
Based on the documentation, we would create a launch configuration, which is step 34. But launch configurations have been deprecated and discontinued from the dashboard. I didn't get the chance to change the documentation, because I only just saw that the option has been removed, but that's okay: we'll create a launch template instead of a launch configuration. A launch template is basically the advanced version of it; it gives you some more advanced options.
Now, what is this launch template we're talking about? When I discussed Auto Scaling with you, I covered a few of its important ingredients, and the main one was this: what happens when Auto Scaling has to initiate a scale-out event? What exactly is meant by a scale-out event? Scale out means launching instances; scale in means terminating them. So when Auto Scaling initiates a scale-out event, it has the task of launching instances. The problem is this: how does Auto Scaling know what the AMI, the instance type, the key pair, the security group, and the storage attached to the instance should be? We put all these parameters in a template, a document. That document is called a launch configuration, or a launch template. The launch configuration is deprecated now; it's no longer supported, so you won't see it showing up, because they have removed it from the dashboard. A bit unfortunate for us, because it would have been easier to use that option, but that's okay: a launch template is easy too. The launch template will consist of all the parameters Auto Scaling needs to deploy instances with exactly the same configuration.
And what's an Auto Scaling group? An Auto Scaling group is a group of identical instances. I gave you the definition: an Auto Scaling group is a logical collection of identical targets. What do we mean by identical? They have the same AMI, same instance type, same key pair, same security group. To make sure they have the same identity, the same software and hardware configuration, the same security group applied to them, you put all of those things in the template. Auto Scaling will use this template and deploy identical instances.
Okay. When you go back to the EC2 dashboard, on the left-hand side you'll see a bunch of options. Under Instances there are three entries: Instances, Instance Types, and a third one called Launch Templates. Click on Launch Templates. Go to Launch Templates and just type "done" once you're on this page.
On this page you may find an option to switch to launch configurations, but in my case it's not showing up; it varies from region to region, and you may be using a region other than North Virginia. If I click on "Create launch template", I'm not getting that option as of now, but if you use a different region, sometimes the options differ.
Anyhow, we're on the Launch Templates page. I click on "Create launch template". Fine. The very first thing is to put a template name; I'll name it, for example, http-launch-template. Then you can put some description; you can just copy and paste the same name into the description, that's okay.
I'll go a little bit slower here because this part is not in the document. Put a template name, put something in the description or copy and paste the same thing, and let me know when you're done; just type "done". Assign a name and a description. It's not in the document because the document shows the steps for configuring a launch configuration; that's why I'm going a bit slower, so that no one lags behind.
Done. Once you've put a name and description, there's something called Auto Scaling guidance. It says: select this if you intend to use this template with EC2 Auto Scaling. You have to check this option, "Provide guidance to help me set up a template that I can use with EC2 Auto Scaling". Check it right now. Please check this option. Basically, it will show us the important options that Auto Scaling needs in order to initiate a scale-out event, that is, to increase the number of instances, to launch instances for me.
Right. Once you're able to perform these things, go down to "Application and OS Images". Under Application and OS Images, choose "Owned by me", and in the drop-down menu choose the AMI you created earlier, this one. Go ahead and do that. You're saying: this image is owned by me; I own this image.
So when you create the template, you also have to include the image. There was a question about the difference between an image and a template: the image is a subset of the template. The template is the larger overview of everything to be deployed, and the image is one part of it. That's why we created the image: it has to be included in your document, the launch template (formerly the launch configuration), so that new instances can pick up this image and be deployed with the same Apache/PHP application on them. Done. Are we done with this?
What's next? Next is the instance type. Under instance type, select "All generations" and look for t2.micro. I think t3.micro would also work, but that's okay; look for t2.micro and select it. Just in case t2.micro doesn't show up for you, choose t3.micro instead. Please go ahead and do that. Can you see what we're trying to do? We're building a checklist, the list of ingredients, the list of parameters that Auto Scaling will refer to, and based on it Auto Scaling will deploy instances with exactly the same configuration.
This is done. What's next? Key pair (login). Under key pair, choose "Don't include in launch template". We don't need a key pair here, because we don't need to SSH into the instances. Choose "Don't include in launch template" and let me know once this is done.
Okay, what's the next thing? Go straight to the security groups section. We want to allow HTTP traffic from anywhere, so go to "Select existing security group" and choose the same security group you applied to the load balancer, the same one, because the instances also have to receive HTTP traffic. Please go ahead and do that, and type "done" when finished. If you wish, you can create a new one, but just to save time, since we only need the inbound HTTP rule from all sources, the same inbound rule we applied to the load balancer, we'll reuse the same security group. Done.
Okay, what's next? Scroll down. Don't configure the storage volumes, don't configure the tags (they're purely optional), and don't configure the advanced details. We don't have to configure any of the advanced features as of now; nothing is required there.
So, what have we done? We put a name; the main thing is that we chose the image; we chose an instance type; for the key pair, we went without one; and we applied the security group that allows HTTP inbound from all sources. We made just these selections. We could have gone into more detail, but right now these selections are more than sufficient for us; the rest of the parameters will take their default values. We're not configuring the storage, resource tags, or advanced details, so they'll pick up the defaults. Click on "Create launch template", your launch template gets created, and then go to "View launch templates" to see your template. Let me know once you're on the launch templates page; just type "done" once you're there.
So right now you have put together a list of parameters for the instances to be deployed. Instances with exactly the same properties, same AMI, same instance type, same key pair setting, same security group, will be deployed automatically.
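For reference, a launch template with the same parameters we just chose in the console could be sketched from the CLI like this; the AMI and security group IDs are placeholders standing in for the ones you created:

```shell
# Launch template mirroring the console choices: our own AMI, t2.micro,
# no key pair, and the same security group as the load balancer.
aws ec2 create-launch-template \
  --launch-template-name http-launch-template \
  --version-description "Apache/PHP web tier" \
  --launch-template-data '{
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "t2.micro",
    "SecurityGroupIds": ["sg-0ccc3333"]
  }'
```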
