Overview of Cloud Computing
- Definition of cloud computing as the delivery of computing services over the internet.
- Importance of cloud computing in modern technology and its growing demand in the job market.
Real-World Applications of Cloud Computing
- Examples of cloud computing in daily life: Google Drive, Netflix, etc.
- Benefits of using cloud services for businesses, including cost-effectiveness and scalability.
Case Study: Ravi's E-commerce Project
- Ravi's challenges with local infrastructure and how cloud computing provides solutions.
- Comparison of on-premise vs. cloud computing for storage, compute power, networking, testing, DevOps, and scalability.
Cloud Deployment Models
- Public Cloud: Shared infrastructure for general use.
- Private Cloud: Exclusive infrastructure for a single organization.
- Hybrid Cloud: Combination of public and private clouds for flexibility.
AWS Services
- Introduction to AWS and its key services like EC2 and S3.
- Explanation of EC2 instances and their configurations.
- Overview of S3 buckets and their role in data storage.
Security and Access Management
- Importance of IAM (Identity and Access Management) in cloud security.
- Explanation of roles, policies, and permissions in AWS.
Hands-On Demonstration
- Step-by-step guide on creating EC2 instances and S3 buckets.
- Instructions on setting up security groups and IAM roles.
- Demonstration of uploading files to S3 and managing access permissions.
Conclusion
- Recap of the benefits of cloud computing and its relevance in today's tech landscape.
- Encouragement to explore cloud computing as a career path with available resources and courses.
Ever had that moment when your phone pops up, storage almost full, and you're like, "Oh, not again." So, what do you
do? You start uploading photos to Google Drive or Google Photos to clear up space, right? Or let's say you're
watching a movie on Netflix or Prime Video. No downloads, no waiting time, and everything just streams smoothly
even in HD. Have you ever wondered how all that works so seamlessly? Well, the answer behind all of this is something called cloud computing. Cloud computing is the delivery of computing services like storage, servers and
software over the internet instead of your local device. You're not really storing the movies or those photos on
your phone. They are actually sitting on powerful computers somewhere far away and you access them through the
internet. It's like renting space on someone else's computer to store your data or run your app. And guess what? In 2025, there is already huge demand for this technology. Almost 90% of companies in the world consider themselves cloud-first, and thus there are lakhs of open cloud computing jobs available, but the supply of skilled cloud engineers is still very limited. This course is designed to give you the skills and framework to crack cloud computing jobs and achieve your career objectives. We will take you from the very basics of cloud computing to working hands-on with AWS. So if you're a student, job seeker or IT professional interested in cloud, don't skip. Watch this video till the end and take your first step into the future of tech, right here on Intellipaat's YouTube channel, for absolutely free. Let's start by understanding cloud computing in simple terms. Meet Ravi. He's the founder of a small software company based in
Bangalore. His team builds web app for clients. They have just landed a big project, an e-commerce site for a global
site. This sounds very exciting, right? But also scary because Ravi's local infrastructure is very limited. He
doesn't have big servers, high-end GPUs or a fancy data center. And building one would take lacks of rupees, months of
setup and constant maintenance. And still there would be risk against cyber attacks, natural disaster which can whip
around losses for Ravi's company. So what can Ravi do at this moment? He came to know about cloud computing. So what
is this cloud computing? Is it like a data center sitting up in the clouds? No. In simple words, cloud computing means you don't need to own the physical hardware. You rent things like storage, servers and networking over the internet, and you only pay for what you use. And since all these resources sit virtually on the internet and are accessed over the internet, this way of accessing resources is termed cloud computing. Simply put: computing that happens over the internet. Let's break down how cloud computing can help Ravi with his e-commerce project. We will compare what he would need to do for an on-premise setup versus what he can do on the cloud in any software development project.
Even before Ravi's team writes a single line of code for the app, they need to set up a few essentials. For example: a server to run the back end, storage for user data, images and transactions, a secure way to connect components, a place to test features safely, tools for automatic deployment, daily backups for recovery, and the ability to handle sudden spikes in users. So, let's take these attributes one by one and see how cloud computing
makes Ravi's job 10 times easier than going with physical on-premise infrastructure. First up is storage. This is where all the app data lives. Ravi's app lets users upload profile pictures, store order history, save payment records and maintain session data. If all this has to be stored on premise, he would need to buy and configure physical servers, attach hard drives, set up cooling systems, and run backup routines, all while praying the business doesn't have a power outage. With cloud computing, Ravi just configures cloud storage such as AWS S3, and that's it. It's encrypted, backed up, and instantly available.
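To make this concrete, here is a minimal boto3 sketch of what Ravi's storage setup could look like. The bucket name, region and file path are hypothetical, and it assumes AWS credentials are already configured on the machine.

```python
import boto3

# Create an S3 bucket and upload a user's profile picture.
s3 = boto3.client("s3", region_name="ap-south-1")

s3.create_bucket(
    Bucket="ravi-ecommerce-assets",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)

# Upload a local file; S3 takes care of durability and replication.
s3.upload_file("profile.jpg", "ravi-ecommerce-assets", "users/101/profile.jpg")
```

That's all the "configuration" the storage step needs; no hard drives, cooling or backup routines.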
Next up is compute power, the brain behind the app. Whenever a user logs in, updates their cart or streams a product video, Ravi's backend has to process those actions fast. If it has to be on-premise, Ravi would have to buy physical servers, install operating systems, keep them on 24/7 and upgrade them whenever traffic increases, meaning he would have to buy more hardware. On the contrary, with cloud computing, Ravi spins up virtual servers on demand using AWS EC2. He pays only for what he uses, and when the traffic dips, he shuts them down with zero waste.
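As a rough sketch of what "on demand" means in practice, here is how spinning up and later stopping a virtual server could look in boto3. The AMI ID is a placeholder; real IDs vary by region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Launch one small virtual server on demand.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Ubuntu AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# When traffic dips, stop the server and stop paying for compute.
ec2.stop_instances(InstanceIds=[instance_id])
```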
Third on the list is networking. Ravi's app's front end needs to securely talk to the back end using APIs. If Ravi were to implement network security on an on-premise setup, he would have to manually set up routers, install firewalls, open only the required ports, and hire someone to monitor threats. On the cloud, he just has to configure a VPC, a secure, isolated network where only specific services can communicate with each other. The traffic is encrypted and everything is logged, so Ravi doesn't have to worry about the threats.
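A minimal sketch of that idea in boto3, assuming all we want is an isolated network plus one firewall rule that opens HTTPS; real setups would add subnets, route tables and more.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Create an isolated network for the app.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# A security group acts as the firewall: allow HTTPS traffic only.
sg = ec2.create_security_group(
    GroupName="ravi-app-sg",  # hypothetical name
    Description="Allow HTTPS traffic only",
    VpcId=vpc_id,
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```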
Next up is testing. Let's say Ravi has built the app and it's live, but his client wants him to add a new refer-a-friend feature. To do this on premise, Ravi would need to replicate the entire production environment. That means more servers, cloning the database, increasing the cost, plus the risk of accidental breakages. But on the cloud, Ravi can just create a staging server with a click. The new feature is tested without touching the live app. Once it is approved, it goes live instantly with the new update. No drama at all.
Fifth point is DevOps and automation. Let's say Ravi's development team commits code every single day. For the on-premise setup, every change would mean a specialized DevOps engineer sitting there doing manual builds, tests, bash scripting and deployments, which is slow and, again, error-prone. But relying on the cloud, Ravi simply integrates GitHub with AWS CodePipeline. Now every code push is auto-tested, auto-deployed and monitored in real time. His team focuses on coding, not babysitting the servers.
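Building a full pipeline is beyond this overview, but as a hedged sketch, this is how code could trigger and inspect a pipeline run with boto3, assuming a pipeline named ravi-app-pipeline already exists (normally a GitHub push triggers it automatically).

```python
import boto3

cp = boto3.client("codepipeline", region_name="ap-south-1")

# Manually kick off a run of an existing pipeline.
cp.start_pipeline_execution(name="ravi-app-pipeline")  # hypothetical name

# Check the latest status of each stage (Source, Build, Deploy, ...).
state = cp.get_pipeline_state(name="ravi-app-pipeline")
for stage in state["stageStates"]:
    print(stage["stageName"], stage.get("latestExecution", {}).get("status"))
```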
Sixth point is scalability. One day the e-commerce app goes viral. Suddenly 5,000 users become one lakh. If this happens on premise, there is a real possibility that Ravi's fixed servers might choke, leading to an app breakdown and client disappointment. But on the cloud, Ravi can just enable autoscaling. When the traffic rises, more servers spin up automatically. When it falls, they shut down to save cost. A smooth experience for customers, no panic for Ravi, and a happy client.
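As a rough boto3 sketch of what "enable autoscaling" could mean here, assuming a launch template already exists; the names and limits are illustrative only.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="ap-south-1")

# Keep between 2 and 50 servers across two availability zones.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ravi-app-asg",          # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "ravi-app-template"},
    MinSize=2,
    MaxSize=50,
    AvailabilityZones=["ap-south-1a", "ap-south-1b"],
)

# Add servers when average CPU goes above 50%, remove them when it falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="ravi-app-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```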
So you see, cloud computing at every step proved to be better than on-premise. That's why almost 96% of companies today call themselves cloud-first and are using cloud services one way or the other. Building an app like an e-commerce site without cloud computing today is like trying to run a race barefoot on broken glass. Cloud computing is cost effective, since it uses a pay-as-you-go model and provides services with just a few clicks, whenever you need them. It's fast, it's flexible, and it is the backbone of modern software development. Moving on, let me walk you
through the different deployment models that cloud computing providers like AWS, Azure or GCP offer. First up is the public cloud. You can think of it like taking a bus: you don't own it, but you share it with others. Public clouds like AWS, Azure and GCP provide infrastructure over the internet for anyone to use. Most startups and developers prefer this; it's affordable and easy to get started with. Then we have the private cloud. Now think of a private car, built and used by one company only. Private clouds are used by big enterprises needing full control, custom compliance or specific security setups. More expensive, but fully owned and operated by one company. Finally, we have the hybrid cloud, a mix of both public and private cloud. You use the public cloud for general workloads and the private cloud for sensitive ones. This is flexible, cost effective and popular among growing companies. So there you have it. You just understood the different types of deployment models and how cloud computing is better than an on-premise setup. You discovered how it's cost effective and easy to access and manage. You won't wonder now why Ravi chose cloud computing. By the way, if you're interested in pursuing cloud computing or DevOps as a career option, we can help you learn more with our strong curriculum-driven course. You can check out the link in the description to know more. So what is cloud computing? Is it
something like computing over the clouds? Well, not exactly. Cloud computing simply means that you're using computing resources, be it storage, servers or applications, that are already present over the internet, but you are accessing them all without owning physical hardware. You don't need separate hardware to access Netflix or YouTube; you can access them from anywhere and from any device. You just need your login ID and password. So that is cloud in action. Now let's understand it with a simple real-life
example. Suppose you have thousands of photos and videos to store somewhere. You can either purchase a hard drive and store them there, but what if the hard drive crashes or gets lost? And you'd have to carry the hard drive around with you to access those photos and videos. Instead, you can just put all those photos and videos into Google Drive, which might already be present on your phone. It makes everything so simple: you can access it from anywhere, and you have no worries about it getting lost or needing some other device for it. So that is cloud in action. Cloud computing simply means that you're accessing all those computing resources, be it storage, servers, databases or apps, without owning physical hardware, which means as long as you have the internet, you can access all those things on your laptop or whatever device you are using. Let's take a quick review of where you are already using the cloud in your day-to-day life without even
realizing it. You might be using Google Drive to store your photos online, or Gmail to send and receive emails; all of that runs on hosted Google servers. You might be using Netflix or YouTube to stream whatever videos or films you want; they are all already present on servers, and you just access them from your devices. You might be using Instagram to share photos or scroll reels; all of that is done using cloud computing technology. Similarly, Zoom video calls are handled via a cloud service. Now that we have understood what the cloud actually is, with real-life examples too, let's
understand why we actually need the cloud. Suppose you are going to a new city and you want to roam around and travel. You have two options: you can rent a car or you can own a car. What are the disadvantages of owning a car? If you buy a car, you have to pay a lot of money upfront, of course, and then you have to pay for insurance, maintenance, fuel and repairs. That is the second disadvantage. And the very important thing: you might not even use it on an everyday basis, but you are still paying for it when it is not in use. From this real-life example, we can see that if you are not using it regularly, you can just use an Uber instead; you can rent those services. With this, you have no maintenance and no upfront cost, you only pay when you use it, and you can upgrade to a better version whenever you want. Those are the advantages that we have in the cloud as well. You don't need to buy expensive servers and manage them. You just rent computing resources whenever you need them and stop paying when you don't. So it's just like Uber: flexible, cost efficient and stress-free. Now that we know how much easier our life is made by cloud technology, I would like to tell you that back in the day we didn't have such technology. We used to
sit in a room filled with physical servers, where we had to manage each physical server even when it was not in use. That came with a few disadvantages. But we didn't jump straight from traditional IT services to cloud computing. There was one more important step in between, called virtualization, which is a very important topic in our cloud computing syllabus. So let's understand each of them step by step, and how we arrived at cloud computing, which is today's world scenario. To explain what traditional IT was, or what the physical servers back in the day looked like, imagine yourself sitting in a room filled with physical servers, having to manage each one of them individually. What were the problems back then? First, you had to pay a high upfront cost, because there are so many physical servers in your room, so of course you pay a hefty amount for them. The second disadvantage is wasted resources: when you are running so much hardware and some of it is not even in use, that is complete wastage of resources. Then we have manual maintenance: you have to manage each server and pay for its maintenance, so again you have a lot of charges and hefty amounts to pay. And it was hard to scale: if you wanted to run those servers from another room, you had to carry each one of them individually to another room or another place. It was very hard to scale such technology. To eradicate these problems of traditional IT, we came up with something called virtualization.
Now let's understand what virtualization is. You can think of it as smart hardware use. Why? Because back in the day you had physical servers, each running its own OS, and you needed one individual server per operating system. With virtualization, you can split one physical machine into many virtual ones. Let's understand what that means using a simple diagram. Looking at this diagram, you can see what is actually happening inside virtualization. First of all, we have the physical hardware: your CPU, memory, storage and network. Just above that there is a virtualization layer, in which we have the hypervisor. This is the component that actually splits the physical hardware into virtual ones, where you can operate each virtual system individually, just as shown here. You can understand it with a simple example. If you are working in a tech domain, you know you can have one laptop running an operating system like Windows, but also run another operating system, Linux, on it. Why? Because that is where virtualization comes in: you can have two different operating systems on one device and access both of them. That is a real-life example of virtualization. So virtualization is the process of creating a virtual version of a server, desktop, storage device or even an entire operating system. It allows one physical system to run multiple virtual machines, each behaving as if it were a separate system. When you are using both Windows and Linux on a single device, what's happening? You can use both of them individually, as they have different software dependencies, but the hardware is able to manage it, all because virtualization came into existence. Now, the benefits: of course, there was better resource utilization, because we need just one physical machine to run many operating systems. It was easily manageable, as simple as that, and it was a faster setup than physical servers; instead of a room filled with different physical servers, each with its own maintenance and operating system, you can have just one physical server, and maintenance is quite low. But even with virtualization, there was still one more problem: we still needed a physical machine of our own to run those virtual ones. So what came into the technology was something called cloud computing, which we
are working with today. Cloud computing simply means anywhere, anytime, and pay as you go. Why is that? Suppose you're using AWS, Azure or GCP; some of you might be well aware of them. When you're using that technology, you are just renting servers, renting computing power, over the internet for your own purpose. In AWS, when you're creating instances, which we will be doing later in this video, you just make an instance and pay charges only for as much as you use it. That is the cloud: pay as you go, so whatever budget you have, you can use computing resources accordingly, anywhere and anytime. You just need an ID and password, and any device will work, any device is compatible. Now that we have looked at the bigger picture, let's compare each of the stages step by step: how we went from owning all the physical servers, to sharing virtual servers when we added virtualization, and then to the complete rental system of the cloud. Back in the day, you had to buy physical servers for traditional IT, which we have discussed so far. It was expensive and hard to scale at the same time. To eradicate this problem, we came up with one device split into multiple virtual ones, which was virtualization: you run multiple virtual machines on a single piece of hardware. And what was the limitation? It was quite flexible for its time, but the one problem that remained was that you still needed your own hardware for it; one physical machine was still needed. So we came up with the simplest thing of all: you don't even need physical hardware. Now you can just rent things and access them as much as you want. So what is the cloud? You rent services from the cloud providers and use them as much as you want, according to your budget. It was the most efficient option, and the modern approach is cloud-first. Now that we have understood what cloud computing is, why it is so important, and how we moved from owning the complete infrastructure, to sharing through virtualization, to renting it using the cloud, let's dig into the core characteristics of the cloud and what makes it so powerful and popular. You can use a simple acronym to remember all the characteristics and resources that we have in cloud computing. First of all, we have O, which
stands for on-demand self-service. What does on-demand service mean? You can picture it with something like Zomato: whenever you feel like ordering something, you just launch the app from your phone. It's as simple as that. Just as you order food from your Zomato app, you can launch a server whenever you want. That is the on-demand self-service criterion. Next we have something called broad network access. What does broad network access actually mean? When you are using Instagram, you don't need to carry your own device. You can access it from any device you want; you just need your mail ID and password. That's what I've been saying from the beginning: broad network access means you can upload files, access files, or scroll reels, whatever you want, all from any device. You just need your identity, your username and password. The next thing we have is called resource pooling. What does it mean? The cloud provider shares storage, memory and CPU among many users. You can understand it like electricity or broadband being shared among many people in a colony or in a particular home. That is what resource pooling means. Next we have elasticity. Elasticity simply means that it can expand or shrink on its own, like a balloon. Suppose many of us are using a single server and a lot of traffic builds up inside it. What the cloud does is automatically add the necessary servers, and after the traffic subsides it can automatically remove the servers that are no longer in use. This is a very important and very impressive quality of the cloud. Next we have pay as you go: suppose you are using just 10 GB of storage, then you're paying for that 10 GB only. It's not like you're paying for 100 GB while using only 10 GB. That's what pay as you go means: you only pay for the resources you want, for as long as you want.
Now that we have understood this, let's map each one to a simple real-life analogy of how we use it in day-to-day life. First, on-demand self-service: as I just said, it's like ordering from Zomato or Swiggy; whenever you have the demand, the service is in front of you. That is the on-demand service characteristic, and the real-life analogy is ordering food from an app. Next, broad network access: just like Instagram, you have Netflix; everyone streams on the Netflix platform, and you can access it from your TV or your device, wherever you are comfortable. That is what broad network access means. Elasticity: just think of a balloon shrinking or expanding on its own as needed. That's what elasticity means. Resource pooling is like sharing electricity across a colony or a home, or like sharing hotel rooms: lots of people using the same resources. It's just like carpooling, with many of you going in a similar direction. And the next thing, called measured service, is simply pay as you go, like paying your electricity or water bill for exactly as much as you have used. When we create an EC2 instance later in the video, we will be paying only for the duration of time we use it. So that is another example of this characteristic. Now that we have understood the core characteristics and all the benefits that
come with cloud computing, let's understand the cloud service models, then the cloud deployment models, and the key service providers in cloud computing. What are the cloud service models? They're simply categorized by what they offer to the customers or the people using them. When we talk about the cloud deployment models, they are categorized by how the cloud is deployed and made available over the internet. And the service providers are the famous ones, AWS, Azure and GCP, which are quite popular among all of us. So let's start with the next section, cloud service models. As I said, cloud service models are categorized by what they offer to you, to me, and to whoever is using the technology. There are three main types of cloud service models: IaaS, PaaS and SaaS. What do they mean? IaaS is Infrastructure as a Service, PaaS is Platform as a Service, and SaaS is Software as a Service. Let's understand all these service models with the help of a simple example.
There is a popular way to understand all three terms called the pizza analogy. First of all, we have IaaS, Infrastructure as a Service. Suppose you want to have a pizza and customize it yourself. With IaaS, you need an entire infrastructure, and you rent that entire infrastructure. What does that mean? You are renting an entire kitchen: all the utensils, everything in the pantry, everything you will use. You're renting the whole infrastructure just to make a pizza. With PaaS, you're saying, I don't need an entire infrastructure, I just need a platform. Think of popular pizza outlets like Domino's or Pizza Hut. You go inside and say, I want to make a pizza myself; I just need the dough and the toppings I want. The platform is completely ready for you, and you just use it to make the pizza, or whatever service you need in the computing world. Now, SaaS is very popular and every one of us is already using it. What does SaaS mean? The entire software is ready for you, just like a pizza that is already made and you just order it with a call or through Zomato or Swiggy. So what do we understand by Software as a Service? The app, like the pizza, is entirely ready. So those were the three cloud service models. Let's start understanding each of them step by
step: what you get in a particular service, what you handle, and the real-life examples we have. Talking about IaaS, Infrastructure as a Service: it's the basic building block, you could say. You have the servers, storage and networking, all you want, right in your hands. What you get: virtual machines like EC2 or Azure VM, which we'll discuss later in the video; storage, with popular examples being AWS S3 and Azure Blob; and networking. These are the things you get when you're renting an entire infrastructure. Now, what do you handle? Since you have the entire infrastructure, you have full ownership, but you have a lot of work to do as well. First, you handle the entire OS installation in the setup, then the app setup and the security patches. You have full access and full control of the entire infrastructure, but at the same time you have to handle a lot of things in it. And what are the real-life examples? We have the EC2 instance from AWS, a very popular cloud provider; Microsoft Azure VM, a virtual machine; and Google Cloud Compute Engine. All of these are real-world examples you will be dealing with when you dive deeper into cloud technology. The next thing
to discuss in detail is Platform as a Service. As I told you, Platform as a Service means the entire platform is ready for you; you don't need the system setup, but you do have to write the code and deploy the code. You have to make the pizza yourself, correct? That is what Platform as a Service is. It is made directly for developers who want to focus on their app, not the system setup. What you get: an installed operating system, pre-installed frameworks and libraries, a runtime environment, and autoscaling. What does all this mean? The entire infrastructure is already set up. In IaaS you have to do the OS installation and set up the environment all by yourself, but not here; here the platform is already ready. And what do you handle? Just writing and deploying your code, like making the pizza and taking care of the toppings you want. The real-life examples here are Google App Engine, Heroku, Azure App Service and AWS Elastic Beanstalk. All of these are, again, real-life examples from the technical domain. If you're working with the key cloud providers, you will understand this; if not, then after some time of course you will, as we go deeper into this conversation. Now we have
Software as a Service, our last and third key service model. What we have in SaaS is complete, ready-to-use software. You don't have to do anything; the software is completely ready, with no installation and no maintenance needed. Think of using Gmail, Zoom, Canva or Instagram, whatever apps you're using; all of these come under the SaaS model. Why? Because it's ready-to-use software. You just have to access it; you just need an internet connection, and then you can use it anywhere. What do you handle? Of course, just using the app via the browser or the application. The real-life examples, again, are Gmail, Zoom, Canva: apps that are already built and that you use directly. Those apps are called Software as a Service. So these were the examples and features of the cloud service models. Let's compare each one of them individually.
When we talk about the features of the three service models, the control you have is highest in IaaS, because you have the entire infrastructure in your control. With PaaS it's medium, and it's lowest in SaaS, because the entire software is already ready. Suppose you are ordering a pizza: you cannot customize it to your preference now, because the pizzas are already listed in the menu and you have to order from there. So not much is in your control. When we talk about setup time, IaaS of course takes a lot of setup time, because you have to do it manually yourself; you are setting up an entire infrastructure. So the setup time is quite long in IaaS, it's medium in PaaS, and there is no setup at all in SaaS; you just need your device to access the software. Now, user type: IaaS is basically inclined towards system admins and DevOps; PaaS towards developers, because you are writing and deploying code there; and SaaS is used by everyone, the end users, meaning you, me, everyone who uses the app. As for examples, again the same ones: AWS EC2 instances for IaaS; Heroku and App Engine for PaaS; and Gmail, Zoom, Canva, Instagram, Netflix and so on for SaaS. So the basic difference is: SaaS is ready-to-use software, just like your Instagram or Netflix. PaaS is like a developer's playground: you can write and deploy code on the platform available to you. And IaaS is your own cloud infrastructure: you have the entire infrastructure in your control, to do with it whatever you want, with whatever level of control you want. Now
that we have understood the key service models, let's dig into the cloud deployment models: how is cloud technology actually made available to its users? Let's understand it with the three basic options we have. One is called the public cloud, then we have the private cloud, and then something called the hybrid cloud. Let's take a brief example. What is the public cloud? The public cloud is like renting an apartment or a house: the owner is not you, you're just renting it, and it is quite public, because anyone can come and go in that building. Suppose you're living in a society and renting an apartment in it: it's quite public for everyone. Now what is private? A private cloud means your own personal space: you own a house, you have purchased it, it's your private property, and no one can come and go without your permission. That is what a private cloud is. The hybrid cloud is the most interesting among these deployment models. Why? Because the hybrid cloud uses the best of both worlds: you can keep your private details in your private cloud, and the things you want to give other people access to, you can keep in the public cloud. That is how the hybrid cloud uses the best of both technologies, public and private. Now let us understand each one of them in detail using real-life analogies, so you get more of
the features and examples that we have. As you can see right here on the screen, in a public cloud the services are provided over the internet by companies like AWS and Azure, the companies being Amazon and Microsoft, or Google Cloud. Anyone can sign up and use these resources. If you talk about AWS in particular, anyone can access it, correct? Because it's globally recognized and globally distributed. You can access it, I can access it, on the same platform, on the same login page. So what are the features? It is shared infrastructure, because you are working on the same platform that I'm working on. It is cost effective, because whenever you create an EC2 instance, which we will be creating later in the video, you only have to pay for the amount of time you are using it, the duration for which you are accessing the EC2 instance or doing something in it. Next, it is fully managed by the provider. AWS is not managed by you and me; we are just the end users. The provider, Amazon, is the owner of this service, so it is completely managed by Amazon and the AWS admins. Next, it is easy to scale globally, of course, because one provider has built everything and it is shared among all the groups and people keen to use the technology. Now, the examples are hosting websites, storing files like on Google Drive, because of course Google Drive is used by you, me and everyone, the same app used by all, and running applications like Netflix on AWS. To understand it with Netflix as a common real-life analogy: you have the same Netflix and I have the same Netflix; your interests and my interests are quite different, but we are sharing a public platform for our interests. That is what the public cloud is. Now let's move
towards the private cloud. A private cloud is like having your own gated data center, either on premises or hosted by a third party, but dedicated to your particular organization, or dedicated to you. Those who use private clouds are basically banks, governments or hospitals, which run services that are quite confidential. What are the features? It is more secure and customizable, because a private cloud is used by a single organization that holds all the data, so it is both quite secure and customizable. When I say it is used by highly regulated industries: it can be used by a particular person too, but the key users are the banks and governments, which hold private details of entire citizenries. Next, high setup cost and maintenance: of course, if you are keeping something private to yourself or to an organization, it will cost a bit more, because you have more gateways and more security than in the public or hybrid cloud. And the next thing: not shared with others. It won't be shared with others unless you or the provider grant permission and access to give it to others. Next, the examples: banks running core financial systems, governments managing citizenship data, or hospitals storing medical records. All of these organizations, you can see, are very protective of their own citizens' or customers' data. That is how the private cloud is used among organizations. Now we have the very interesting section called
the hybrid cloud, and you will be amazed to know that you are already quite familiar with this model. What this model does is blend both private and public. What does that mean? It can allow data to move between the private and public cloud again and again. So what are the features? It uses the best of both worlds: information that is sensitive you can store in the private cloud, and anything that should be accessible by everyone you can store in the public cloud. Next, we have flexibility and scalability: using the best of both worlds gives you the flexibility and scalability to use both systems as appropriate, and of course it is used for disaster recovery or traffic spikes. What do we understand by this? Suppose you have a lot of data, as a hospital does with a lot of patients. What do we do? We store the sensitive information in the private cloud, and what can be accessed by everyone, like the appointment details or the appointment page, we keep in the public cloud. Take an e-commerce website as an example: it keeps customers' private details, like their names or whatever orders they have purchased, in its private cloud, but the product browsing is the same for you and me and whoever else is using it. Suppose we are surfing Myntra or Amazon: the products displayed will be the same for you and for me, but the information that is private to each of us will be kept in the private cloud. Similarly, healthcare providers use a private cloud for patient data and a public cloud for appointment scheduling. That's how simple it is. So these were the cloud deployment models: public, private, and again the interesting one, the hybrid cloud, which uses the best of both worlds. Now that we have understood it through real-life analogies, let's recap: in the public cloud you are renting an apartment; in the private cloud you have your own house, so you can keep your private data to yourself; and the hybrid cloud is like owning a house while also doing an Airbnb or booking a hotel whenever needed. So who uses the public cloud? Startups and public apps, correct. Then we have the private cloud: it is used by banks, governments and enterprises, whoever needs to keep private data to themselves. And again, the hybrid cloud is used by the likes of healthcare services or e-commerce websites, to keep the patient data or customer data alone in the private cloud and the data for browsing or appointment scheduling in the public cloud. So these were the things we have discussed till now, and I hope the cloud isn't feeling so cloudy anymore to you guys. After this, we are moving to
the very important section, our key cloud providers. What does a key cloud provider actually mean? It is a third-party company that allows all of us to use cloud services, be it AWS from Amazon, Azure from Microsoft, or GCP from Google. These are the very important cloud providers we have, and we will discuss each one of them in detail, step by step. A key cloud provider is simply one that delivers cloud-based infrastructure, platform or software to people who are willing to pay as you go, and that lets organizations scale their resources without maintaining physical hardware, because that's the key feature of the cloud. From Netflix to NASA, all sorts of organizations are using AWS, Azure and GCP for different purposes. So let's discuss each of them one by one, starting with the very popular one called AWS, Amazon Web Services. According to whatever the user needs, it provides IaaS, PaaS and SaaS all in one. It's a comprehensive, ever-evolving cloud platform provided by Amazon that offers IaaS, PaaS and SaaS according to the individual's or organization's needs. The very popular service we'll be working on first is called EC2, which stands for Elastic Compute Cloud. What does that mean? Imagine it as a virtual server provided by Amazon. You rent those virtual servers; like IaaS, you get the entire infrastructure, a hosted virtual server where you can perform all your tasks, whether DevOps or cloud-related things. The very first
service we have is EC2. Similarly, we have other needs: we need to store files, or run code without managing servers. So we have S3 to store data files in the cloud, and Lambda to run code serverless; of course, you don't manage any server with that. Then we have RDS to manage relational databases, and CloudFront for fast global content delivery. All these services are heavily used by many organizations and individuals; as you can see here, AWS is popularly used by Netflix, NASA, Twitch and Airbnb. Those are some of the customers of AWS services. Similarly, just like AWS, we have another platform called Azure, provided by Microsoft. Again, it is a similar kind of cloud computing platform that offers solutions for building, testing and deploying the websites or apps you are creating. The popular service here is Azure Virtual Machines; we will see a demo of this later in the class. Next we have Azure Blob Storage to store large amounts of data, mainly unstructured data, and Azure Functions for event-driven, again serverless, computing. Then we have Azure SQL Database for fully managed SQL databases, and Azure DevOps for CI/CD pipelines and project management, whatever you need to do related to DevOps. And again, the use cases listed here: BMW, HSBC, Adobe and the US government are all customers and organizations that use Microsoft Azure as their primary cloud provider.
Now the next very important and very interesting platform we have is GCP, provided by Google. What does GCP do? It provides its customers with the same kind of infrastructure Google uses for itself, for deploying its own products. The popular services here are Compute Engine to launch virtual servers, just like IaaS; Cloud Storage, a highly durable and scalable storage service; and BigQuery, Firestore and Vertex AI for different kinds of workloads: a petabyte-scale analytics engine, a real-time NoSQL database for web and mobile, and, to build and deploy machine learning models, Vertex AI. If you're getting confused by all these terms right now, don't panic, because this is just a fundamentals video; when you learn this cloud technology in depth, you'll understand each of them very clearly. For now, we'll be demonstrating just two services: AWS EC2 and Virtual Machines in Azure. The organizations using this cloud platform include Spotify, PayPal, Twitter and Snapchat; these are some of GCP's loyal customers. So these were the key service providers of the cloud. Here we have a quick revision, a quick comparison between all of them. When we categorize each
one of them, we have AWS, Azure and GCP. The compute engine we have for AWS is EC2, Elastic Compute Cloud, which is used to make virtual servers; the instance we create in EC2 is where we perform all the tasks we need on the virtual servers. Similar to EC2 in AWS, we have virtual machines called Azure VMs in Azure and Compute Engine in GCP. Talking about storage: AWS uses S3, Azure uses Blob Storage, and GCP uses Cloud Storage. For databases, we have RDS and DynamoDB for AWS, Azure SQL and Cosmos DB for Azure, and BigQuery and Firestore for GCP. Again, I'm repeating the same thing: if you have any issues understanding these concepts, kindly stay tuned to the rest of the video, because we will be
understanding each of the terms in detail and in depth with the trainer. Coming to serverless: if you want to go serverless with any of the cloud providers, we have Lambda for AWS, Azure Functions for Azure, and Cloud Functions for GCP. Serverless means you don't manage any server yourself: you just upload your code, the provider runs it on demand, and you pay only while it runs. You don't need to create or maintain a virtual machine at all for that piece of code.
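As a tiny sketch of that idea, here is what a serverless function could look like in Python on AWS Lambda. The event field is hypothetical; AWS runs this handler for us only when an event arrives, with no server of ours involved.

```python
import json

# AWS Lambda calls this function for every incoming event;
# there is no server for us to create, patch or keep running.
def lambda_handler(event, context):
    order_id = event.get("order_id", "unknown")  # hypothetical field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"processed order {order_id}"}),
    }
```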
For DevOps, we have CodePipeline for AWS, and similarly Azure DevOps for Azure and Cloud Build and Cloud Run for GCP. AWS, Azure and GCP are the leading forces among the cloud providers we have today, and knowing all the services we have discussed so far will help you pick the right tools and options to build scalable solutions. As we move further in the video, there's a basic
demonstration for you guys: first AWS EC2 for an instance, and second, a virtual machine in Azure. Let's take a quick demonstration of each. First of all, let's start with AWS. We have a quick login page here where you enter your credentials, or if you don't have an account, you can create a new AWS account. For now, I'm using my personal account. As soon as you log in with your credentials, what happens? A default page appears in front of you where EC2 is shown, because we are trying to create a virtual server here; we are trying to create an instance. So what I have to do is click on EC2, and here I can either search for instances or create an instance, whatever I want to do. For now I can see the instances that are running; it's showing zero because we haven't created one yet. Every instance that is running, has been started, or has been terminated will show up right here. So let's create one by clicking Launch instance. We just have to click there, and this kind of default page should come up on your screen. First of all, we have the number of instances: you can create one, two, three, as many instances as you want, but for demonstration purposes I'm just creating one. Now we have to give a name and tags to our virtual server, describing what kind of server I'm using. I'm just naming it my first
EC2, so that is what I'm creating; just give it a name. Now you have to select the application and operating system that you'll be working on. We have different operating systems here: Red Hat, Windows, Ubuntu, macOS and Amazon Linux. You can use any one of them, but for now I'm selecting Ubuntu because it's quite beginner-friendly for all of us. It will take a moment to load. Then we have many types of VM images, whatever you want to use, but for free-tier eligibility I'm using the first one, Ubuntu Server. A confirmation page appears, but I'm not changing anything because it was already selected. Next we have the architecture, whatever kind of architecture you want; I'm selecting 64-bit because we are creating this just for demonstration purposes. Here's the instance type. This is very important when you're working with it, because there are a lot of instance types: nano, micro, small, medium, whatever size you want for your instance. Because of free-tier availability, I'm using t3.micro. And here's the interesting part, the key pair for login. You can create your own key pair as per your needs, whatever you're comfortable with. For now, I can see the key pairs that have already been created here and I can use one of them. These are existing key pairs already created in this account; a key pair isn't shared with everyone, only with the organization or group I'm willing to share it with. Then we have the network settings. You can allow SSH traffic, and you can allow HTTP or HTTPS, according to your needs. After that, you just have to launch the instance, whatever sort of instance you want to create. After launching it, if any issue occurs it will be shown here; if not, the instance will be
created. So you can see we have got a success message along with the instance ID that has been created. We go back to Instances, and here we can see that our instance is ready; just wait for it. Yeah, so here I have my first instance. Now, it's not enough just to create an instance; we also have to connect to the machine. We have created the instance, and it is showing as running, but we're not connected to it yet. To get access to the virtual server, the Ubuntu machine and operating system we will be working on, we first have to connect, by clicking on the instance and choosing Connect. You can connect using either a public IP or a private IP. This relates to the networking we were talking about: a private IP is only reachable from inside the network, while I could also connect from my own local machine by opening bash or the cmd prompt. But since I'm just using this for demonstration purposes, I'll connect using the public IP; both IP addresses are shown on the instance I have created. Let's connect and see what happens. Now my EC2 instance is ready to use. Here you can see, while it's establishing the connection, that I have both the private and the public IPs. So this is it. Now you can perform whatever operations you want here; whether you are from DevOps or cloud technology, the instance that has been created is free for you to use. But one more thing you have to remember: after it is started, it is billable. For whatever time you are using it, it has a cost. For now we are not running anything in particular, but you could type any command; suppose I'm working on DevOps and I want to install Docker, or check whether Docker is already installed here or not. That is the command I run, just as we do in a Linux shell or the Windows cmd prompt. So that's all for the setup and demonstration part.
So we can cut it from here directly. But you have to remember that you have to terminate this instance. Why? Because if
this uh keep on running. So as you know that cloud computing is a kind of pay as you go system right. So for the whatever
time you're using it it will be chargeable. So you have to delete it after you have created it you have used
it for your purposes the whatever thing whatever demonstration you wanted to use you have done it. So just click on it
and go on the instance state and try to terminate or delete the instance. Now just terminate it from here. So yes it
has been successfully initiated the termination and you can see in the instance state that was shutting down
and the instance has been gone. So this was the part where we used AWS EC2 instance to create a virtual server to
Now, we have one more key provider, called Azure, which is again very popular. I've skipped the login page part; you just sign up or log in with your credentials, and this is the default page that opens. We knew that in AWS we created an EC2 (Elastic Compute Cloud) instance to get a virtual server; here we have a virtual machine. You can see all the services provided here, but to work on a cloud server, on a different operating system, we create another resource, called a virtual machine: either click it here, or click 'Create a resource', or just click on Virtual machines. So where AWS has EC2 instances, Azure has virtual machines, and it's a similar kind of thing; everything you saw happen with the EC2 instance happens here as well. The virtual machine gets created here, and the steps are quite simple: you create one virtual machine, you have all the options, and since we're working on a hosted platform, we'll create one virtual machine here with steps very similar to the ones we just followed.
Now you have the resource group you want to use; you give the virtual machine whatever name you like; you set the security type and the architecture (there we were using 64-bit, and you can pick the same sort of architecture here); then comes the administrator account, where you can either use an SSH public key or a username and password of your choosing. After this, you just click 'Review + create'. As soon as you do, Azure reviews the configuration automatically; if there is some problem, or you haven't filled something in correctly, it displays it right on your screen, on this same page. So once the review passes and you create, you get an entire virtual machine, similar to the EC2 instance we created in AWS.
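For completeness, the same VM creation can be scripted with Azure's Python SDK. Treat this as a rough sketch under several assumptions: the resource group, network interface, and credentials are presumed to already exist, and the names, subscription ID, and Ubuntu image reference shown are illustrative, not prescriptive.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Hypothetical subscription ID and resource names throughout.
compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = compute.virtual_machines.begin_create_or_update(
    "demo-rg",      # assumed pre-existing resource group
    "demo-vm",
    {
        "location": "centralindia",
        "hardware_profile": {"vm_size": "Standard_B1s"},
        "storage_profile": {
            "image_reference": {   # illustrative Ubuntu 22.04 LTS reference
                "publisher": "Canonical",
                "offer": "0001-com-ubuntu-server-jammy",
                "sku": "22_04-lts-gen2",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "demo-vm",
            "admin_username": "azureuser",
            "admin_password": "<a-strong-password>",
        },
        "network_profile": {
            # A NIC must exist already; its full resource ID goes here.
            "network_interfaces": [{"id": "<existing-nic-resource-id>"}]
        },
    },
)
print("Provisioned:", poller.result().name)
```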
So that was it for the demonstration part on AWS and Azure, the two platforms we had in mind because they are so popular. Now we're coming to the last section of our video, and we have something interesting for every one of us: the benefits, the challenges, and the career paths available after learning cloud computing. Let's dig into it, starting with the benefits. We already know the cloud gives you ready-to-use services, and it's quite beginner friendly, which is why so many people like using this kind of technology.
Coming to the benefits, first of all there is cost saving, of course. As we saw with EC2 and the Azure virtual machines, you pay only for the duration you are using or accessing a resource. It's like having 10 GB of storage and paying for those 10 GB only, not for 100 GB the way you used to back in the days of physical IT servers. That's the idea: you pay only for the time you use, and there is no upfront hardware investment. The real-life analogy is simply taking an Uber instead of owning a car when you are new to a city.
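To make 'pay only for the time you use' concrete, here is a back-of-the-envelope estimate in Python. The hourly rate is an assumption for illustration; it's roughly what an on-demand t2.micro has cost in some regions, but always check the current price list.

```python
# Illustrative pay-as-you-go arithmetic; the rate below is an assumption.
hourly_rate_usd = 0.0116          # ballpark on-demand t2.micro rate
demo_hours = 6                    # a short hands-on session

print(f"Demo session: ${hourly_rate_usd * demo_hours:.4f}")
# Forgetting to terminate changes the picture entirely:
print(f"Left running 30 days: ${hourly_rate_usd * 24 * 30:.2f}")
```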
Next, speed and agility. You saw how easily we were able to launch an instance or a virtual machine; it really is that easy to launch a resource whenever you want to work with cloud technology, just like ordering food from an app. Then there's scalability. Suppose you and I are both using AWS EC2 and a lot of instances are being created: AWS automatically grows, adding servers to handle that much traffic, and when the traffic is low, with only one or two instances around, it automatically scales back down. That's a very good benefit of a cloud service: it can expand or shrink like a balloon according to demand, and you don't have to worry about anything. Next we have global access. You can of course use the cloud from anywhere: you can open your EC2 instance right there, or, for a clearer everyday example, open Instagram wherever you are; if you're travelling to some other place, you can open your EC2 instance or Instagram there just the same. It's a ready-to-go system you can use anywhere, anytime, on any device, just like Netflix, Gmail, or your favourite app, Instagram.
Next we have security and backup. Cloud providers offer advanced security. With EC2 you have your key pair and login credentials keeping things very private and, you could say, secured; similarly, with virtual machines we have usernames and passwords (Azure doesn't create one for you, but you can set one), keeping whatever is in the virtual machine secured and private to you. That's an important thing the cloud provides, and it's just like keeping your valuables in a bank. So the major benefit the cloud comes with is not just speed: it makes innovation much cheaper, far more scalable, and global to use. These are the benefits that come along with the cloud.
Now, you know the cloud is powerful and has lots of benefits, but it is not magic; there are some challenges in it too. Talking about the challenges: as we discussed, security is provided to a very high degree, yet there are still security risks, because misconfigured services can expose data, the data of one country to another, or of one bank to another, and that is quite a challenge to stay secure against. The classic example is public S3 buckets being leaked: S3 buckets in AWS, which are used to store data and files, can leak at times if they are not configured properly and not kept under a watchful eye. That is one of the major challenges in cloud computing.
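One concrete guardrail for that example: S3 exposes a block-public-access setting you can enforce per bucket. A minimal boto3 sketch, with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Block every form of public access on the bucket (name is hypothetical).
s3.put_public_access_block(
    Bucket="my-demo-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```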
Next, cost overruns. A very simple example is an EC2 instance: you have finished your work but keep it running, and even a day or two of idle runtime can cost you a hefty amount. If you overuse resources, or keep them running when they aren't needed, your bills can spike, and a lack of monitoring can make you pay hefty amounts before you notice. The example is the same again: leaving your EC2 instance running.
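A small audit script helps catch forgotten instances before the bill does. A sketch with boto3:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# List every instance still in the 'running' state, so nothing is forgotten.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["InstanceType"], inst["LaunchTime"])
```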
Now, compliance and legal requirements. It's a simple thing: some data must stay within certain countries. Financial systems, healthcare, or government data should stay under one government; the Indian government, for instance, holds data that shouldn't leak to the American or Chinese governments. Those are the compliance and legal terms we need to take care of when dealing with cloud computing, especially if you're working for a key cloud provider. Talking about vendor lock-in: migrating from one cloud to another can be a struggle at times. There are certain types of services and apps that are AWS-friendly, and you will have some sort of issues making them work with Azure. Why? Because AWS is handled by one company and Azure by another, and the two have somewhat different features and technologies. So it can be an issue that an app built to be AWS-friendly is a little difficult to run on Azure.
Next we have downtime and internet dependency. What do we mean by that? You know that the entire cloud is hardware-less from your side: everything you need sits on servers reached over the internet, so the one thing you have to be highly dependent on is the internet. If your connectivity is slow, or the cloud provider's uptime drops, it can impact real-world workloads. What does that mean? Suppose you have a poor internet connection while running an EC2 instance or a virtual machine: the connection to the server can break, and a few of the commands you typed might not get saved. These are the things that can impact real-time apps on Azure, AWS, or whatever you're working with. So again: the cloud is very powerful, but you have to handle it smartly and have a good setup to keep it cost-controlled and secure. Those are the challenges we have in cloud computing.
Now for the last and very interesting part for every one of us: the roles on offer once you learn cloud computing. If you're into cloud engineering, if you love building and managing cloud infrastructure and creating something new and useful on top of it, you can become a cloud engineer after learning AWS, Azure, Linux, and Terraform. Terraform is really a DevOps topic, and Linux is a separate technology, an operating system in its own right, while AWS and Azure are what we have just worked on; those are the things you need to be a cloud engineer. If you're into DevOps and like automating deployments with CI/CD pipelines (CI/CD meaning continuous integration and continuous deployment), you can become a DevOps engineer; all you have to learn, and learn well, is Jenkins, Docker, and Kubernetes. You can become a cloud architect if you like designing end-to-end cloud systems, end-to-end setups designed for a particular kind of provider or customer; for that you need architecture knowledge and deep cloud skills. Then we have the cloud data engineer, who handles data pipelines and storage; for that you need to learn tools like BigQuery and Azure Data Factory.
You can learn anything and everything in detail if you are keen on the cloud and your fundamentals are quite clear from the beginning. Next, the very popular job role among all of us, AI and ML engineer, here in its cloud-dominant form: you train and deploy models on the cloud, so you need to learn Vertex AI and SageMaker, plus the most important language in data science and AI/ML, which is Python. Those are the things you need to be an AI/ML engineer. Similarly, we have two more familiar job roles: cloud support engineer and cloud security analyst. As a cloud support engineer you manage cloud operations and the billing of the setup, for which you need basic cloud knowledge and customer handling; as a cloud security analyst you monitor and secure cloud environments, for which you need IAM, firewalls, and threat detection. So those are the job roles. If you're intrigued to learn cloud in more depth, you have to understand all the skills mentioned here, and be very patient and very skillful with them, if you want a career in any of these roles. And if you are still interested, you do have to learn cloud in real depth, because these were just the fundamentals we've covered so far.
So that's a wrap for this video. I hope the cloud doesn't feel cloudy anymore, because we have understood all the fundamentals that were essential up to here. You know now that the cloud is not just a trend: it's the backbone of today's digital world and the power behind tomorrow's innovation. So if you're learning cloud, make sure you give it all the time and attention it needs from you, and learn as much of it as you can; from this point onwards, our professional trainers and industry experts will carry on this video. Just a quick info, guys: Intellipaat brings you an executive post-graduate certification in cloud computing and DevOps in collaboration with iHUB DivyaSampark, IIT Roorkee. Through this program, you will gain in-demand skills like AWS, DevOps, Kubernetes, Terraform, and Azure, and even cutting-edge topics like generative AI for cloud computing. This 9-month online boot camp features 100+ live sessions from IIT faculty and top industry mentors, 50+ real-world projects, and a 2-day campus immersion at IIT Roorkee. You also get guaranteed placement assistance with three job interviews after entering the placement pool. And that's not all: this program offers Microsoft certification, a free exam voucher, and even a chance to pitch your startup idea for incubation support of up to 50 lakh rupees from iHUB DivyaSampark. If you are serious about building a future in cloud and DevOps, visit the course page linked in the description and take your first step toward an exciting career in cloud technology.
We discussed data centers: an organization would require a data center, and those data centers hold the servers; the servers are deployed within the data centers. Now the question comes in: what about the data centers owned by Amazon Web Services? Say you work for a company like Accenture, Wipro, HCL, Infosys, JP Morgan, AIG, or Apple. When these companies start using cloud vendors like AWS, Microsoft Azure, GCP, or Salesforce, they become those vendors' customers. For example, Apple says, 'we want to use Microsoft Azure to deploy our workloads and applications'; in that case Apple would be the customer of Microsoft Azure, of Microsoft. So think of yourself as a customer, or as an employee working in an organization that is the customer of one of these providers.
The next question is how exactly you leverage the infrastructure owned by your cloud vendor. Suppose you're working for Apple, or suppose you're working for Netflix. Netflix is a customer of AWS and runs its various applications and workloads on AWS. Being part of Netflix, you have to understand exactly how Amazon Web Services has scattered its data centers, how it manages them in different parts of the world, because Netflix doesn't offer its service only in the US; it offers services in a lot of countries, including India. So you have to understand the global infrastructure of AWS: how they place and cluster their data centers across different countries and continents, say in Africa, India, the US, and Europe. That's where the concept of global infrastructure comes into play; we need to understand the concept called global infrastructure.
Let's try to understand it. This, I would say, is the first real topic of AWS itself, and its name is AWS global infrastructure. It consists of different components, different parts: regions, availability zones, edge locations, and local zones. There will be two things I'll discuss with you right now, regions and availability zones; edge locations and local zones we'll discuss afterwards. So, let's discuss the AWS global infrastructure.
The very first component, or part, of this infrastructure is a region. What is an AWS region? An AWS region is nothing but a geographical location, a piece of land, which consists of a cluster of data centers owned and managed by AWS.
What's a cluster? Some people ask me this question. A cluster simply means a group. So a region is a geographical location, basically a piece of land, and most regions are named after cities or countries. It can be any name: in India, for example, there are two regions, Hyderabad and Mumbai; in Australia, Sydney and Melbourne are two regions; the UAE has a separate region; Cape Town in South Africa is a separate region; Singapore is a separate region; Japan is a separate region. A region is nothing but a geographical location consisting of a group of data centers owned and managed by Amazon Web Services, the cloud provider. We are the customers: suppose we work for Netflix or JP Morgan and use AWS to deploy our resources; those data centers are used by us, but they are owned and managed by the cloud provider. So a region is a piece of land, a geographic location, where Amazon's owned and managed data centers run and are operated by Amazon. Amazon Web Services says, for instance: in Cape Town I have my three data centers running, so I mark Cape Town as a region. Now, the cluster of data centers inside a region is denoted by a special term. What is that special term? It's called availability zones, abbreviated as AZs.
Availability zones are the data centers, the facilities housing racks of servers. One availability zone, one AZ, equals one or more data centers; a single AZ can have one or more of them. And in each region there are at least three AZs, to make your data more redundant and your applications more available. So basically these are the two terms you have to remember and learn: a region is a geographical location consisting of a cluster of data centers, and those clustered data centers are called availability zones. Availability zones are the data centers, the facilities that house racks of servers; one AZ equates to one or more data centers, and each region has at least three AZs. There can be more than three, but you'll find a minimum of three availability zones. Fine.
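Both of these can also be listed programmatically. A hedged boto3 sketch (the calls are the real EC2 APIs; the region name is just an example choice):

```python
import boto3

# Point the client at any region, e.g. Mumbai (ap-south-1).
ec2 = boto3.client("ec2", region_name="ap-south-1")

# Every region visible to this account.
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"])

# The availability zones inside the client's own region.
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])
```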
These are the two terms you have to remember. I'll give a few examples here so that you understand this concept in and out. I've just logged into my AWS console, into my account. If I go to the top right-hand corner, where you can see my cursor, you'll see the listed regions: North Virginia, Ohio, California, Oregon, Hyderabad, Melbourne, Mumbai, Seoul, Singapore, Sydney, Tokyo, Canada, Frankfurt, Ireland, London, Paris, Stockholm, Cape Town, Hong Kong, Jakarta, Milan, Spain, Zurich. These are the existing regions; the console shows exactly which regions exist and gives the details of the regions and availability zones owned by AWS.
Going back, let me show you the map. As of now they have 31 regions, running 24/7. On this map, the green dots you can see are the existing regions; that's why we say a region is a physical location, a geographical location, a piece of land. The green dots represent regions: this green dot represents Sydney, this one Melbourne, then Jakarta, Hyderabad, Mumbai, UAE, Bahrain, Spain; this one, for example, is North Virginia; and here, for example, Milan, Spain, Ireland. So the green dots represent existing regions, and each region is a physical location consisting of a chain of data centers, a group of data centers, which are called availability zones. The ones shown as red dots are the upcoming regions, coming soon, which means they'll be set up in the next few months. It's work in progress: launching a chain of data centers takes a lot of time, and very soon you'll see these show up too. As of now the exact figures are: 99 availability zones in 31 geographic regions. Each availability zone can be one or more data centers; generally we assume one data center, so suppose 99 data centers, though there could be more than 99 since a few AZs consist of two data centers. There are announced plans for 15 or more availability zones in five more regions: Canada, Israel, Malaysia, New Zealand, and Thailand. I think by the end of this year you'll see most of these regions deployed; Canada should show up first, since they announced a second Canadian region, in Montreal, a few months back, but let's see. So these are the regions that exist, and some are still planned and will be deployed very soon. I can also show you the definition, just in case, if I go to this other link.
You can see it stated there: AWS has a concept of a region, which is a physical location around the world where they cluster data centers, and each group of logical data centers is called an availability zone. That's why I said an availability zone can be one or more data centers, and there are at least three availability zones in one region. Each region consists of a minimum of three isolated and physically separate AZs within a geographic area. The point is to isolate hardware failures and power outages: I believe there is a minimum distance of around 50 to 60 km between the AZs. So if one AZ goes down, because of a power outage, floods, or some natural disaster, the other availability zones can easily keep serving you, in the form of the workloads and applications running inside them. Now, what is an availability zone? An availability zone, AZ for short, is one or more discrete data centers with redundant power and networking connectivity in an AWS region; in other words, an AZ equals one or more data centers, each with its own power and networking connectivity, inside that region.
Let me give you two or three examples before I take questions forward. First things first, we need to understand that each region is given a special name. For example, the one labeled us-east-1 is North Virginia. AWS doesn't go by the geographical names; they assign their own codes. How did they arrive at us-east-1? North Virginia is on the east coast of the US, and it was the very first region on that coast; in fact, us-east-1 is the very first region Amazon set up when it started out as a cloud provider, Amazon Web Services' first region. Being the first region on the US east coast, it was labeled us-east-1, and Ohio, the second region on the east coast, is us-east-2. On the west coast, the first region, California, is us-west-1, and us-west-2 is Oregon, the second region on the west coast. In India we have ap-south-1: AP stands for Asia Pacific, and in the southern area the very first region was Mumbai, ap-south-1, with Hyderabad second, ap-south-2. You don't have to memorize these names; they're just the labels, the abbreviations or special names AWS has given to the different regions. So, for example, let's pick one; let's go with Mumbai.
Mumbai was the very first region in India, launched a few years back, in 2017 or 2018 I think; I'm not sure about the exact date. Anyhow, every region is a separate infrastructure on its own; it is independent from other regions. For example, if you deploy resources in Mumbai, they stay in Mumbai only. They will not be visible in, or part of, Hyderabad or California or Virginia. So assume that every region is a complete infrastructure by itself. If I use Mumbai and deploy my resources there, they stay in Mumbai; of course I can access them remotely from another location, but they remain part of the Mumbai region. That's why you have to switch dashboards between regions. Right now I'm in the Mumbai dashboard, so I can access my resources in Mumbai; if I switch to California, I'll be accessing my resources in California. Understood? Each region is its own complete infrastructure, independent of the others; that's the reason you switch back and forth between regions to access their individual dashboards. If I deploy resources in California, they stay in California; if I have to access my resources in Sydney, I switch to Sydney's dashboard. And notice that when I switch between regions, the link, the URL at the top, changes. Every region is isolated, kept separate from the others; it is a separate entity with its own identity. So to access and deploy resources in Sydney, I have to explicitly switch to the Sydney dashboard and start launching my resources inside Sydney; I can't deploy resources in Sydney while using the Melbourne dashboard. That's something you have to understand. Now let's take the example of Mumbai; I was in the middle of giving you some examples.
Mumbai is labeled ap-south-1; that's the special name assigned to it. Now suppose I want to see the availability zones inside Mumbai. I go to EC2. EC2 stands for Elastic Compute Cloud, the service that helps me deploy virtual machines in Amazon's cloud; whenever I want to deploy virtual machines, or instances, EC2 instances, in the cloud, I use the EC2 dashboard. So let me show you the Mumbai region now. This is the region of Mumbai, labeled ap-south-1, and inside ap-south-1 I have three availability zones: ap-south-1a, ap-south-1b, and ap-south-1c. So what is ap-south-1? It's the Mumbai region, and ap-south-1a, 1b, and 1c are the three availability zones, interconnected with high-speed fiber-optic cables. These are the three clustered, interconnected availability zones operating inside Mumbai. Fine.
I'll give you two or three more examples for extra clarity. Let's go to Sydney. Sydney is labeled ap-southeast-2. Which region is ap-southeast-2? It's Sydney, the region of Sydney. So ap-southeast-2a, which is this one, together with 2b and 2c, are the three interconnected availability zones (availability zones, sorry, not regions), three AZs interconnected with high-speed fiber-optic cables. And where are they deployed? Inside this region, Sydney, which is ap-southeast-2. So ap-southeast-2 is the Sydney region, consisting of the three interconnected availability zones ap-southeast-2a, ap-southeast-2b, and ap-southeast-2c. Let's take one more example and jump to the next region, Frankfurt, which is eu-central-1. If you click on it, you'll see the page change immediately; the details update automatically. Frankfurt is represented by the term eu-central-1: the central part of Europe, Frankfurt in Germany, hence eu-central-1. It consists of three interconnected availability zones, labeled eu-central-1a, eu-central-1b, and eu-central-1c; three interconnected AZs located inside eu-central-1, the Frankfurt region. So a region is a geographical location consisting of at least three AZs, or availability zones, and those availability zones represent groups of data centers; there are at least three AZs inside a region, at least. And I'll show you the region with the most AZs, North Virginia. If I go to North Virginia, the very first region set up by Amazon Web Services, it is labeled us-east-1, and it consists of these six availability zones: us-east-1a, 1b, 1c, 1d, 1e, and 1f. Six availability zones inside the region of North Virginia, which is us-east-1.
So that is the concept of regions and availability zones. Now, I know this question will come up: how do we achieve high availability across them; can we use all the regions, or all the availability zones? The answer is yes. We make use of tools like load balancing, Global Accelerator, Route 53, and auto scaling to disperse and scatter resources across these AZs and regions and use them as part of a single application. How do they cluster the data centers? Basically, the data centers are interconnected with high-speed fiber-optic cables; that's how they're clustered. I guess that's clear now. As for how to make use of all or some of them: we'll cover that alongside the services we discuss, understanding each one as we go. Right now we're just seeing the lists of regions and availability zones; how to use them we'll discuss once we start the process of launching services. For example, when I deploy servers or virtual machines, I can explicitly choose the region of my choice and the availability zone of my choice; I can do both. Once we start the launching process, we'll get into that. And to see the availability zones, go to the EC2 dashboard: the service health dashboard is where you can see the list of availability zones inside the region. All right, that's it for this topic.
Very well. So guys, let's get started with the next topic, which is quite important for us to understand: EC2. Now, EC2 as a topic is quite extensive; it requires a lot of things to be discussed, so we'll cover them one by one, including basics like instance types, AMIs, and all that kind of stuff. So what is EC2, and what is the concept all about? When you start launching your services and applications, it's important to understand a concept called EC2, Elastic Compute Cloud. Let's try to understand it. What do we mean by Elastic Compute Cloud? The full form tells you: you get resizable compute capacity in the cloud. Elastic means it can be resized; it is quite flexible to use. Elastic Compute Cloud is a service, and I'll explain the definition in depth so that you don't face any trouble while working with it. We'll start with the basics, including AMIs and instance types, because those are the two basic things we have to discuss first. So, Elastic Compute Cloud provides resizable compute capacity in the cloud. What do we mean by resizable? I'll come to that; I'll explain every single word of this definition so that later on you understand exactly what it means. Let's take one example.
Let's imagine this is the AWS cloud, and inside it I'm running a virtual machine: a virtual machine running inside Amazon's cloud, the Amazon Web Services cloud. Amazon labels this virtual machine an EC2 instance. What's an EC2 instance? It's a virtual machine, a VM, running in the AWS cloud; that's all an EC2 instance is. Now, this VM has some compute capacity. What do I mean by compute capacity? Imagine you want to buy a smartphone, or maybe a tablet, a laptop, or a desktop: you look at its physical specifications, meaning the amount of storage, the RAM, the processor it has, the graphics. For example, if I want to buy an Apple laptop, a MacBook, I'll check whether it's an Intel chip or Apple silicon; if it's a Windows laptop, I'll check which of the many chipsets available in the market it uses, the amount of RAM I'm getting, the processor, the storage, whether it has dedicated graphics or not, and how many cores it has; nowadays we go for 8-core or 12-core chips. That is compute capacity: it tells you how powerful the machine is. If I go with an 8-core CPU, 32 gigs of RAM, and 2 TB of SSD storage, that's decent performance I'm getting from a laptop.
That's what you do when you go to buy a computer for yourself. But now you're not going to buy a computer; you're going for a virtual machine. And whether it's a virtual machine or a computer makes little difference: a virtual machine is a computer, a server; server, computer, virtual machine, it's the same thing. The only distinction is that you can't see it physically in front of you; it's virtual in nature. So a virtual machine has some compute capacity, exactly as you'd check the compute capacity of your mobile phone, your laptop, a desktop, a server, anything you want to buy. Here, compute capacity refers to the processor you're using, the amount of RAM, cores, storage, graphics, network performance, and so on. I've already given you the everyday example: tomorrow you go to a store to buy a laptop, and you check what RAM you're getting, which processor, how many cores there are, what graphics it gives you, what storage. Similarly, for an instance we talk about the amount of RAM, cores, storage, graphics, network performance, and so on. Now, AWS uses special names for everything; Amazon likes its acronyms. In terms of Amazon Web Services, this compute capacity is represented, depicted, by a special term, a special concept, called the instance type. What's an instance type? It's the virtual representation of the underlying compute capacity offered by an instance: whatever compute capacity a specific instance offers us is represented by this special term, the instance type. Let me give you some examples.
For example, there's an instance type in AWS called t2.micro. I'll explain every single thing, don't worry. The following is the compute capacity of a t2.micro instance: it has a 2.5 GHz Intel Xeon processor; it has 1 GB of RAM (we say 1 gig or 1 GB, same thing); it has one virtual core, a single core, which we call one vCPU; you can attach from 8 GB up to 16 TB of solid-state disk or hard-disk-drive storage; and it offers low-to-moderate network performance. So those are the physical specifications of the instance type t2.micro, which is free of cost for us; we can use it on a free-tier account. Let me come back to the concept once more. An EC2 instance is nothing but a virtual machine running in the AWS cloud. It offers you some compute capacity in the form of the amount of RAM, cores, storage, graphics, network performance, and so on. That capacity is depicted by something called an instance type, and an instance type is nothing but the virtual representation of the amount of compute capacity a specific instance offers you. We use special names for them; t2.micro, for example, has the compute capacity just listed: a 2.5 GHz Intel processor with 1 GB RAM and a one-core CPU, storage attachable from 8 GB to 16 TB of SSD, and low-to-moderate network performance. Basically, it's decent for deploying small applications. Fine. Now let me show you the list of them.
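Those specifications can also be queried through the API. A hedged boto3 sketch using the describe_instance_types call:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up the published capacity of one instance type.
info = ec2.describe_instance_types(InstanceTypes=["t2.micro"])["InstanceTypes"][0]

print("vCPUs:     ", info["VCpuInfo"]["DefaultVCpus"])
print("Memory MiB:", info["MemoryInfo"]["SizeInMiB"])
print("Network:   ", info["NetworkInfo"]["NetworkPerformance"])
```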
In the console, I go to the EC2 dashboard and, on the left-hand side, open Instance types. It's not loading, so let me switch to a different region. Now it says it's loading the instance types. As of now, in North Virginia, say, I have 64 instance types. An instance type is a kind of make and model for every kind of instance you can deploy. Take the first example, t1.micro: it gives me one core, a one-core CPU; this is the architecture we can use with it; and this is the memory, only about 0.6 GB, that's it, with very low network performance. t2.micro is a one-core CPU with only 1 GB of RAM and low-to-moderate network performance. Let's take some more examples out there.
Just give me a few seconds. Now, how do you decide the instance type? It depends on a number of factors, chiefly the types of applications and services you'll be running. For example, say I want to run a MySQL database application on the instance in question. Suppose the MySQL vendor says at least one gig of RAM is required to run a MySQL database on a virtual machine, and at least 4 gigs of RAM to deploy an enterprise-level MySQL database. So, based on the type of applications and services you'll run on these instances, you have a list of instance types to choose from, and you pick whichever most closely matches that specification. Second, once you start launching services and applications, since you'll be running software on these instances, the software vendor or software-as-a-service provider will give you a recommendation: this is the best instance type to use so that our application runs properly, these are the bare-minimum specs you need. Forget instance types for a moment: whenever you download any free or open-source software from the internet, you see that at least 1 GB of storage is required for it to run properly, at least 1 GB of RAM. Every vendor gives a specific specification for its software to run properly, whether it's a MySQL database, an Oracle database, an Apache proxy server, or, suppose, a Python web application you have to deploy. So when you start deploying applications and services, you go for a specific instance type based on your past experience and on the vendor's recommendation. There is no option for customized instance types; there are more than 64 of them, and Amazon has already created the complete list, so you choose any one of them. An instance type is nothing but the make and model of an instance. Don't mix it up with Elastic Compute Cloud itself. Why is it called Elastic Compute Cloud? Because it provides resizable compute capacity. Resizable, or elastic, means that whenever you deploy an instance with a specific instance type, it can be changed afterwards. We'll discuss the resizing options carefully, because otherwise you'd be merging two concepts together.
We have two options to resize our instances. The first is vertical scaling. Vertical scaling means you can change the instance type. For example, today you deploy an instance with the instance type t2.micro, which offers you only 1 GB of RAM. Later you decide 1 GB isn't sufficient for your needs; you need 4 GB. What you can do is change the instance type of the running instance, practically on the fly; of course, you need to stop the instance first, and we'll discuss the process afterwards. So, say I want 4 GB of RAM: I go with t2.medium, which gives me 4 GB. In this way I can change the instance type of the virtual machine, saying I want to change it to t2.medium with its 4 GB of RAM. This is one facet of the concept of elasticity. Elastic means that choosing a make and model, an instance type, doesn't fix you to it; it's malleable, flexible, which is why it's called resizable. And don't confuse that with the definition of the instance type: the instance type is the make and model of the instance. It's precisely because you can change the instance types of instances later on, adding more RAM or storage afterwards, that these are called Elastic Compute Cloud instances; they are elastic, flexible. Okay, this is called vertical scaling.
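Here is what that vertical resize looks like in code. A hedged boto3 sketch with a hypothetical instance ID; the stop-modify-start sequence mirrors what the console does:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
instance_id = "i-0123456789abcdef0"  # hypothetical ID

# 1. An instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# 2. Vertical scaling: t2.micro (1 GB RAM) -> t2.medium (4 GB RAM).
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t2.medium"},
)

# 3. Start it again with the bigger capacity.
ec2.start_instances(InstanceIds=[instance_id])
```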
The next type of scaling is called horizontal scaling. What do we mean by horizontal scaling? Horizontal scaling means going wide; you widen out. And this is the recommended approach, the one the provider recommends; it's the more beneficial one. What's the difference? With vertical scaling, you keep the same server and just change its instance type, going from a smaller type with 1 GB of RAM to a bigger type with 4 GB; you scale up. Horizontal scaling means you go wide: overall you still want 4 GB of compute capacity. It's all about compute capacity; that's what we're playing with. (I repeat: compute capacity means RAM, cores, storage, graphics, performance.) I want the same 4 GB of RAM, but I don't want to depend on just one instance. Imagine that on your instance you're running a Python web application, a single application running on it. The problem with the vertical approach is that you're adding more resources to a single server. What if that single server goes down at some point? Imagine it does: your entire application is down, because your entire application runs on that one instance. You've increased the capacity, you're paying extra for 4 gigs of RAM, but because you're depending on a single server, if that server goes down your complete app is down and you can't do anything about it. So horizontal scaling is the more modern approach; the vertical one is the more conventional, traditional, even outdated one. Horizontal scaling says: you need more RAM, so why not have that 4 GB of RAM across four different instances?
instances? So instead of So what we do is that instead of having
4 GB RAM in this one instance, I have four instances with 1 GB RAM each. Okay.
I distribute my request coming from for example a load balancer with discuss but I I distribute
my request coming in or the the the data that it has to process it's being distributed between these different
instances the traffic distribution basically the traffic or the request distribution
is done between them requests distribution.
It's been done between them and each of these instances process the request. So basically you're
paying the same amount over here. You're getting 4GB in one server but you're getting one G you're
getting the total amount of 4GB but in different servers. Now the advantage is that you you can place these servers of
virtual machines in different uh regions or maybe availability zones. For example, I can say let's let's uh let's
take this example. Let's imagine that uh the first two servers you can see they are um the first two servers you can see
uh server A and server B right they are in the in the region of for example in North Virginia
which is US East one and inside North Virginia They're in different availability zones
of North Virginia. US East 1A and this is running in US East 1B.
All right. The one which are there instance A and B are running in for
example EU North one which is Frankfurt in Europe. Frankfurt is labeled as EU
north one. So this is the EU north 1 A and this is EU
north 1B availability zone. So what I'm saying is I'm intentionally putting my stresses in different regions
than everybody zones inside them. So they are less prone to failures. I'm intentionally putting them into
different location distribution of the traffic or the
requests are being done between these different instances. So this is a more modern and more smart
approach. This is a smarter approach compared to this one. You're paying for the same amount 4 GB RAM. The compute
capacity you have to pay for is the same thing. But in this case, you are putting that in just one instance.
But over here, you're just putting that into four different instances. This is this concept is called resizable or
elastic. This is called elasticity. You can you can you can change this test type and go to a bigger server or you
can you can have more servers or more testes with the same test type. That's why it's called elastic compute cloud
which means it provides a resizable compute capacity. What's the compute capacity? It's amount of RAM, storage,
cores, graphics you get. You can resize into two different ways. Either you just go with vertical scaling, you change the
you go for a high instance type or you go for horizontal scaling where you go for more instances. Okay, what's the
The next thing we're going to discuss is the AMI, or Amazon Machine Image. The instance type I discussed with you is nothing but the physical specification; now I'm talking about the software. When you use a laptop or computer, it's hardware plus software, right? The instance type is the hardware of the instance; even though it's in virtual form, it's the hardware. Who decides the software? The Amazon Machine Image. The Amazon Machine Image decides the operating system of the instance, which can be Linux, macOS, or Windows, and it also decides the applications you want to run on it: services, scripts, and additional software. All of these things are part of the AMI, and an AMI is nothing but a preconfigured software template that lists all the software components of your virtual machine. It's a preconfigured template, a software template, which consists of the operating system, applications, services, and resources in software terms. Right now most of you are using a Windows laptop: the Windows operating system, plus the Zoom application running on it, and maybe Outlook, Word, and Excel as additional software with some applications running on top. If you bundle all of that together, in Amazon Web Services terms the bundle is called an AMI.
let's do one thing. Let's go back. If I just go to the left hand side, there's something called
okay I go to the right now and I there's on the left hand side you will see that there's something called
images AMIs catalog. Now these are the different categories of AMIs that we can use right quick
start AMIs uh my AMIs uh for example I'll discuss with you few few of the things for example these are the quick
start quick start are the commonly used AMIs most of the companies use them most of the organizations use them uh for
example I can show the list of them we have the Amazon Linux uh Mac OS Ventura Montre
Reddit Enterprise Linux, Bixer, uh Ubuntu Server, Microsoft Windows Server. These are different packages or
different um special images with uh special built-in operating system and some some additional applications
drawing upon them. Uh right. So let's go. These are the commonly used AMIs. You can you can you can just filter that
based on the free tier only. Free troll only means that they're only used they can use with a free tier account without
paying any charges. You can browse by Linux and Unix and also you can browse by the the type of architecture.
Okay. My AMIs my AMIs are the one which um are the ones which you create privately.
I'll show you how to to create a private AMI uh which is customized by by a customer. So we can create our own our
own customized AMIs that are called private AMIs. There are some AMIs which you can buy
from marketplace. Um there are some software vendors like Microsoft, SAP, Zent, Cisco, Juniper. You can buy uh the
AMS from them. For example, I want to uh go for Palo Alto. I want to I want to uh deploy a
firewall. Palo Alto VM series virtual next generation firewall. Now firewalls are
used to secure infrastructure. Let's suppose I want to deploy Palo Alto firewall uh on Amazon web services.
So Paulo Alto give me uh some of the details. If I go to the product product details uh or let's go to the pricing
now again uh you will see that vendor will uh let's suppose I want to deploy polo firewall this VM series virtual
next generation firefall five core security subs bundle 2 right my vendor who is vendor in this case Paulo Alto
will give me the recommendation that please use this test type C5N.xlarge X large because Palo Alto in this case
it's my software as a service provider. So being a vendor will give me some suggestions. So it's saying that please
use C5.N9X large instance it it has some the amount of RAM and CPU based on which this
firewall can run properly. Okay. Then um if I just see the list right it will show me the entire list of
uh the supported test type but uh you will see that I can choose any of these test types but C5N9
C5N.Xlarge X large is when recommended. Fine. This is called marketly pay CMIS. Uh for
example, I want to deploy any of the software out there. I can purchase a software uh from the vendor. I can pay
on monthly basis or maybe I can just go for a software subscription for one year or three-ear plan. All right. So this is
the thing. Community AMIs come from the developer community. They're open AMIs, which means that anyone, you, me, or any AWS account owner, can publish an AMI to the Community AMIs. It's an open, community-based catalog. You can see that Community AMIs are public; therefore anyone can publish an AMI and it will show up in this catalog. Most of them are used for hands-on and research purposes. So these are the different categories of software configuration that you can choose from. You can also see the clear definition: an AMI is a template that contains a software configuration, including the operating system, application server, and applications required to launch an instance. So it's the software part of the instance: the operating system, services, applications, and scripts that you would like to deploy upon the instance. Okay.
The next thing we have to understand is how you connect to these instances that you're going to be running. Okay. Let's imagine you want to connect to the instances. How do you connect to them? So: connecting to your EC2 instances. When you connect to your instances, you use certain protocols and ports. These are industry standards; they were not invented or made by Amazon Web Services. Okay. So in terms of connecting to instances, let's imagine this is the AWS cloud.
All right. Let's suppose this is a Linux instance. Now you want to connect to it. For example, you're the administrator, the admin, and you want to connect to this instance from your computer; you want to gain access to it. Now, why would you do that? Because as an administrator, you need to get to the root, the operating system, of the Linux instance. You want root access, operating system access, shell access. Right? So as an admin, you need to get root, shell, or OS access. You want to get to the core of the server. Why? Because as an administrator, you have to start with the installation of software; you have to install some business applications on the instance. You may need to configure something. You need to do something called patching; patching means running security updates or deploying additional software. Maintenance, right? You want to do a lot of things, and for that you need to connect to the server remotely. Why remotely? Because this instance may be running in, for example, California, while you're in Hyderabad in India, or maybe in Dubai. Okay. So basically you're trying to connect to it over the internet, from a far-off, distant location, and you want to gain root access, shell access, or OS access. So in this case, what happens is that if it's a Linux instance, you're going to be using a particular protocol. These industry-standard protocols and ports were designed and invented a few decades back; if you go to Google and search for protocols and ports, you can see a list of them. They're used for communication, right? So the default protocol you're going to be using is SSH, and SSH uses port number 22.
It's a dedicated port number, right? SSH stands for Secure Shell. So using the Secure Shell (SSH) protocol and the port it uses, port 22, you gain access to the instance; you connect to it and access the root, the operating system, of the server so that you can perform your day-to-day job. Fine. You can do software installation, configuration, patching, and maintenance. So as an administrator, you'll be getting access to the instance. You're the internal administrator, the IT administrator, or, I would say, an employee in one of the IT verticals in your organization. You could be an architect, a Linux administrator, a database administrator, a developer, or a quality analyst. You need to get access to the instance from your end. Okay. We'll come to authentication in a moment; I think this part is understood: we use SSH for Linux. Now, what if instead of a Linux instance you run a Windows instance?
The operating system of this one is different, right? Instead of running a Linux instance, you tend to run a Windows instance. This is the AWS cloud that you have, and this is the Windows instance. Now, for a Windows instance, you basically want to gain access to the Windows instance from your end. Fine. For this purpose, for this type of instance, the protocol that we use is Remote Desktop Protocol, RDP in short form, and it uses a dedicated port number, which is 3389. Okay. So as an administrator, you want to get to the Windows operating system, to the root of the Windows system, to get server access. In that case, for Windows we use a different protocol, Remote Desktop Protocol (RDP), which uses port 3389. So depending on the type of instance you're going to be running, as an administrator you're going to be using different protocols and port numbers behind the scenes. These are industry standards, and they have been in use for many decades. So these are the standard protocols and ports we use to connect. Fine. So this is what you do.
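To make this concrete, here's a minimal sketch of what those two connections look like from an admin machine; the key file name, user name, and IP address are placeholders:

# Linux instance: SSH over port 22 (ec2-user is the default user on Amazon Linux)
ssh -i demo-kp.pem ec2-user@203.0.113.10
# Windows instance: RDP over port 3389, e.g. with the built-in Windows client
mstsc /v:203.0.113.10:3389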
Now, the next thing that comes into this entire conversation is authentication. Authentication means I want to confirm, I want to ensure, that this administrator is the right person trying to connect to my instance, not some hacker or unauthorized person trying to get access to it. Fine. I want to ensure that only the authorized person can get access to my instance. Right. Think about why you authenticate with PINs, OTPs, IDs, and passwords: why do we use them for banking transactions, or to log into our smartphones and devices? For the reason that only the authorized person is able to unlock the device, and only the authorized person can make a banking transaction from a specific account. Right? So authentication is quite important, because I want to make sure that no external, unauthorized user can get access to my server; only known people can access my instances or servers. So what's the methodology we use in this case? The method we use is called a key pair. Okay, it's called a key pair.
In a key pair, there are two keys. One key is stored on the admin's computer; it's called the private key, and it is saved on the administrator's computer. So this is my PC; the administrator owns a computer, and the private key is saved over there. The Linux instance over here will have a public key. Fine. So the private key is stored on the computer owned by the administrator, and the public key is stored inside your Linux or Windows instance; the mechanism, the concept, is the same for both in terms of authentication. So my instances will have the public keys stored inside them. Okay. Now, when the user tries to connect to any of these instances, at the time of connecting, the administrator has to provide the private key. The administrator says: okay, I am providing the private key, please match it with the public key. So the administrator has to provide it. Fine. At the time of connectivity, the administrator provides the private key, which is matched against the public key. If the match is perfectly all right, then the administrator is able to get inside the instance. You must have seen this kind of thing whenever you operate a locker in a bank. For example, you have a locker in Indian Bank, Bank of Baroda, Andhra Bank, or Punjab National Bank. People generally own lockers in these banks to store their valuable items, and there are two keys needed to open the locker: one key is with the bank official, the other key is with you. So, for example, you want to open your locker. You go to the bank, you sign the register, and you're escorted to the locker room along with the bank official. You use your key to unlock your lock, and the second lock is unlocked by the bank official, who has his or her own key. Right? So there are two keys used to open the locker: one key is with you, the other is with the bank. Okay. The same concept applies here, to make sure that only the authorized person is able to unlock the instance and get inside it. The private key is stored on the administrator's computer, and the other key is stored inside the instance. At the time of connectivity, the private key has to be provided, and if it matches the public key perfectly, if the contents align with each other, then the administrator is able to successfully get remote access to the instance.
Fine. So this is the mechanism we use for authentication. The concept is exactly this: we access the instance remotely, and only the authorized person is allowed to get in. The administrator stores the private key on the PC, the local computer, and the public key is stored inside the Linux or Windows instance. The administrator is able to get access to the instance by providing the private key at the time of connectivity. Okay. Let me put it over here so that there's no confusion. This is the same AWS cloud we have, and this is the EC2 instance; it can be Windows or Linux, it doesn't matter. The instance has the public key stored in it. Right? As the admin, you have a local machine, a PC, and you have a private key stored inside it; you have the copy of the private key. Now, at the time of connecting, because you want to connect, you have to provide this key for authentication reasons; this key has to be provided. Once you provide the key, the contents of the private key are matched with the public key. If they align with each other, you're able to successfully connect to the instance. Suppose you're using the wrong private key: in that case, it disconnects you, it stops you from connecting to the instance, because you're using the wrong private key. So you have to provide the right key at the time of connectivity, to prove that you're the valid user, the administrator, who's trying to connect. Okay. So this is the mechanism used to authenticate; authenticate means proving you're the right person to connect. That's exactly the idea. All right. So we discussed the protocols and ports we're going to be using to access the instances, the servers, and to authenticate we use the key pair methodology. Why is it called a key pair? One key is with you, the other key is in the instance; that's why it's called a key pair. All right. This is what we have to use.
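If you want to see what such a pair looks like, here's a minimal sketch using ssh-keygen on your own machine; the file name demo-kp is a placeholder, and note that AWS normally generates the pair for you, as we'll do in the console shortly:

# Generate an RSA key pair locally: demo-kp is the private key, demo-kp.pub the public key
ssh-keygen -t rsa -b 2048 -f demo-kp -N ""
# The private key must be kept secret and readable only by you
chmod 400 demo-kp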
Okay. So that's exactly the concept. All right. Now we can get started with the hands-on, and this is the AWS console that you'll be using to perform all your deployments. Every single deployment has to be done using the AWS account. Now, to get started, first things first: if by mistake you have gone to some other location or service, you can just click on the AWS icon on the left-hand side and you'll be back on the homepage. Okay. And once we're back on the homepage, I'm just turning off the chat box so that we can start now.
All right. So we need to start with the basics, which is getting started with Amazon EC2 Linux instances. The very first thing is that we have to go to the EC2 dashboard. Now, how do you get to the EC2 dashboard? You will see a list of services on your homepage. You can go to the search menu at the top, and you can see a list of services in the recently visited section. Okay. If you don't see the list of services, you can click on View all services, and you'll see more than 100 services. Fine. These are the services available on AWS. On the left-hand side there is also a kind of grid: you can go to Services and see the list of services grouped into different categories. But it's very tough to find your specific service that way, so there's a search menu at the top; you can just search for the exact service you're looking for and it will show up. So, for example, I type EC2 and click on it, and this takes me to the EC2 dashboard. Okay, so we're on the EC2 dashboard right now.
Okay. So we are on the EC2 dashboard. That was step number one; let's do step number two. Once you discover the EC2 dashboard, you land on this page. This is where you can deploy your instances, and also other resources like load balancers, Auto Scaling groups, EBS volumes, and so forth. There are a lot of things we can deploy, but right now we need to deploy an instance, so we'll focus on that. One thing you need to understand is that once you go to the EC2 dashboard, or any service (you can do this before or after), you need to change to the region of your choice. On the right-hand side you can see a list of regions. These are the regions into which you can deploy your instances, because right now our main purpose is to deploy an instance. You can choose any region you like, but I would strongly suggest you use North Virginia. Okay. Now people will ask me why exactly I say North Virginia. It's because there are a few regions which have some issues in terms of free tier resources. Right now you're using a free tier account, and a free tier account consists of some free resources, for example the instance type t2.micro. Mumbai, for example, doesn't always have sufficient t2.micro free tier capacity, so sometimes you may bump into an issue where you try to deploy a free instance in Mumbai and get an "insufficient capacity" error message. There are a few problematic regions which don't give you enough free resources when you want to launch them. Using North Virginia will guarantee that you never face a shortage or scarcity of resources. Okay, use North Virginia for your hands-on. Number two, sticking to only one region will ensure that you don't scatter your resources unnecessarily across all the regions.
See, what happens is that when you start launching resources in multiple regions, you sometimes get confused at the time of removing them. If you have all the resources in one single location, tracking and removing them becomes much easier. We are not working in a business environment here; in a business environment, everything is well documented: how many instances are running, how many load balancers are running, how many buckets there are in each region, how many databases there are. Everything is very well documented. Of course, we're not documenting anything in this case; we're just using this for the hands-on. So, to make sure you keep track of your resources properly and can remove them when the time comes, stick to only one region. In this case, put all your eggs in one basket; at the time of breaking them, you know you only have to drop the one basket. So choose only one region, and preferably use North Virginia; you will never face any issues. Once you choose North Virginia, this guarantees that the instance that's going to be launched will be launched inside North Virginia. Okay, so switch to the North Virginia region; that's step number two based on the document. Let me know once you have done this. Now, once you start launching your resources, for example load balancers, Auto Scaling groups, and buckets, to create a cohesive, complete, robust infrastructure, you have to use that region explicitly and deploy all the resources exclusively inside it. Okay, fine. So use North Virginia for all your hands-on; this will make sure you don't face a shortage of free tier resources. Plus, sticking to one region will guarantee that you don't unnecessarily put resources in other regions, and deleting those resources later becomes easier for you. Fine. Okay, so this has been done.
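If you ever work from the AWS CLI instead of the console, the same choice applies there; a minimal sketch, noting that us-east-1 is the API name for North Virginia:

# Set North Virginia (us-east-1) as the default region for CLI commands
aws configure set region us-east-1
# Verify it
aws configure get region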
The next thing, so that you become familiar with the entire web page: I'll share steps number three and four. Okay, steps three and four are based on the documentation; you can just refer to the same screenshots in the documentation. It's very easy. In the middle of the page, it says Launch instance: launch an Amazon EC2 instance, which is a virtual server in the cloud. You go to Launch instance and click on that. It says Name and tags. First things first, you can put any tag over here; let's suppose you put it as "demo web server". Even though it's not mandatory to put a tag, it's purely optional, it is a best practice to tag your server, because later on tagging helps you name servers, recognize them, perform automation, and extract billing reports. So initiate the launch-instance process and put any name over there: demo server, my server, whatever you want to type in. It's a simple name, just for the sake of naming it. You have to start by launching the instance, and you tag the instance right now. Again, tagging is not mandatory, but based on business best practices, you should always tag your resources.
Okay, what's the next thing? The next thing is Application and OS images, steps five and six. Once you put a tag, once you've named the server, the next thing is the application and OS image in the form of an AMI. Okay, you have to decide what the software configuration of this instance is. In this case we are choosing from the operating systems in Quick Start: you have Amazon Linux, macOS, Ubuntu, Windows, Red Hat, Debian. These are the few choices that I get. I can click on Browse more AMIs to look for more AMIs, or go to the drop-down menu and see the entire list. Okay, by mistake I've cancelled this; right now, what you have to do is just go with the default value, which is Amazon Linux. This is Amazon's own proprietary, internally developed Linux-based operating system, its own Linux distribution. It says Amazon Linux 2023 AMI, and it's free tier eligible. Whenever you see the label "free tier eligible", it means this is free of cost; you don't have to pay even a single penny for it. Okay, there are some details you can see. Most importantly, look at the AMI ID. What is this AMI ID? The AMI ID is nothing but the autogenerated identification code for this image. It can be the same for all of us, and it can differ between regions; there's a high probability that if you're using North Virginia, the Amazon Linux AMI ID is the same for all of us. So the AMI ID is an autogenerated value, and when we start using templates, for example CloudFormation, to automate infrastructure, we have to provide these AMI IDs there. If you don't use the GUI, the graphical user interface, the console we're using right now, if you use automation or the CLI, the command-line interface, then you have to specify the AMI by its AMI ID. So right away the AMI ID is not very important for us, but for future reference it is important to understand that it matters: when you start automating processes and creating templates to automate infrastructure, these AMI IDs need to be provided. So every image has a unique AMI ID assigned to it. What's next? You don't have to change the architecture; we'll just go with the 64-bit x86 architecture. You can also go with the ARM architecture, but we'll stick with 64-bit x86, which supports a lot of applications. Instance type, as we discussed earlier, is the hardware specification of the instance: the RAM, CPU, and storage. You can see a whole list of them, but we're going with t2.micro, which is free of cost; you can see that it is a free tier eligible instance type, so it's free for us and we don't have to pay for it. There are other instance types like t1.micro and t3.micro, but we'll accept t2.micro as the default instance type. So basically you just have to look at the details. Don't change the AMI; just go with the Amazon Linux 2023 AMI and the t2.micro instance type.
What's the next thing? The next thing is the key pair (login); I'm talking about step number seven based on the documentation. As per step seven, what you're going to do next is click on Create key pair. This key pair is the same concept that we discussed: the private key is stored on your computer and has to be provided at the time of gaining a connection to the instance, and the instance has the public key embedded; it stores your public key. At the time of gaining a connection, you provide this private key to your instance, and if it matches perfectly, if it aligns with the contents of the public key, then you're able to get access to the instance. So, to create a key pair: you can see it says you can use a key pair to securely connect to your instance. Click on Create new key pair on the right-hand side. Fine. It says key pair name; you can put any key pair name, it doesn't matter what name you choose. For example, name it demo, or wp-kp, anything. I'll name it something like intel-server-kp; I have a lot of keys stored, so just to avoid duplication I'm using a slightly complicated name. You can put any name for the key pair. Okay. Now, there are two standards we can use for the key pair type: RSA and ED25519. We always use RSA, because RSA can be used for both Linux and Windows. And the private key file format: we will not change this; I'll discuss it with you tomorrow, because the format changes based on the type of client software you'll be using. Format means, just like we have image formats such as JPEG and PNG, document formats like .docx, .xlsx for spreadsheets, and .pptx for presentations, it's the format, the extension, of the key: .pem format or .ppk format. So basically all you have to do is click Create new key pair and name the key pair. That's it. We never change the key pair type; the industry standard is RSA, which can be used for both Linux and Windows instances. And as of now, don't change the private key file format; don't change it. What's next? The next thing is that you create the key pair and it will be saved on your computer. If you go to the Downloads folder, you will see that the key pair has been saved. Okay, this is the private key, in PEM format; this is the .pem key that you have downloaded. It is a private key, and you need to provide this key later on when you want to connect to the instance. Fine.
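For reference, the same key pair creation can be done from the AWS CLI; a minimal sketch where the key name demo-kp is a placeholder:

# Create an RSA key pair in AWS and save the private key locally as demo-kp.pem
aws ec2 create-key-pair --key-name demo-kp \
  --query "KeyMaterial" --output text > demo-kp.pem
# Restrict permissions so the SSH client will accept the key file
chmod 400 demo-kp.pem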
Okay. What's next? The next thing is the network settings, steps eight and nine. In steps eight and nine, you go to Network settings first. Under network settings, you don't have to change anything at the top. There's a concept here called VPC, or virtual private cloud. What is this? We'll discuss it in depth later, because there's a complete section dedicated to VPC. A VPC is your own virtual data center on the cloud. Whatever you're going to launch sits inside your own private cloud; it's a virtual private cloud, a VPC. AWS says: we give you your own virtual data center, your own private cloud, and when you deploy resources in that cloud, those resources are not visible to other customers. So if I deploy my resources in this VPC, you can't see them, because it is owned by me; basically it protects my resources. VPC stands for virtual private cloud: a virtual data center we get, inside which all our servers are deployed. This gives us security and privacy. Okay. I'm not touching the networking components as of now, because there's a complete section dedicated to that. The important thing you have to do is go to the security group settings.
Now, we're going to launch a Linux server, and for Linux we use SSH to connect. Okay. If you go back to the diagram I drew, you're going to be using SSH as the protocol to connect. Now, the thing is, you'll be trying to connect from your machine, from your PC, which stores your private key. Your machine is on the internet, or it's in your data center or your company's network, and this PC gets an IP address. Okay. IP addresses are unique in nature; no two public IPs are the same. I want to say that only from this unique IP address should someone be able to initiate SSH connectivity. An IP address basically identifies your machine on the internet. Your machine gets one IP, my machine gets a different IP; no two public IPs can be the same (we'll discuss public IPs afterwards). The IP addresses your machines get are completely different from others; they are unique. So I'm saying that only my admin's IP address is eligible to initiate the SSH connectivity, because you're going to be using one computer, right? Only your computer's IP address is recognized to get SSH access to this instance. Now, the default here says allow SSH traffic from anywhere, and anywhere is 0.0.0.0/0. This is a security threat: it means that any machine on the internet can attempt SSH access to my instance. You can even see the warning that rules with a source of 0.0.0.0/0 allow all IP addresses to access your instance. So for security reasons, we choose My IP instead. This picks up the IP address your computer is currently getting, so that only the computer you're working on right now is eligible to connect to the instance. You can also choose a custom IP range, but I'll choose My IP. Okay. So this automatically ensures that only your computer's IP is eligible to initiate SSH connectivity to the instance.
Then what about launching a website on this instance? For accessing websites, we use two protocols: HTTP and HTTPS. When you go to intellipaat.com, google.com, or facebook.com, you use the HTTP or HTTPS protocol. So if you're trying to host a website, you can check both of them. Now, HTTPS is the secure version of HTTP, but it needs certain extra setup. We choose both of them because we're going to deploy a website on this instance. Okay. So basically, you don't have to change any of the network settings at the top; you just have to configure the firewall. You allow SSH from your computer's IP, because as the administrator, it's only from your computer that you should access the instance's operating system. Fine. Plus, if you want to deploy a website on the instance, you allow the HTTP and HTTPS traffic coming from the internet.
What's the next thing? You don't have to do step number 10; go straight to step number 11. I've given you three documents, one of which is a script, the one named in the documentation. We're going to use this script to install the website, an Apache web application, on the instance; the script automates the web application deployment. It's a bash script that runs some updates on the instance, installs httpd (httpd is the Apache package), starts the service, changes directory to the HTML folder, and configures the index.html page inside it. Now, copy this entire script as it is, go back to the last category, called Advanced details, and under Advanced details go to the last field, which is User data. Paste the script under User data. Okay.
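I can't reproduce the exact document here, but a minimal user-data sketch matching that description, assuming Amazon Linux 2023 and a placeholder page, would look like this:

#!/bin/bash
# Run updates, install and start Apache, and drop a simple index page
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
cd /var/www/html
echo "<h1>Hello from my demo web server</h1>" > index.html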
What's the next thing? Go straight to step 12: go back, and on the right-hand side you can see a summary table. Click on Launch instance, and the instance will get launched.
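For completeness, the whole launch we just clicked through corresponds roughly to one CLI call; this is only a sketch, and all the IDs and names below are placeholders:

# Launch one t2.micro from an Amazon Linux AMI with our key pair, security group,
# user-data script, and Name tag
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name demo-kp \
  --security-group-ids sg-0123456789abcdef0 \
  --user-data file://setup.sh \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo-web-server}]'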
What's next? The next thing is testing, steps 13, 14, and 15. You need to go back and click on the instance ID. This is the instance you just deployed, and it gets a unique instance ID. There's a checkbox on the left-hand side; you need to select it. Once you make the selection, you will see a tab called Details. Under Details you have the public IPv4 address, and also a public IPv4 DNS name. This is basically what's used to access your instance over the internet: if you're using the internet as the medium and you want to access the website on it, or maybe you want to SSH to the instance, then you're going to be using the public IP address (public means internet). What to do next: go ahead and copy the public IP; there's a copy icon. Click the copy button, paste the address in the browser, and you will get the page displayed. So there's a simple HTML page we have deployed, and this shows us the result. Now, if you're not able to browse the page, make sure it isn't showing HTTPS by mistake; remove the S from HTTPS. Just in case it says the page can't be displayed, make sure you're not accidentally trying to browse the HTTPS version; remove the s at the end, because to configure HTTPS we would also have to configure an SSL certificate, and we haven't done that. Yeah, great. By default it goes to HTTP, but sometimes it switches to HTTPS based on browser settings. Okay. So this is a simple website you have just deployed on the instance using the script I gave you, and this is the simple procedure you have to follow along to deploy your instance in the AWS cloud.
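If you prefer testing from a terminal instead of the browser, here's a quick sketch; the IP is a placeholder, and note it's plain http, not https:

# Fetch the page served by the instance's public IP
curl -s http://203.0.113.10/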
Okay, great. Now guys, once you're done with this, before you log out, terminate this instance. Terminate means you don't need it anymore; you want to get rid of it. How do you terminate the instance? Either you right-click on the instance, which gives you options to stop, reboot, or terminate it, or you select the web server and go to Instance state. Stop means you can come back and restart it later; reboot means you restart its operating system; and terminate means you're killing it, and you can't get it back. So either right-click and use the Terminate option, or go to Instance state and click Terminate. It asks "Terminate instance: are you sure you want to do that?"; you click Terminate, and this leads to the termination of the instance. This procedure is laid out in steps 16 and 17 based on the documentation.
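And the CLI equivalent, as a sketch where the instance ID is a placeholder:

# Terminate the instance for good; this cannot be undone
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0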
Fine. We discussed security groups briefly, and now I'll dive deep into that concept and make you understand exactly what the purpose of a security group is. All right. Basically, we want to understand that the security of our resources is our priority. When we go to the cloud and start launching our resources into it, applications, websites, databases, backend servers, processing servers, whatever you want to launch inside the cloud, one of the priorities we have is security. By priorities, we mean we want to make sure that everything is achieved and that we get the same sense of reliability, security, and high availability as we would expect in our own data centers. Right? You're shifting your entire application and services and putting them on the cloud, and now you want to achieve the same type of security, reliability, and performance; you have expectations in those areas. The question that comes into this entire picture is: what are the security mechanisms we can employ so that our instances and data can be protected? The very first thing you need to understand, in terms of resource provisioning and security, is that one of the first things we can use is the keys: the private key and the public key. The concept behind them is that you have the private key saved on a computer, so only the individual who has a copy of that private key is authorized to connect to the instance. Connect means getting remote operating system access: you can get into the machine, get to the shell, get to the operating system, and perform installation, configuration, patching, and maintenance. You want to do server management; you want root, shell, or OS access. So if you have the exact copy of the private key saved on a computer and you provide it at the time of gaining the connection, then you're authorized to gain that access. This is one of the security mechanisms we use. Fine.
The second thing we use is the protocol and port. This is the second level at which we apply security; the first level is keys. At the second level, we apply security to two protocols, SSH and RDP, because remember, if you want to get admin-type access to a server or a remote instance: for Linux we use SSH, which stands for Secure Shell and uses port 22, so to get remote access to a Linux server you send an SSH request on port 22 towards the instance; and for Windows we make use of Remote Desktop Protocol, RDP, which uses port 3389. So these are the two protocols and ports we're going to be using to gain access to Linux and Windows instances respectively: SSH on port 22 and RDP on port 3389. Fine. You also need to make sure, because these two protocols give you administrator access, core server access, shell access, root access to your server, that such traffic comes only from known IPs. It should not be the case that anyone can send SSH traffic to the Linux instance and get access to it; these two types of requests, these protocols, need to be restricted. This means that only a handful of people I am aware of should be able to initiate these two types of connections towards my instances. Suppose there's a hacker who wants to get inside my server, and he or she tries to use SSH or RDP: because their IP address is not in the list of IP addresses I've allowed, they will be blocked immediately. So apart from the private key, I also make sure that SSH and RDP come only from known sources, from known people. Okay, so that's where the security group comes into this entire conversation.
What is a security group? Let's try to understand; let's dive deep into this concept. So what is a security group? A security group is nothing but a virtual firewall which restricts the incoming and outgoing traffic. It's a virtual firewall through which you can impose restrictions on the traffic that's coming into the instance or going out from it. What do I mean by that? For example, this is my AWS cloud, the Amazon Web Services cloud that I have, and this is an EC2 instance that we are running, an EC2 instance we are working on. Fine. Now, I want to impose restrictions on any traffic that's hitting my instance. If traffic is coming from outside and reaching the instance, then from the instance's perspective, what type of traffic are we dealing with? It is incoming; it could be called inbound; or it could be called ingress. These are three terms, but the meaning is the same: incoming traffic, inbound traffic, ingress traffic. That's the traffic coming into the instance: the traffic comes from outside and hits the instance. Think from the instance's perspective: if you are the instance and traffic is coming and reaching you, from your perspective that traffic is the incoming traffic, the inbound traffic, or the ingress traffic. Three words with the same meaning. Okay. What if the traffic is leaving the instance, the instance giving a response back to the user? If the traffic is being sent out from the instance, that is the outgoing traffic, also called outbound, or egress. This is traffic leaving the instance; the instance is responding, and the traffic is going out. Fine. So: egress, outgoing, or outbound. These are the two types of traffic that we want to restrict. Okay. So a security group, basically, is a virtual firewall which performs restrictions at two levels: number one incoming, number two outgoing.
Fine. We need to perform restrictions, and to perform restrictions, we use three filters. What are those filters? Think of the filters we used during our school days, in the experiments on sedimentation, condensation, and filtration. We used to have a glass beaker and a piece of filter paper, and the water would have a lot of mud and sand in it. If we poured it through the filter paper, the sand would be filtered out and the pure water would go into the beaker. Okay. We apply the same kind of filtration process over here: we filter through only the traffic we want. Okay. So there are three filters that we apply. The very first filter is the protocol, the second filter is the port, and the third filter is the IP address. These are the three filters that we make use of: protocol, port, and IP address. Based on these three filters, we restrict the traffic that's coming in or going out. Okay: protocol, port, and IP address, the three filters I will make use of. I'll give you examples; try to understand them. I'll give you three scenarios. The first two scenarios are simple, and then I'll merge the first two and put them into a bigger picture, a bigger perspective, in the third scenario. Let's start with scenario number one.
Try to understand it. Even though it's a very simple concept, it is equally important. Scenario number one: let's suppose I want users to access my web application; you want to allow your end users to access your web application. Okay, this is scenario number one. What do I mean by that? Let's put it into a scenario. Yesterday we deployed a website on an instance, and that website was accessed using its public IP. If you had given that public IP to any other person, for example if you had put that public IP in the chat box, the other person would also be able to gain the same access to your website. Okay.
So what is happening behind the scenes? Let's take the same example. Behind the scenes we have the AWS cloud, and this is an EC2 instance we have, running an Apache PHP web application; we would like to deploy an Apache PHP web application. All right. So this is the application running on the instance. Now, I have customers who want to access my website; these are my end users, and these end users need to access my site. So what they're going to be doing is sending two types of requests. The first type of request they would send is the HTTP request, which uses port 80; these are standard ports, already set for us, and we just use them. All right. Now, if you want secure access to the instance, then we use HTTPS, where the S stands for secure; this gives you secure HTTP access, and it uses port 443. So if you want to allow the front-end users, the customers who want to go to your website, see your products and services, buy items, stream videos, these are the two protocols and ports you want to allow. And once the instance receives the incoming request, it sends a response back to the end user, and that response is HTTP or HTTPS outbound; the response is sent back to the customers. Fine. So those are the two levels at which we apply the protocol and port filters. What about the IP addresses? See, I don't want to restrict anyone from accessing my website. Today, for example, intellipaat.com has its website running, and you have facebook.com, google.com, instagram.com, microsoft.com. Anyone from any location, from any country, can access these websites. How? Because in that case, we define the source IP this traffic can come from as 0.0.0.0/0.
0.0.0.0/0 means any IP, from anywhere. Basically, whatever devices you use, your smartphones, tablets, computers, laptops, your devices get an IP address. For example (I'm talking to the people from a non-technical background), you can either type ipconfig in the command prompt, or just go to Google and type "my IP", and it will show your computer's IP. Run ipconfig on your command prompt, or search "what is my IP address", and it will show you exactly what IP address your machine is getting. Whether you use your smartphone, a computer, a laptop, or a tablet, whatever device you run, if it is on the internet, if it has internet connectivity, then it gets an IP address. Fine. Now, when I put in 0.0.0.0/0, it means: it doesn't matter what IP the request is coming from, please allow those users to access my website by sending HTTP and HTTPS requests. Anyone is welcome; you don't want to restrict anyone. If it's an intranet site, your company's website which is only used by internal employees, that's another thing; but websites which are public in nature, amazon.com, ebay.com, flipkart.com, facebook.com, microsoft.com, intellipaat.com, have to allow the incoming HTTP and HTTPS requests from all sources. It doesn't matter which IP address your computer is getting; from your computer, you should be able to send these types of requests to such a web application. Okay, that's the incoming side. What about the outgoing? Of course, as outgoing traffic we send back the HTTP response, or maybe HTTPS; still, it's HTTP underneath. In terms of outgoing, we say the destination IP is any IP, 0.0.0.0/0. Send it back to anyone; there's no filter I'm applying. So basically I'm saying the request can come from anywhere and the response can be sent out anywhere. This means I'm leaving it open in terms of IP addresses; I'm not performing any restrictions, and that is my requirement. If I blocked IP addresses, if I only allowed certain IP addresses to reach my website, only those IP addresses could do so, and the rest would be blocked automatically, which is not a favorable thing for public websites. Fine. So these are the three filters we have applied here. Let's go on to scenario number two.
Scenario number two. In scenario two, we want to make sure that we perform restrictions for admin access: we want to make sure that only the authorized people can get access via SSH or RDP. So in this case, for example, this is the AWS cloud, the same AWS cloud which is running right now. And let's imagine I have two instances: this instance is Linux and this instance is Windows. Right. I'll put them in the middle. Okay. Now, I need my administrators, my internal IT administrators, to gain access to these instances and get access to the operating systems; they get the root access, the admin access. See, right now, for example, suppose I go to intellipaat.com. I'm sending the HTTPS request to intellipaat.com and I see the website, but this doesn't mean Intellipaat is giving me access to the server on which this website is deployed. This website must be deployed on a Linux server, or maybe a Windows server; it must be running on a server, and that server has a shell, an operating system, everything. So intellipaat.com is giving me access to the website, but not to the server or the operating system on which this website is running as an application. Fine. So I can only see the front end, but I can't see the back end. You must have heard about this in terms of full-stack web development or website deployment: there's something called front-end and back-end development, right? Front end means we only see what we're given access to; at the back end, the server is running this application. So the front end is given to the users. I am the customer of Intellipaat, and intellipaat.com says: we give you the front-end access, but the back-end access is with our internal IT people, the architects. Right? So in this case we get the front-end access. What about the administrator access to the server, I would say access in terms of getting to the operating system and shell, the server management, so that you can perform day-to-day patching, upgrades, deployment, and so forth?
Having said that, let's imagine this is my office building, my on-premise location, my branch office. Inside this branch office, I have a bunch of employees who fall into one of these categories: solutions architects, administrators, or, for example, developers. They need to access the server on a daily, frequent basis, whenever they want. Let's talk about the Linux server first. They would like to initiate access to the Linux server, and the access they initiate will be an SSH request, which uses port 22. Okay. They should be able to send the SSH request to the Linux server, and the connection will be established. The question that arises in this case is: from which IP address should that be allowed? From a business point of view, you should never allow the SSH request from anywhere (0.0.0.0/0); instead, you should allow it only from a known IP address range. For example, my branch office is assigned the IP address range 54.78.1.0/24. The /24 means it's a block of 256 IPs: 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, and so on. So it's a block of 256 known IPs, and I want that only these 256 IPs, which belong to my branch office, can access my Linux server. Out of that block of 256 known IPs, 54.78.1.0, 1.1, 1.2, right up to 1.255, any IP falling in this range can initiate SSH access to my Linux server; the rest of the IP addresses will be blocked automatically, for example a person with a different IP address. So basically this is a block of 256 IPs, starting from 1.0 and going up to 1.255, and any machine in my branch office which falls in this range will be able to initiate a connectivity request to the Linux server via SSH.
I can also apply the same thing for my administrators. These are the admins in the same branch office, so they can send the RDP request, which uses port 3389, towards the Windows instances; these are the admins in the same branch office who need to send requests to the Windows server. Fine. So in this case, what we do is ensure that we restrict these two protocols, SSH, which uses port 22, and RDP, which uses port 3389, and allow them only from those known IPs. These are restricted protocols and ports which should only be allowed from a set of known IPs. You can't allow access from any IP for these two protocols and ports. Okay. So that's how you perform the restrictions to make sure that SSH and RDP are allowed only from a set of known IPs, while for HTTP and HTTPS you can allow the traffic from anywhere. Okay, let's merge these two scenarios together.
So, building on those two scenarios, let's go with the third scenario. Scenario number three merges admin access and website access. It allows the requests reaching my instance at two different levels: you get the admin access and the website access both. This is the scenario number three we're going to be discussing.
Okay, having said that, let's go through it. This is the AWS cloud that we have, and inside this AWS cloud we have an EC2 instance running. This is a Linux instance, and on this Linux instance I have the Apache PHP web application. Now, I will have two categories of users accessing this instance. Category number one: these are my end users, who will be sending either an HTTP request over port 80 or an HTTPS request over port 443, from anywhere. Okay. So they will be sending requests from any IP. The second category of users would be my Linux administrators in my branch office. So this is, let's suppose, my corporate office, and it has been assigned the IP address range 32.15.1.0/24. I have a bunch of Linux administrators sitting in this office, and they need to get access via SSH, so they'll be sending SSH requests over port 22 from this IP address range. Okay. So I've merged the two scenarios together: in the first scenario we give HTTP and HTTPS access to the end users, and in the second we give SSH and RDP access to the admins. This is a Linux instance on which I'm running a simple web application, so because of that I want to make sure that these two different categories of users can get access to my instance at two different levels. The front-end access is given to the end users, and the back-end access is given to the Linux administrators in my office. Through this method I apply security for my servers and instances. And of course, these administrators will also be using the private keys for further security: they will have the copy of the exact private key so they can authenticate that they are the authorized users to gain connectivity via SSH. Having said that, this is exactly what the security group concept is all about. Fine.
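As a CLI sketch of scenario three (the group name follows the console example below, while the group ID, VPC ID, and office CIDR are placeholders; the office range reuses the one from this scenario):

# Create the group and add the three inbound rules from scenario three
aws ec2 create-security-group --group-name intel-linux-web-server \
  --description "Allow SSH from office, HTTP/HTTPS from anywhere" \
  --vpc-id vpc-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 32.15.1.0/24   # SSH only from the office range
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0      # HTTP from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0     # HTTPS from anywhere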
Okay, let me also show you the hands-on implementation of it. You can create security groups in two different ways. The first: you can apply one when you deploy the instance. For example, I go to EC2, and once I'm in EC2, I go to Launch instance and click Launch instance; this is what we did yesterday, this is the step we performed. If I go straight to the section called Firewall (security groups), you can see that a security group is a set of firewall rules that control the traffic for your instance. You can click Create security group, or click Select existing security group and pick from the list of existing ones. So I click Create, and this is the name it will assign by default, something like launch-wizard-56; these are default names, and I'll just take it. And this is where I can say allow SSH traffic from anywhere, which is not recommended. For hands-on purposes that's okay, but from a business point of view it is not. So I can choose a custom source; for example, I can define the IP address range of my corporate office, my branch office, and any IP falling in that range would be eligible to send SSH traffic to my instance. Or else I can choose My IP, and it automatically picks the IP address of my computer. You can see 103.215..., and this is the same IP address my computer is getting; if I just go to a what's-my-IP site, it shows the same IP address my machine is getting. Okay. So this means that only from this machine can I send the SSH request to my server, my instance. And if I deploy an application, I can allow HTTP traffic from the internet, and even HTTPS. Okay. So that is the first of the two ways you can configure a security group: at the time of instance deployment. The second way is that if you go to the EC2 dashboard, you can see a bunch of options laid out; I'm talking about the part called Resources. Under Resources, there's a separate section for security groups. You can go over there, or else, if you go to the navigation menu on the left-hand side, you can see Security Groups laid out there as well. If I go to either of these two places, I'll land on the same page, which is the Security Groups page. Okay, so I'll just go ahead and click on Security Groups. Fine. I already have a bunch of them.
Let's suppose I want to create a new one from scratch. When I create it this way, I can still apply it at instance launch time by choosing 'Select an existing security group' and picking it from the list. So I click on 'Create security group'. The very first thing it asks for is the name; I can put in any name here, say Intel-Linux-web-server. This then gives me the detailed configuration of the security group rules, which I'll show you. I can put something in the description, for example 'allows SSH, MySQL, HTTP and HTTPS access'. The description is basically just for documentation: you type in exactly what you're going to achieve through this configuration. So you put a name, you put a description, and don't touch the VPC for now; we go with the default value. We'll discuss the VPC concept later on, but in short, a VPC is a small network, a dedicated network
for you. Okay, we head straight to the inbound rules. There are two categories of rules: inbound, for incoming (ingress) requests, and outbound, for outgoing (egress) requests. I go to the inbound rules and click on 'Add rule'; I can add multiple rules here, so I start with the first one. Under Type, I can choose from the list, and this is the complete list of protocols available. When we deployed the instance through the wizard, I could only choose SSH, RDP, HTTP, or HTTPS, but now that I'm creating a security group separately from scratch, I can choose from a lot of protocols. For example, I choose HTTP. Once you choose HTTP, it automatically picks port number 80, and under Source it asks from which IP you want to allow HTTP access. Should it be my IP? No; you never allow HTTP from just a known IP. You choose either 'Anywhere-IPv4' or 'Anywhere-IPv6'; generally we go with Anywhere-IPv4, which is 0.0.0.0/0. In the description I put: 'allows the end users to access my website.' You add a description so the configuration is presentable and properly documented. When you start configuring things in a production environment, you want to make sure everything is well documented: if you're adding a rule, you should state exactly what that rule means and what you're going to achieve from that specific
configuration you're applying. I can add another rule: for example, I choose HTTPS from Anywhere-IPv4, because I want to allow the end users to access my website securely; HTTPS is the secure version of HTTP. Then, for example, I can choose SSH from a custom source. I could pick 'My IP', but let's suppose I choose a custom range instead: 54.67.1.0/24, with the description 'restricted SSH access from my branch office.' So only the people in my office will be able to access the back end of this web server: the operating system, the root access. I don't think I need to allow MySQL yet, so let's remove that for a moment. Now let's imagine you're running a MySQL package, a MySQL database, on this same instance. I look for MySQL in the list, choose it, and I want to restrict it too, so I choose the same range, 54.67.1.0/24, and describe it as 'restricted MySQL access for my database admins.' I'm saying that I'm running a MySQL database on this Linux instance and I want only my database administrators to gain access to it, so I restrict it to those IPs; MySQL is also a restricted kind of protocol, and you don't allow MySQL access from anywhere. That rule is just an add-on. So in this configuration, two protocols are open: HTTP and HTTPS are ports and protocols you generally do open from anywhere. Most of the rest are restricted, including SSH for Linux, RDP for Windows, and even MySQL for databases; these come under the restricted category. Fine. So once I've done that, what
about outbound? Basically, we don't change the outbound rules, because outbound traffic doesn't pose the security threat: it's the incoming traffic that poses the threat, not the outgoing. So the default outbound rule allows all traffic, meaning all protocols and port ranges, to all destinations. If you want, you can apply restrictions there too, but I won't do that. You can also optionally tag this security group, but tagging is not that important in terms of security risk management.
So you understand what we're doing. This is my Linux web server security group, which allows SSH, MySQL, HTTP, and HTTPS access. I have opened two of those protocols from all IPs, so anyone can access my website. But I want to restrict the other two protocols, because they give special, privileged access. For example, SSH access is given so that my Linux administrators can log in and do their work. And suppose I'm running a database: my database administrators need to gain access to the MySQL package running on this Linux instance (or on a dedicated database server), so there are a few things for which I need to open the MySQL port, and I say 'restricted MySQL access for my database administrators.' The outbound side I haven't touched. All right, so that's the kind of configuration I'll be applying. I click on 'Create security group', and this creates my security group, Intel-Linux-web-server.

Now, at the time of instance deployment, I just go ahead and apply that same group. For example, I click on 'Select existing security group' under network settings and look for the one I just created, then choose and apply it. This is exactly what you need to do. You can create a new security group inline at launch, but you don't get a lot of options there; you just have those three basic rules. If you create a security group separately, you can choose from a whole bunch of options and protocols, and then at deployment time you choose 'Select existing security group' and apply it: just search the list for the one you created beforehand.
Fine. Now, I'm going to give you a text document, or rather, the exercise is already there in the Google Drive. If I copy and paste the content of that text, this is the DIY exercise; DIY stands for 'do it yourself.' It's a scenario. I've already explained to you exactly how a security group needs to be configured, and this is the scenario I've laid down in front of you. You have to go through the scenario, open your AWS console, and try to configure a new security group on your own based on it. All right.
All right, so let's go through the statement. It's a DIY exercise, so let's do it on your own. Based on your understanding and what you already know, you have to apply these steps. It says: create a security group based on the following requirements. The first requirement concerns the inbound rules: you want to allow users to access the website running on this instance in such a way that they can access it safely and the communication is encrypted, quote unquote 'secure'. It can be a Python website, a Node.js website, an Apache PHP web application, whatever you want to deploy; you want to give the users safe access.

Okay, so in this case I go to Security groups from the instance dashboard. Basically, you can click on Security groups under the Resources menu, or you can find Security groups on the left-hand side under Network & Security; either of those two options lands you on the same page. Now you create a security group and put in any name, for example demo-intel-web-sg, where 'sg' stands for security group. I put in a description for documentation purposes: 'allows access for my end users and customers and the internal employees.' A simple description, just for documentation. Okay, let's start with the inbound rules.
Let's start with the inbound configuration. The requirement says: allow users to access the website on the instance in such a way that they can access it safely and the communication is encrypted and secure. Whenever you want to give safe website access, you always use HTTPS; if you go to the Intellipaat site right now, it's HTTPS only. So you choose HTTPS from Anywhere-IPv4 and describe it as 'allows secure access to my website.' That's the front end. What's the next thing? It says that as this is a Windows server, your admins should access it from the branch office, whose IP address is already pasted there. Windows servers are accessed over RDP, so I add an RDP rule, paste that IP range as the source, and describe it as 'restricted RDP access for my internal employees': the people who work for me, the Windows administrators, in the branch office.
Okay, so you restrict it. Now the third requirement says this Windows server is also running a MySQL database, and you want to allow internal access from the same branch office plus a bunch of internal private IPs. This one is a bit confusing. It says this is a Windows server also running a MySQL database, so it's obvious that you add a rule allowing MySQL access, which uses port 3306, internally from the branch office. I put the branch office IP range there as a custom source and describe it as 'restricted MySQL access from my branch office.'

The next part says 'and a bunch of internal private IPs.' You must have some internal servers that need to communicate with the MySQL database; maybe there are some web application servers or processing servers that need to send transactions, some data, to the MySQL database. So you copy that IP range and add a fourth rule: MySQL from a custom source, described as 'restricted MySQL access from some of my internal servers.' The point is that you can allow the same protocol multiple times for different sets of IP addresses. I'm saying: this is MySQL access from the branch office, with the branch office IP range; and also, there are some internal servers that need to access the MySQL database to send transactions or data to it, so I allow MySQL access from that bunch of internal server IPs as well. If you have different categories of IP addresses, different sets that can be grouped together, you can put them across two or more rules; the same protocol can appear in multiple rules if you have different types of sources. These are the four rules we have to apply. In terms of outbound, keep it at the default; don't do anything there. Go ahead and click on Create. Right, so that's how you have to create it; that's the concept.
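As a cross-check for the exercise, here is a hedged boto3 sketch of those four DIY rules. The two CIDR ranges are placeholders standing in for the branch-office range and the internal server range given in the exercise document.

```python
import boto3

ec2 = boto3.client("ec2")

BRANCH_OFFICE = "203.0.113.0/24"  # placeholder: the exercise's branch-office range
INTERNAL_NET = "10.0.5.0/24"      # placeholder: the internal private IP range

sg = ec2.create_security_group(
    GroupName="demo-intel-web-sg",
    Description="Access for end users, customers and internal employees",
)

# Four inbound rules; note MySQL (port 3306) appears twice with different sources.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0",
                       "Description": "Secure website access"}]},
        {"IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
         "IpRanges": [{"CidrIp": BRANCH_OFFICE,
                       "Description": "Restricted RDP from branch office"}]},
        {"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
         "IpRanges": [{"CidrIp": BRANCH_OFFICE,
                       "Description": "Restricted MySQL from branch office"}]},
        {"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
         "IpRanges": [{"CidrIp": INTERNAL_NET,
                       "Description": "Restricted MySQL from internal servers"}]},
    ],
)
```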
But there's one important thing I need to discuss with you before we can move forward: the launch template, or launch configuration. I've already discussed the concept of the load balancer and autoscaling with you. The purpose of having a load balancer is that when you go with horizontal scaling and want to deploy more instances supporting the same application, the same infrastructure, it's obvious that you need some component, some kind of device, whether virtual or physical, sitting in the middle, diverting or routing the traffic to the instances and dividing it between them equally. For that we use the load balancer. We also discussed the health-check capabilities it has, and the downside we discussed was that it has no capability to deploy more instances when there's a scarcity of them. If you lose, for example, one or two instances and have only one or two left, the load balancer will divert the entire traffic to the leftover instance or instances, putting them under exhaustion by sending all that traffic their way. To compensate for that, autoscaling can deploy more instances automatically: it looks at the difference between the desired capacity and the actual capacity, and based on that it launches additional instances for you. Whatever number of instances the infrastructure is short by, it deploys that same number for you.

Then we discussed that autoscaling can also increase or decrease the number of instances based on the principle of elasticity: the time of day and the patterns of incoming traffic demand. It makes use of scaling policies as well, based on metrics like CPU utilization, the number of requests coming in and hitting the load balancer, packets coming in, or bytes in and bytes out; these can be the metrics or parameters based on which autoscaling increases or decreases the number of instances. Autoscaling launching instances comes under the category of a scale-out action; autoscaling terminating instances comes under scale-in. In both cases, based on the metrics we've defined, the time of day, or certain dates, autoscaling automatically scales out (increases the number of instances by launching them) or scales in (terminates instances, subtracting them from your infrastructure).

Now, all these operations happen within the autoscaling group, which is nothing but a logical collection of identical targets, identical instances: you group these instances or targets together in a single group. I gave the example of the load balancer getting a lot of traffic: you can choose the metric 'request count per target' being more than 100 requests, meaning that if, on average, the load balancer is sending more than 100 requests to every instance in the group, autoscaling can add more instances; and once the amount of requests goes below a certain level, it can terminate the additional instances that are no longer required. So based on these metrics as well, autoscaling functions for you and automatically increases or decreases the number of instances. We discussed all that. Okay.
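To make that 'more than 100 requests per target' example concrete, here is a hedged boto3 sketch of a target-tracking scaling policy; the autoscaling group name and the load balancer resource label are placeholders, not values from this course.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: keep the load balancer's request count per target near 100.
# Autoscaling scales out when the metric rises above the target value and
# scales in when it falls back below it.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # placeholder
    PolicyName="keep-100-requests-per-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # placeholder label: app/<lb-name>/<lb-id>/targetgroup/<tg-name>/<tg-id>
            "ResourceLabel": "app/my-alb/0123456789abcdef/targetgroup/my-tg/fedcba9876543210",
        },
        "TargetValue": 100.0,
    },
)
```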
The thing we still need to discuss is this: suppose autoscaling has to initiate a scale-out event and launch instances; there's a requirement for autoscaling to increase the number of instances. Say autoscaling has to initiate a scale-out event and launch 10 instances for you. The question that comes into this entire conversation is: if autoscaling has to initiate a scale-out event and launch instances, how will autoscaling know the software and hardware configuration of those instances? There should be something, right? If autoscaling has to launch instances inside the group, there are certain parameters, certain properties, we need in order to deploy an instance. So autoscaling, as a service, has decided: 'I need to initiate a scale-out action, a scale-out event, and based on that event I need to launch instances inside this autoscaling group.' This is my autoscaling group, a collection of identical instances, and autoscaling has to launch some additional instances inside this group. But the question really is: who decides the AMI, the Amazon Machine Image? Who decides the instance type? The key pair, the security group, and so on? You also have storage settings, for example EBS or EFS, that need to be configured. These are the basic properties, the basic parameters, that autoscaling needs to refer to in order to deploy identical instances inside the autoscaling group. That's what it wants. So what you do is put this entire list of properties in a template, in a document. That document is called a launch configuration, or a launch template.
Right, this is called a launch template, or a launch configuration. You can use either of the two; they are essentially the same thing, though the launch template gives you some more advanced options. Otherwise, the purpose of both is the same: these are the documents that list the parameters, the properties, that an instance requires and that autoscaling needs in order to deploy identical instances inside the autoscaling group. We want every instance in the autoscaling group to have the same AMI, the same instance type, the same key pair, the same security group, the same storage settings, so we list exactly these parameters, and autoscaling refers to them: it uses the document, the launch configuration or launch template, and based on it initiates a scale-out event and deploys identical instances with exactly the same AMI, instance type, key pair, security group, and whatever other properties you defined for the autoscaling group. Fine. So a launch template or launch configuration needs to be configured first, before you can put autoscaling into action. That's exactly what we need. Now, in this hands-on we'll be using a launch configuration; how to use launch templates, we'll discuss afterwards. When you start configuring a launch configuration, AWS will give you a notification: use launch templates, they are recommended. Yes, launch templates are recommended over launch configurations, but you can use both; both work perfectly fine. With launch templates you can additionally create versions and revisions; that's the advantage you get. We'll discuss launch templates as a separate topic, so in this hands-on we'll use a launch configuration for autoscaling. Fine.
Public image: okay, what's a public image? A public image is one that's already available to all of us; it's common to everyone. For example, if I go to the EC2 dashboard and open the AMI catalog: Quick Start AMIs, Marketplace AMIs, Community AMIs. These are public images, which means that if you go to your own EC2 dashboard and access the AMI catalog on the left-hand side, these AMIs are exactly the same for you. It's a public offering, I would say: everyone can use them. If you go to the Quick Start AMIs, used when you deploy an instance, the Amazon Linux AMI is the same for all of us; there's no difference. You can use the Amazon Linux 2023 AMI, and so can I. It's public in nature, meaning AWS offers these AMIs to all account owners.

Now, public AMIs are a fine fit for hands-on work, development, and testing, but not a great fit for business applications and business use cases, because businesses will have their own customized images, with their own built-in software installed, their own packages. Let me give you an example. Imagine you take an EC2 instance launched using a public AMI. (Sorry, let me just remove the annotation.) Imagine I use one of the public AMIs in the list; suppose I pick Ubuntu Server 22.04, this one here. If you look in your own Quick Start AMIs, it's the same AMI; you can see no difference. It's a basic, plain AMI, visible in all accounts. So, for example, I launch this instance using the Ubuntu 22.04 public AMI, which hasn't been customized to any business requirements.
It's a plain image, a plain operating system. Okay. Now what you do is SSH to the instance, or run a script, and say: all right, let me install things on it. Let me install, for example, WordPress 5.1.1, MySQL 6.0, an Apache PHP application. You install these packages and customize them: basically you're trying to launch a web application, so you customize these packages based on your business. You customize the WordPress site, the MySQL database (you have your entire data in that database), you run the Apache PHP application; you install all these bundles. Once you've installed all these bundles on the instance, where they live within the instance, and customized the software and your apps to your business requirements, then as a final product you fetch a private image out of it.

Now, once you create this private image, where does it show up? Let me show you: it shows up under the category 'My AMIs'. It says 'created by me', which is what a business creates for its own needs, because a business will have its own AMI. The advantage you get is that once this AMI is ready for you, you don't have to manually log into every instance, every Ubuntu instance, and install the same software again and again. The private image is stored in your 'My AMIs' space, and from this private image you can launch identical instances, and those instances can be of any instance type. So you can deploy identical instances, in the thousands, of any instance type, and all of them will have these applications pre-installed: WordPress, MySQL, Apache, PHP, already customized. You don't have to repeat the process again.

You understand the point? You take the public image, the basic common image, deploy an instance, install all the customized software, business apps, and services on it, and once you're ready, take an image from that instance. That image will then help you launch identical instances with exactly the same software: all of it is carried over, encapsulated, packaged inside the private image automatically. Okay, this is what you have to understand.
Enough talk; let's start with the hands-on, guys. I'll show the steps from 1 to 11... okay, 1 to 14. See what I'm doing: this script is the same bootstrap script we used earlier. It will install a kind of customized software for us on the instance, which is launched using the public AMI. So we're going to use the script. I launch the instance first: I go to my dashboard and click on 'Launch instance'. I can put any name on this instance or just leave it blank; you can tag it or not, it's purely your choice. In terms of the application and OS images, we're using the Amazon Linux 2023 AMI. This is a public image: if you look carefully at your screen, at your AWS Management Console, the same AMI is available to you. Even the AMI ID will match if your region is North Virginia, because AMIs are region-specific. This shows that the public image is common to all of us; there's nothing private about it. Fine, so this is the public image, the Amazon Linux AMI, that you have to use for now to deploy our first instance. You don't have to change the instance type. Under key pair, choose 'Proceed without a key pair'; you don't have to use a key pair, because we won't be logging into this instance. Under security group, you have to choose 'Allow HTTP traffic from the internet', because you have to access the website. And under Advanced details, you have to copy and paste the script. This script will install the Apache PHP web application on the instance; it's our own customized software. AWS didn't give us this script; we run it on our own to install the Apache PHP website on the instance, with a customized index.html page configured. Fine. So we just copy the script and put it under User data.
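The course's own bootstrap script isn't reproduced here, but to give a rough idea of what such user data does, here is a hedged sketch: launching the same kind of instance from boto3 with a minimal Apache/PHP bootstrap embedded. The AMI and security group IDs are placeholders; look up the current Amazon Linux 2023 AMI ID in your region.

```python
import boto3

ec2 = boto3.client("ec2")

# Illustrative stand-in for the course's bootstrap script:
# install Apache + PHP and drop a customized index page.
user_data = """#!/bin/bash
dnf install -y httpd php
echo "<h1>My customized web application</h1>" > /var/www/html/index.html
systemctl enable --now httpd
"""

resp = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder: Amazon Linux 2023 in your region
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=["sg-xxxxxxxx"],  # placeholder: allows HTTP from the internet
    UserData=user_data,                # boto3 base64-encodes this for you
)
print("Launched instance:", resp["Instances"][0]["InstanceId"])
```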
Fine. And that's it: we just go ahead and click on 'Launch instance'. This is the same concept we've discussed earlier: you deploy the instance, click on the instance ID, wait for the instance to launch, and once it's in the running state, try to browse its public IPv4 address; you should get a result back as a web page. So once you perform the first 14 steps, that is, once you deploy the instance, copy the public IPv4 address and paste it into the browser, and you will get this result. After launching the instance, give it 30 seconds, or let's say 60 seconds, a one-minute buffer; once the instance responds with this page, you have deployed your customized business application on the instance. Fine. Now we'll be taking the private AMI from the instance: we deployed the instance using a public image and installed this Apache PHP application on it, and now we'll take the image out of it, which we'll store for future reference. It's a kind of templatizing things, right? Amazon offers you free public templates, which are not customized to business requirements; onto those free templates you add your own customization and make a private template out of it. That's what we're trying to do. So in the previous step we launched the instance, executed the step successfully, and got this output: we installed the Apache application on the instance using a script. Okay, what's the next thing?
We need to get the image from the instance that we've just launched; we want to go ahead and take the image out of the system. In this case we're going to follow step 15, step 15 only. What are we going to do in step number 15? Let me show you. You need to choose this instance and go to Actions; you have to select the instance first, check its box. Once your instance is selected, go to Actions > Image and templates, and there's an option called 'Create image'.

You can see the definition of the AMI there: it defines the program settings that are applied when you launch your instance, and you can create an image from the configuration of an existing instance. Can you see this? We already configured the Apache PHP on the existing instance, and now we're taking the image out of it so that the image has exactly the same packages included. So, for example, I name this image 'HTTP image' and put in a description: 'this image will be used with autoscaling to launch identical instances.' Right, some description. Now it says 'No reboot'; we leave it unchecked, so whenever we take the image from the instance, it gets rebooted automatically. When we take the image from an instance, it takes a backup of the image files in the form of a snapshot: you can see that during the image creation process, EC2 creates a snapshot of the above volumes. Basically, the data storage linked with the instance, the image files, will be backed up; we want to back them up in the form of a snapshot, and the size of that backup is 8 GB. This will cost us 10 to 15 cents; I already told you that for the image you have to pay 10 to 15 cents, because when you back up the image files, they're backed up in the form of a snapshot, and a snapshot is a backup of your data. So the backup of the 8 GB of image files will cost me 10 to 15 cents.

I click on 'Create image', and my image starts getting created. On the left-hand side I go to AMIs, and this is my private image showing up; can you see the visibility? It's private, and the status is pending as of now. If I go to the AMI catalog and open 'My AMIs', this private image shows up under My AMIs (you'll see that your AMI ID is different). This is a private image, not public: it's private to you and not visible to other account owners. So you can customize images: install any type of application on your instances, customize those applications, services, or pages, and then create the image from the instance; as a result, as a byproduct, you get the private image.
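The same step can be sketched with boto3; this is a minimal illustration, with the instance ID as a placeholder for the one you just customized.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a private AMI from the customized instance
resp = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # placeholder: the instance we customized
    Name="HTTP-image",
    Description="this image will be used with autoscaling to launch identical instances",
    NoReboot=False,                    # leave 'No reboot' unchecked: allow the reboot
)
image_id = resp["ImageId"]

# Wait for the AMI to go from 'pending' to 'available'
ec2.get_waiter("image_available").wait(ImageIds=[image_id])
print("Private AMI ready:", image_id)
```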
Okay, what's the next thing? The next thing is that we have to start with the target group and the application load balancer, so we'll start with the load balancer deployment. What we need to understand here is this: if you remember, when we started discussing the concept of the load balancer, we used a very basic diagram, and what we discussed was that the load balancer forwards requests to a group of instances, or targets. What are these targets? The targets can be anything. They can be EC2 instances; they can be containers, Docker containers for example; they can also be your on-premises servers, the servers running in your own data center, which you can likewise link with the load balancer. And there's something called Lambda functions. Lambda is an advanced concept I'll discuss afterwards; it's part of serverless infrastructure deployment, and Lambda functions are basically used to deploy serverless websites, serverless applications. So the load balancer will start diverting the traffic to targets, and targets can be instances, containers, on-premises servers, or Lambda functions.
Now, in this case, we are using instances as the targets. The thing is, the load balancer groups these targets together, and that is what's called a target group: a target group is a group of similar targets, similar instances in our case. The main thing is that we configure the health check based on the target group: the health check is sent to every target, but it's configured at the target group level. So before we can start with the load balancer deployment, we have to create a target group first and configure the health check for its targets. How do you go about doing that? On the left-hand side you will see Load balancing and Auto Scaling; under Load balancing we have Load balancers and Target groups. You have to go to Target groups first and click on 'Create target group'. Based on the steps I'm going to put down in front of you, the step numbers are 16 to 20. Okay, what's next?
Once you start the process of creating a target group, it asks you to choose a target type: instances, IP addresses, Lambda functions, or an application load balancer. What are these? We are using instances. IP addresses can belong to containers or on-premises servers; Lambda functions are used to deploy serverless applications; and you can also link one load balancer with another, which is an advanced concept we'll discuss afterwards. Right now we'll just go with instances: in our case the targets equate to the instances we want to deploy, and we're going to use autoscaling to manage and scale our capacity. What's capacity? The number of instances, the number of targets.
Hang on one second; let me put it across one more time. I create a target group, I go with instances, and I put the target group name as, for example, HTTPD-target-group; just a simple name. I choose the protocol and port as HTTP, because my target group will be receiving this traffic from the load balancer. I don't have to change the VPC or the protocol version.

Now, this part is quite important: health checks. It says the associated load balancer will periodically send requests, per the settings below, to the registered targets to test their status. What I conveyed to you earlier is that the load balancer, by default, sends an HTTP request to the targets at the back end, and the targets respond to the load balancer with a '200 OK' status message. Now, if we set a health check path: the health check path means, do you want to ping a particular document, a configured page of your application? If you look carefully at the script, we have already configured the index.html page of our website, so we can use that as the path. I'm saying I'll send my health check messages to the index.html page of my website, and if my website is running properly, there should be a response in the form of a '200 OK' message. Basically I'm saying: please try to reach the homepage, the index.html page, of my website. So for the health check ping path, we're going to put in /index.html. This is case sensitive, which means don't use an uppercase letter anywhere, not a single character in uppercase; it should be lowercase only. I've seen people sometimes put a capital I, so: lowercase only, a forward slash, then index.html.
Now I can go to the advanced health check settings. For now, let's focus on the interval: it's the approximate amount of time between health checks of an individual target. The default is 30 seconds, which means the load balancer will send the health check request to every instance every 30 seconds. And the success code is 200, which means that if the instance is healthy, it responds with a '200 OK' acknowledgement message; that's the success status code. So the interval is 30 seconds, the time frame between consecutive health checks: every 30 seconds, the load balancer performs a health check against every instance. You don't have to change these values; this is just for your information. So basically, you have to create a target group. What's a target group? A group of targets. For the target type, choose instances; for the name, put any name you like; the protocol and port will be HTTP and 80; and you have to configure /index.html as the health check path. Then click on Next, don't do anything on that page, and click on 'Create target group', and your target group gets created. Now, this target group will be attached to the
load balancer in the next step. Guys, make sure the configuration is exactly the same. What I mean is, for the details I've included: the protocol and port should be HTTP and 80 (this is quite important, because this is the traffic our targets will receive, and the health check protocol is HTTP), and you go with the lowercase /index.html. The settings should be the same as shown; don't change them, because based on our website configuration, these are the settings that apply to us. Let me show you one more time. If you click on 'Create target group', you configure everything on the first page, right? Everything on the first page. Click on Next; don't do anything under Register targets, you don't have to configure anything there; click on 'Create target group' at the bottom right-hand corner straight away. That's what you have to do, because the targets will be registered afterwards; we will not add any targets or instances to this target group right now. So what's a target group? It's a group of targets, and those targets will be added afterwards, because we'll be using autoscaling to deploy the targets, and they'll be added automatically based on the configuration. At the advanced level, once we start configuring autoscaling, we'll merge the target group with the autoscaling group, so whatever autoscaling deploys will show up in this target group afterwards; there will be an overlap. You'll get to know this concept. Okay.
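For reference, the same target group can be sketched in boto3; the VPC ID is a placeholder for your default VPC.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group for the Apache/PHP instances, with the health check
# pinging the page our bootstrap script configured.
resp = elbv2.create_target_group(
    Name="HTTPD-target-group",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-xxxxxxxx",            # placeholder: your default VPC
    TargetType="instance",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/index.html",   # lowercase, matching the configured page
    HealthCheckIntervalSeconds=30,   # default: one check per target every 30 s
    Matcher={"HttpCode": "200"},     # healthy targets answer 200 OK
)
print("Target group ARN:", resp["TargetGroups"][0]["TargetGroupArn"])
```

No targets are registered here, matching the console flow: autoscaling will add them later.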
Now guys, once you're done with this, one thing you need to do: go to AMIs and confirm that the status of your AMI is 'available'. That means the image was created successfully, and you can go ahead and terminate the instance you deployed just for the sake of getting this private image. The instance you deployed just to launch the app on it and get the image, the private image, out of it is no longer needed: you can right-click and terminate it, or go to Instance state and terminate it. You don't need this instance anymore. The whole purpose of this instance was to be launched from a public image (we used the Amazon Linux 2023 AMI, the Amazon Linux AMI), have the Apache PHP app installed on it, and have the private image taken out of it. That's it. From now on, using this image will install the same operating system and the same application on the next instances we deploy. So our mission with it is complete; you can just go ahead and terminate the instance, we don't need it anymore. Go to the instance dashboard; on the left-hand side we have Instances. Choose the one you launched previously, in the first 14 steps, go to Instance state, and click on Terminate.
Right. So if I go to AMIs, my AMI is over here: when I issued the 'create image' command, I got this image, the private image. 'Create template' is nothing but a launch template, or launch configuration. On the left-hand side you'll see something called Launch templates; that's what we're going to discuss. The downgraded version of the launch template is the launch configuration. Basically, a launch template is nothing but a document: when you create a launch template from the instance, all the instance settings are captured in a document, in a template, and afterwards, when you have to deploy the same instances again and again with the same settings, same AMI, same instance type, you can use that template. So that is also one of the options out there. You can create the image from the instance, and the standard parameters, what type of operating system you'd like to have, the security groups, the key pair, can be captured in a launch template. But when we create the image, we capture the software side of it: the operating system plus apps and services, bundled together in a package. So 'create image' produces the private image, while the launch template captures all the details of the instance in a single document, and you can use that document again and again to deploy similar instances. That's the difference. One thing to understand: when you create a private image, a private AMI, it's backed by storage (the EBS snapshot we saw), whereas a template doesn't use EBS; templates are just a document. In the next step we'll create that document; it says 'create a launch template', and on the fifth step you'll be able to understand what a launch template is. So, just to reiterate what I said: it's a simple document that encapsulates all the configuration parameters of the instance, including the AMI, instance type, key pair, security group, everything. An AMI, by contrast, is a software configuration: a packaged, bundled piece of software that you use to deploy the instance. A template consists of the parameters, in the form of a list in a document, that you refer to in order to deploy your instance. On the fifth step you'll be able to see the difference between them exactly.
On the previous day we just went ahead and started, but now I think they have deprecated the launch configuration. Give me one second... okay, the launch configuration was showing up over here; it's gone now, they've removed it from the list. No worries: we'll be using the launch template anyhow, which is the advanced version of it.
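Creating the launch template from code might look like the following hedged boto3 sketch; the AMI ID (our private HTTP image) and the security group ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# The launch template is the document of parameters autoscaling refers to
# (AMI, instance type, security group) when launching identical instances.
resp = ec2.create_launch_template(
    LaunchTemplateName="httpd-launch-template",
    LaunchTemplateData={
        "ImageId": "ami-xxxxxxxxxxxxxxxxx",   # placeholder: the private HTTP image
        "InstanceType": "t2.micro",
        "SecurityGroupIds": ["sg-xxxxxxxx"],  # placeholder: allows HTTP traffic
    },
)
print("Launch template ID:", resp["LaunchTemplate"]["LaunchTemplateId"])
```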
So, the last component we deployed was the target group. In the target group we configured the health check; that's the most important thing. And basically, a target group is nothing but a group of targets, a group of instances, under the load balancer. Now this target group needs to be included in the load balancer, so the next step is to deploy a load balancer, and the load balancer should send all its traffic to this target group. Fine. Now I'll take you to the next step, the next level, and we'll create, deploy, a load balancer. (If I just go to Launch templates... yeah, I think the launch configuration has been deprecated. There were a lot of updates going around saying the launch configuration would be removed, and I think they removed it. That's okay; we'll be using the launch template.) So I'll show the steps for the load balancer deployment. I'll go a bit slowly with the load balancer configuration, because I have to explain a lot of theory: how you configure the load balancer, what the purpose of every step is, the types of load balancers. I'll be discussing those things with you as well. Right now I'll be going from step 21 and will take you up to step 23 only. Fine.
Now, in terms of the load balancer, you have to understand this: to get started with the load balancer deployment, the very first thing is to be on the EC2 dashboard. Once you go to EC2, it takes you to the EC2 dashboard, and on the left-hand side, among the menu options, you have this thing called Load balancers, and Target groups. I go to Load balancers and click on Create. Now I have three main load balancers in front of me, and I can deploy any of the three. Whichever of the three I choose, I'll still be able to leverage the basic, common functionality of a load balancer. So why do we have these three classes, these three different types of load balancers, if we get the basic functionality out of all of them? Because at a certain level, they are used in different situations, different scenarios.

For example, the application load balancer is mainly fit for HTTP and HTTPS websites. The application load balancer is the most basic and most common form of load balancer that you're going to be using: in 95% of the cases you'll be using the application load balancer, because it fits the requirements. It takes care of HTTP and HTTPS websites; it can work for microservices, containers, and on-premises servers. It's quite an advanced load balancer, and it is suited to processing two types of requests: HTTP and HTTPS. So in 95% of the cases, the application load balancer is more than sufficient. Now, why is it called an application load balancer? Because it works at the application layer, which includes the HTTP and HTTPS protocols.
Then we have the network load balancer. The network load balancer, you could say, is a more advanced kind of load balancer: it can take care of any type of traffic, HTTP, HTTPS, SMTP, DNS, IMAP, POP3. So suppose you want to take care of SMTP traffic, which is for email servers: it works perfectly in those situations. But that's not the main advantage of the network load balancer. The network load balancer is suitable in those cases when you have more than a million requests coming in each second. You can see the clear definition over here: operating at the connection level, network load balancers are designed to handle millions of requests per second while maintaining ultra-low latencies. So, for example, suppose you have a requirement like an OTT streaming platform, say Hotstar, and you expect that because of certain events, an IPL or FIFA World Cup match, millions of requests will be hitting your load balancer, while you also have to maintain ultra-low latencies, meaning you have to bring down the lag. You must have seen this: you have the best internet connectivity, but there's still buffering going on; the likely reason is that so much traffic is hitting that OTT platform that it's not able to handle that amount of load, and that buffering is a lag for us. The network load balancers are designed to handle millions of requests per second while maintaining ultra-low latencies, which means the users will not face any lag or latency at their end. So if you have that type of requirement, you'd use the network load balancer.
Again, I'm saying: in 95% of the cases the application load balancer is suitable, but in a few situations, like the Amazon Prime Day example I gave you, e-commerce websites use network load balancers, as do OTT platforms, or food delivery apps. Suppose Zomato has published an advertisement that they're giving a 25% discount on every order for 5 days; they know a lot of users will be coming in and accessing the Zomato platform to place orders and get the 25% discount. So e-commerce sites and OTT platforms generally use this type of load balancer in those situations when they expect millions of customers to come in and try to access their platform.

The gateway load balancer is not used that much. It is used for third-party virtual appliances: suppose you're running VMware virtual machines on AWS, on Amazon Web Services; in that case you can make use of the gateway load balancer. That's the rarest of cases. In 95% of the cases you'll be using the application load balancer, and for the remaining 5%, the network load balancer; that's it. The gateway load balancer was rolled out less than 18 months back, and in those 18 months I haven't seen more than one or two deployments of it. Why? Because the gateway load balancer is used for third-party virtual appliances running on the AWS platform, like VMware virtual machines, and very few companies do that: generally, when companies start running virtual machines on the AWS cloud, they go for AWS's own proprietary instances instead of VMware virtual machines. There can be some exceptions: sometimes the EC2 instances don't support certain applications, while VMware virtual machines do, and in those few exceptions you can make use of a gateway load balancer. So it's used to deploy and manage a fleet of third-party virtual appliances, which is the rarest of rare cases.

So these are the load balancers, and in this hands-on we'll be using the application load balancer. Why? Because it fits our requirement: first things first, we're going to deploy an HTTP site, and we don't expect millions of requests coming in; it's only a few clicks that we'll be making. Even if you had a requirement of tens of thousands of requests coming in per second, the application load balancer is sufficient to cater to it. Okay, so I'll go with the application load balancer and click on Create.
Now, the very first thing I have to do is go ahead and put in a name, for example httpd-load-balancer. You can put a simple name; it makes no difference even if you just put your first name, and it's easy to change afterwards. Next is the scheme: is it internet-facing, or is it internal? In this case we will always use internet-facing, because if you're using the load balancer to deploy a website, an HTTP website, the load balancer should be receiving traffic from the internet. An internal load balancer means you just want to route traffic for internal operations; it will not receive traffic from the internet; maybe the load balancer receives traffic from internal servers. There can be situations where you use a load balancer just to receive traffic from internal components. Fine. But in this case we always use internet-facing, because if you're going to use load balancers for website deployment, the load balancer should always be internet-facing: that way it can receive requests for your website over the internet and forward those requests to the instances at the back end. For the IP address type, we go with IPv4, because the instances we're using right now have IPv4 addresses, which are 32 bits.

Right, so that's what you have to do: start with the load balancer deployment, and in the first two or three steps just put in a name, make sure the scheme is internet-facing (don't go with internal, because we're going to deploy a public website), and make sure the IP address type is IPv4, which is the default. Please go ahead and do that, and let us know. Why are we using internet-facing? Because we want the load balancer to receive traffic from public users, internet users. So if you don't use internet-facing and choose internal by mistake, what happens? The load balancer loses the capability to receive traffic over the internet; it will only receive traffic from internal servers. So in our case we should be using the internet-facing load balancer.
What's next? The next thing is the 23rd step... I'm sorry, the 24th. In step number 24 we have to go with the availability zone mappings. What is this? Step 24 only. What we're trying to achieve here is this: based on the best practices recommended by Amazon Web Services, which suggest these are the practices you should be going for, you should choose at least two availability zones whose instances can receive traffic from the load balancer. You should have at least two AZs, with instances spread across both, to make sure you have a highly available application. Why is that? Because if one AZ goes down, of course the instances in that AZ go down with it, but you still have the second AZ fully functional, with instances running inside it, and that guarantees your application has no downtime. Fine. So it is always recommended that you choose at least two AZs in the load balancer's configuration, in the network mapping.
mappings I have a list of five a six as I'm sorry. If you're using different uh region that's that's a reference story.
Whatever is regions are showing up you have to check all the boxes. So we are saying that we want to when whenever
autoscaling launches the instances it can launch instances equally between any one of these as all all these
and the low basser can divert the traffic or route the traffic or send the traffic to any one of these tests in any
one of these. This is this makes means that I'm trying to make my application highly available
at least two a should be selected. For example, if I don't select two a right okay you can see that over here.
Select at least two availability zones and uh the low band set out traffic to targets in these availability zones.
Right. So I will be choosing all the availability zones in this thing so that I make all these authorities inside them
to become eligible to get or receive traffic from the low baser. Okay. You have to set all of them. Please go ahead
and do that. Let me know it's done. Yeah. So choose both the ACS in that case. Whatever as you have uh guys you
must be using California, Mumbai, whichever region you're using, you have to set all the aces.
Yes, you choose both of them. Ravi, whatever is showing up, you have to check all the boxes. In Mumbai, for example, there are three, so you make three ticks. Whatever boxes show up, check them all, all the blue ticks. In my case, I'm using North Virginia, which has six AZs, so I'm making six ticks. All the AZs should be checked in your case too, because I don't want to compromise on making my application highly available. Let's imagine I have 25 instances, or 100: those 100 instances will be scattered across the six availability zones, and autoscaling will try to maintain an equal number of instances between them. Of course, 100 divided by six isn't a whole number, but it tries to strike a balance between the different AZs, deploying roughly the same number of instances in each. So that's what is being done here. All right, so the network mapping guarantees that we can deploy instances in all the AZs within the region. And notice that everything is happening in the same region: the deployments we're doing right now, the AMI, the load balancer, the autoscaling configuration, all of it is inside one region.
What's next? The next thing is step number 25. It's about the security of the load balancer, or assigning a security group to it, so that it receives traffic only from authorized sources and forwards it to authorized, known targets. We won't go too strict in terms of the security group configuration; we could create a tight trust relationship between the load balancer and the instances, but for now we'll keep it simple and create a simple security group for the load balancer. I first thought this step could be skipped, since by default the load balancer receives HTTP traffic from anywhere, which is what we want; but some people will start facing issues if they don't create a new security group, and then we'd have to troubleshoot afterwards. To save ourselves that troubleshooting, we'll create a new security group from the beginning and make sure the right protocol is included in the inbound rules. So let me show you steps 25 to 29.
Okay, see what I'm doing now? I want an inbound rule, because I want the load balancer to receive HTTP traffic on port 80 from the users; after all, we're going to launch an HTTP website. So I want the load balancer to receive that traffic, but I have to open that port at the security group level. What is a security group? A security group is nothing but a firewall that restricts the traffic coming into or going out of a specific component. Right now the group that says "default" is selected; I don't want that one, because I have no certainty, no confirmation, that it allows inbound HTTP traffic from all sources. To be on the safer side, and to spare ourselves troubleshooting later, we'll create our own. There's a line at the top that says a security group is a set of firewall rules that control the traffic to your load balancer; select an existing security group, or create a new one. Once you click "create new security group", it opens a new tab for you. Don't close the previous tab; people get confused switching back and forth between the two tabs. So click on "create new security group"; it opens a new tab, a new page, where you can start creating the security group. Just type "done" once you're on this page. Great. Now it asks for a security group name. You can put anything here; for example, I'll name it app-load-balancer-sg, and I'll copy and paste the same text into the description. So I've put in a name and a description. That's it. Now, in the inbound rules, I add a rule that allows HTTP traffic from Anywhere-IPv4. That's it. Perform these two or three things: name the security group, copy and paste the same name into the description without changing it, and include an inbound rule allowing HTTP from Anywhere-IPv4. Please do this. What could go wrong? The source: you have to choose Anywhere-IPv4. Okay. Once you've done that, simply scroll down, don't change the outbound rules, and click on "create security group" at the bottom right-hand corner. Let me know once you have created it.
Okay. Now go back to the tab on the left, the load balancers page. There are two browser tabs open; go to the one on the left-hand side and remove the default security group. There's a small cross next to it: click the cross and the default one is taken away. Then there's a refresh button next to the security group field; hit refresh, and you'll be able to search for the security group you just created. Basically: hit refresh, remove the default one, and apply the security group you just created. People get confused here: they stay on the new tab and wonder how to get back to the load balancer. The other tab is still open on the left-hand side; go over there, hit refresh, remove the default one, and apply the group you just created. It's fine to apply the new security group first and then remove the default. If the default security group keeps coming back, that's okay: just look for the application security group you created, select it, and you'll have two entries; remove the default one. That's it. So what we've done is ensure the load balancer has only one type of request coming in, which is HTTP requests from all sources. We haven't included any other protocol, for example HTTPS, because that's for a secure website; for that we'd also have to obtain and upload an SSL/TLS certificate, which we haven't done. So no other protocol is allowed, whether it's HTTPS, MySQL, or IMAP; only HTTP requests come in. That's it. Done.
Okay, what's next? The next thing is steps 31 to 33. Once you scroll the page down, it says "listeners and routing". A listener is a process of listening for requests. I want the load balancer to listen for HTTP traffic on port 80 and forward the requests to a target group. Which target group are we talking about? The one we created in the previous step. If you go to the drop-down list, it shows the target group; this is the same one we created earlier. So I'm telling the load balancer: please listen for HTTP traffic on port 80, then forward that traffic to this target group, which will eventually consist of my targets, my instances. Fine. Once you've done that, simply click on "create load balancer", and your load balancer gets created. Let me know once you're done; per the document, you're following steps 31 to 33. And if you click on "view load balancer", you'll see your load balancer is now in the "provisioning" state. Not able to see the target group?
There's a refresh button on the right-hand side of the drop-down; can you hit refresh and see if it shows up? Still not showing? That's a strange case. First things first, make sure you are in the same region where your target group was created; your region should be the same. Now, if you have already created the target group and it's still not showing up, the only thing left is to refresh the complete page. Understand what that means: you're in the middle of configuring the load balancer, and once you refresh the complete page, you lose the configuration you've entered so far and have to start from the beginning. But before you start over, make sure the target group shows up. Okay, you lose the load balancer configuration because the entire page is refreshed; scroll down and look for the target group. If the target group is still not showing up, that means it isn't there. Still showing active? Should you create a new one? There's a troubleshooting step I asked you to do; just give me one response: once you completely refresh the page and scroll down, can you see the target group now, yes or no? "Can you show the target group step again?" Why is there a need to show it again? If you mean the load balancer step, I'll come to you. Either the target group was not created, or you have switched to a different region by mistake; in that case, create a new target group. So San is asking me to show the target group step again. Basically, this is the listener routing step: once you go to "listeners and routing", keep the protocol and port as HTTP on 80, just as they are. Then it says default action: forward to a target group. Go to the drop-down menu, the target group shows up, and you select it. So in the listener routing configuration, make sure the protocol HTTP is selected and that it forwards requests to the target group; select the target group in the list, and after that, go straight ahead and click on "create load balancer". Okay.
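For reference, the whole load balancer step we just clicked through (a name, the internet-facing scheme, IPv4, subnets in at least two AZs, our security group, and the HTTP listener forwarding to the target group) could be sketched in boto3 like this; the subnet IDs, security group ID, and target group ARN are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # use your own region

# Internet-facing application load balancer across at least two AZs (placeholder subnet IDs).
lb = elbv2.create_load_balancer(
    Name="httpd-load-balancer",
    Type="application",
    Scheme="internet-facing",
    IpAddressType="ipv4",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# Listener: listen for HTTP on port 80 and forward to the target group created earlier.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/...",  # placeholder ARN
    }],
)
print("DNS name:", lb["LoadBalancers"][0]["DNSName"])
```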
We're all set as of now; everything is going perfectly fine based on what we planned. The next thing we have to focus on is autoscaling. All right, hang on one second. Based on the documentation we'd create a launch configuration, which is step number 34, but launch configurations have been deprecated and removed from the dashboard. I didn't get the chance to update the documentation, because I only just saw that it's been removed, but that's okay: we'll create a launch template instead of a launch configuration. A launch template is basically the more advanced version of it; it gives you some additional options. Now, what is this launch template we're talking about? When I discussed autoscaling with you, I went over a few of its important ingredients, and the main thing I discussed was what happens when autoscaling has to initiate a scale-out event. What exactly is meant by a scale-out event? If autoscaling has to initiate a scale-out event, it has to launch instances. Scale out means launching; scale in means terminating.
So autoscaling initiates a scale-out event and now has the task of launching instances. The problem is: how does autoscaling know what the AMI, the instance type, the key pair, the security group, and the storage attached to the instance should be? We put all these parameters into a template, a document. That document is called a launch configuration or a launch template. The launch configuration is now deprecated; it's no longer supported, so you won't see it showing up anymore. They've removed it from the dashboard, which is a little unfortunate for us because it would have been easier to use, but that's okay; the launch template is easy too. The launch template will consist of all the parameters autoscaling needs to deploy instances with exactly those same parameters. And what's an autoscaling group? An autoscaling group is a group of identical instances; the definition I gave you is that an autoscaling group is a logical collection of identical targets. What do you mean by identical? They have the same AMI, same instance type, same key pair, same security group. So, to make sure they have the same identity, the same software and hardware configuration, and the same security group applied, you put all of those things in the template. Autoscaling will use this template and deploy identical instances.
Okay. So when you go back to the EC2 dashboard, on the left-hand side you'll see a bunch of options. Under "Instances" there's a third option called "Launch Templates": Instances, Instance Types, then Launch Templates. Click on "Launch Templates" and go to that page; just type "done" once you're there. On this page some of you may still find an option to switch to launch configurations, but in my case it's not showing up; it varies from region to region, so if you're using a region other than North Virginia, the options can differ slightly. Anyhow, we're on the launch templates page. I click on "create launch template". Fine.
Now, in "create launch template", the very first thing is to put in a template name. I'll name it, for example, httpd-launch-template. Fine. And you can add a description; just copy and paste the same text into the description, that's okay. So, first things first, assign the name. I'll go a little slower here because this part is not in the document; the document shows the steps for the old launch configuration, which is why I'm going a bit slow, so that no one lags behind. Put in a template name and something in the description, or copy and paste the same thing. Let me know when you're done; just type "done". Assign a name and a description. Done. Now, once you've done those two things, there's an option under "Auto Scaling guidance": select it if you intend to use this template with EC2 Auto Scaling, i.e., "provide guidance to help me set up a template that I can use with auto scaling". Check that option right now. Done? You have to check this option. Basically, it highlights the important options autoscaling needs to initiate a scale-out event, that is, to increase the number of instances or launch instances for you.
Right. Once I've performed these three things, I go down to "Application and OS Images". Here you have to choose "Owned by me", and in the drop-down menu choose the AMI you created earlier, this one. Go ahead and do that. You're saying: this image is owned by me, I own this image. Fine. So when you create the template, you also have to include the image. There was a question out there about the difference between an image and a template: the image is a subset of the template. The template is the larger overview of everything to be deployed, and the image is one part of it. That's why we created the image: it has to be included in your document, the launch template (formerly the launch configuration), so that new instances can pick up this image and be deployed with the same Apache/PHP application on them. Done. Are we done with this?
What's next? The next thing is the instance type. Under instance types, select "all generations" and look for t2.micro. I think t3.micro would also work, but let's go with t2.micro and select it. If t2.micro doesn't show up in your case, then choose t3.micro. Please select t2.micro as the instance type; only if it doesn't show up, choose t3.micro. Go ahead and do that. Can you see what we're trying to do? We're giving autoscaling a checklist, the list of ingredients, the list of parameters it will refer to, and based on that it will deploy instances with exactly the same configuration. This is done. What's next? Key pair login. For the key pair, choose "don't include in launch template". We don't need a key pair here, because we won't SSH into the instances; so there's no need for it. Choose "don't include in launch template" and let me know once done. Okay, what's the next thing? Go straight to the security group section.
Now, we want to allow HTTP traffic from anywhere. So go to "select existing security group" and choose the same security group you applied to the load balancer, the same one, because the instances have to receive the HTTP traffic too. Select the same security group you applied to the load balancer; please go ahead and do that, and type "done" when finished. If you wish, you could create a new one, but just to save time, and because all we need is the inbound HTTP rule from all sources, the same inbound rule we applied to the load balancer, we'll reuse the same security group, since it already has that rule. Done.
Okay, what's next? Scroll down. Don't configure the storage volumes; don't bother with tags, they're purely optional; and don't go into the advanced details. We don't have to configure any of the advanced features for now; none of that is required. So, what have we done? We put in a name; the main thing is that we chose the image; we chose an instance type; for the key pair we went without one; and we applied the security group that allows HTTP inbound from all sources. Fine. We've just made these selections. We could have gone into more detail, but these selections are more than sufficient for us; the rest of the parameters will take their default values. We're not configuring storage, resource tags, or advanced details; they'll pick up defaults. Click on "create launch template", your launch template gets created, then go to "view launch templates" and you'll see your template. Let me know once you're on the launch templates page; just type "done" when you are. Okay. So you've now recorded the list of parameters for the instances to be deployed: instances with exactly the same properties, same AMI, same instance type, same key pair, same security group, will be deployed automatically.
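Scripted, the same launch template (your AMI, t2.micro, no key pair, the shared security group, defaults for everything else) is a single boto3 call; the AMI and security group IDs below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # use your own region

# Launch template: the parameter checklist autoscaling will use to launch identical instances.
ec2.create_launch_template(
    LaunchTemplateName="httpd-launch-template",
    VersionDescription="httpd-launch-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",            # the AMI created earlier (placeholder ID)
        "InstanceType": "t2.micro",                    # or t3.micro if t2.micro isn't offered
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # same group as the load balancer
        # No KeyName: we chose "don't include in launch template" since we won't SSH in.
    },
)
```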
Now, what's next? The next thing is to create an autoscaling group from this template. The steps in the document are exactly the same from here; it was only the launch configuration that has been deprecated and removed. You can follow the documentation again now; I'm showing steps 38 and 39. Go back and select your template. Once you make a selection, there's an "Actions" menu at the top right-hand side with options like "launch instance from template", so you can launch instances from it, and you can modify the template. When you modify the template, it creates versions of it: version one, version two, version three, version four. You can delete a version, and you can set a default version. A default version means that if you have five versions and you make, say, the third one the default, then whenever you launch an instance from the template, the third version is used; the default version gets priority. You can also manage tags. Okay, now let's go to the option we want, "create auto scaling group"; it's the second-to-last option, because from this template we want to deploy an autoscaling group. What's an autoscaling group? A group of identical instances. Click on "create auto scaling group" and you come to this page; type "done" once you're there. So: select the launch template, go to "Actions" at the top right-hand corner, and the second-to-last option that shows up is "create auto scaling group". Click on that, and it takes you to this page. But you have to select the template first. Fine: select the template, go to "Actions", and click "create auto scaling group". "That's what I did, sir, but this option is only visible to you." No; reasonably, the option is visible to me because I have already performed all the prerequisite steps: the launch template exists and an autoscaling group has existed in my account before. Perform the steps; once your launch template is created and you open the actions menu, you get that option too, which is not yet the case for all of us. You understand the point? To see this option, you have to create the launch template first. If in your case the template is already created and the group already exists, that's why the option is showing up for you.
Let everyone complete the steps, because there's one thing I can tell you from my experience: it's not the steps that are important. These steps are very, very easy. If I gave you this document to perform the steps on your own, you could easily do them. Performing the steps by yourself doesn't make you a champion; I'm talking about all of us. It's not the steps that matter, it's the concepts behind the steps. When you go for interviews, for evaluations, they will look into your conceptual overview. The steps you can simply Google; you can see the steps everywhere. The in-depth concept behind those steps is what makes the difference between a person who just knows the steps and a person who knows the concepts behind them. So focus on the concepts; don't focus on the steps too much. And no one gets an award for finishing the steps earlier than everyone else. That's why I always recommend: don't go too fast, stay with the group, and focus on the concepts. All right.
So I've named the autoscaling group httpd-asg. Name your autoscaling group too. Once you name it, it automatically shows you the launch template details; you don't have to change those, because we're creating the autoscaling group from this launch template. Once you've named it, click on next. Let me know once you're on this page. Basically, you assign a name to the autoscaling group, and in the section below it shows you the details of the launch template you selected; of course, you don't have to modify anything there. Just click on next, and it takes you to the next page.
All right, once you click on next, the next thing is step number 40. In step number 40, there's a section called "availability zones and subnets": you have to choose all the availability zones, check all the boxes. These are the same boxes we checked in the load balancer configuration. I'm telling autoscaling: go ahead and deploy instances in all the AZs. Please go ahead and do that. You don't have to change the VPC; we're staying in the same virtual network. Basically, select all the availability zones in the list, and let me know once done. What's next? Click on next. Don't go into the instance type requirements; you could override instance types there, but we're going with t2.micro and don't need to change it. Click on next, and you come to "configure advanced options"; type "done" once you're on this page. Now it's asking: do you want to include the load balancer in this setup? We do. Choose the middle option, which says "attach an existing load balancer". Then it asks for the target group: select the target group of the load balancer. Go ahead and make the selection: choose the middle option, "attach an existing load balancer", and then select the target group. Basically, we're bringing our load balancer into the picture; we're making sure it's included in this whole setup. Please do that.
Done. What's next? There's a "VPC Lattice integration options" section; it's a newer option and we don't need it. We need to go to health checks, and here you have to check "turn on Elastic Load Balancing health checks", because we want to rely on the load balancer's health checks in this case. The health check grace period is 300 seconds; don't change it. What is the grace period? It applies whenever a new instance gets launched. Suppose there's high demand and autoscaling has launched 10 new instances; those 10 instances have just been launched and are still initializing. We want the load balancer to give those newly launched instances at least 5 minutes to reach the running state. Whenever you launch an instance, you must have seen this: it goes into an initializing phase, during which it boots up and brings up the operating system, services, applications, scripts, and software, and only once everything runs fine does it show the running state. Now the problem arises while the instance is still initializing: if I let the load balancer start health checks against that initializing instance, it will fail them, because the instance is still booting up; it is not yet in the running state.
So if the load balancer starts sending health checks toward an instance that is still initializing, before it reaches the running state, the instance will not respond with a 200 OK message, and the load balancer will mistakenly declare it unfit, unhealthy. To prevent that, you tell the load balancer: whenever an instance gets launched, give it a buffer of 5 minutes; 300 seconds is exactly 5 minutes. Let the instance come up completely and be ready, and only then start sending health checks toward it. So the first 5 minutes serve as the grace period. There's a clear definition over here: the health check grace period is the amount of time that delays the first health check until the instances finish initializing; it does not prevent an instance from terminating when placed in a non-running state. So basically, if the instance is still initializing and the load balancer sends a health check toward it, it would fail the check; to prevent that failure, you give a buffer of 300 seconds. Just an FYI. Now make sure you check this option and that the grace period is 300 seconds; don't change this value. Let me know once done with this. Done? Let me know. What's next? You don't have to configure additional settings.
Click on next. Now I'm going to show you the next step, step number 42. Here we configure the main thing within autoscaling: the group size and the scaling policy. What is the group size? Group size means the minimum and maximum number of instances you want. I coined a term in front of you earlier: capacity. What's capacity? Capacity equals the number of instances. If you remember, I discussed this with you: capacity equates to the number of targets, or instances; same thing. So capacity and size mean the same thing. Now it asks: what is your desired capacity, what's the minimum capacity, and what's the maximum capacity? Desired and minimum are effectively the same thing here; I'm not sure why they're split into two fields, but generally they're the same. Desired and minimum capacity mean the bare minimum number of instances you want running; it never goes below that. Let me give a simple, even silly, example: it's getting hot in Delhi, so I say the least cooling I'll set on my air conditioner is, say, 23°C or 21°C, and the maximum cooling I can go to is 18°C; it's a kind of thermostat band. In our case, the desired and minimum capacity is two.
I'm going with small values here; of course, in a big production environment it might scale up to 20, maybe 200. So I'm saying: in a normal situation, when everything is calm, there's no peak in traffic and the number of requests stays below a certain level, I need two healthy instances running at all times. But suppose there's a surge in traffic, with more and more users trying to access my website. Say that normally I get 10,000 visits, 10,000 requests to my website per hour, but in one particular hour it's 100,000 instead. In that case, I want to be able to increase up to the maximum capacity, which is four. So you set the minimum threshold and the maximum threshold: it never goes below two and never goes above four. We do have to set a maximum capacity; there's no concept of unlimited capacity, you must set the maximum threshold. Fortunately, this can be changed afterwards: suppose you set it to four, and after some time you realize that's too low; you can change it to eight or ten. You can change these values later, but you do have to set a maximum. At the end of the day we are consuming physical resources at the back end, so Amazon Web Services wants to ensure you set a capacity that stays within certain limits. Fine. We're going with small values, but in a production environment you'd go higher. Okay, go ahead and put in these numbers, and let everyone know once you're done.
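Putting the pieces together, the auto scaling group as configured so far (min and desired of 2, max of 4, all subnets, the ALB's target group, ELB health checks with the 300-second grace period) could be sketched in boto3 as below; the subnet IDs and target group ARN are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # use your own region

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="httpd-asg",
    LaunchTemplate={"LaunchTemplateName": "httpd-launch-template", "Version": "$Default"},
    MinSize=2,              # never go below two healthy instances
    DesiredCapacity=2,      # start at the minimum
    MaxSize=4,              # never scale beyond four
    # One subnet per AZ, comma-separated (placeholder IDs):
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/..."],  # placeholder ARN
    HealthCheckType="ELB",          # rely on the load balancer's health checks
    HealthCheckGracePeriod=300,     # give new instances 5 minutes to finish initializing
)
```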
Okay, what's next? The scaling policies, or the conditions. Basically, we've given autoscaling a band: a minimum capacity of two up to a maximum of four. But the real question is: what are the conditions that will force autoscaling to increase or decrease the number of instances? Those are the scaling policies, the conditions. For example, I'll choose the condition "load balancer request count per target". In this scenario, my end users are trying to access my website and send a huge amount of traffic to the load balancer. I can set a value such that if the load balancer is sending more than 100 requests per target on average, i.e., every target, every instance, is getting more than 100 requests from the load balancer, then initiate a scale-out event and launch instances. And once the request rate goes below 100, it starts terminating those extra instances again as the requests go down. So: if requests reach 100 or above per instance, autoscaling can increase the number of instances, and eventually, if requests go down, it will automatically decrease them one by one.
So here's what I'll do: under scaling policies, choose "target tracking scaling policy" and choose the metric. We have CPU utilization, the number of packets coming in, the number of bytes going out; we're going to choose "Application Load Balancer request count per target". Choose the load balancer's target group and set the target value to just two. Of course, in reality it should be in the hundreds; by default it's 50, and it would typically be 100 or 1,000. But because we're just testing and want to see a fast result, we set the target value to two. So I'm saying: if the load balancer is sending more than two requests to each of my instances, that signals a high volume of traffic coming in, and autoscaling has to increase the number of instances; once the number of requests per instance drops below two, it will start decreasing back toward the minimum size. I start with two instances; if, on average, each of those two instances gets more than two requests, go up to four instances, add two more; and once the average number of requests per instance goes below two, come back to the minimum capacity. We're giving it a band: this is the minimum, this is the maximum. If requests per instance are above two, go toward the maximum; if they drop below, go back toward the minimum. And we're giving 300 seconds for the instances to warm up, so that they get enough time to boot up and become part of the autoscaling group. So basically: choose a target tracking policy, make sure you choose "Application Load Balancer request count per target" as the metric, choose the load balancer's target group, put in a low target value of two so you get a fast result, and choose 300 seconds for instance warm-up, which means newly launched instances get enough time to install everything and reach the running state. Let me know once you're done with this.
You don't have to check the option "disable scale in to create only a scale-out policy". Disabling scale-in means disabling termination, which we never want here; it would mean "only increase, never decrease". The option exists, but generally we don't check it. "There's no target group option for me; the option is not there." You might be looking in the wrong place: you have to choose the metric type "Application Load Balancer request count per target". The target group field only shows up if you choose exactly that metric type; you must be choosing a different one. Once you pick the right metric type, the target group will show up. Done. Again, because we're testing, we're going with a low value; in a production environment you'd go with hundreds, 200, maybe 2,000. So: don't check "disable scale in", and don't check "enable instance scale-in protection" either. If you choose either of those two options, you prevent the instances from being terminated, and we want termination to happen when the number of requests goes down. If the value drops below the target of two, we want the additional instances to be wiped out and the group to come back to its normal size of two instances. So don't check those two boxes. Okay, now you don't have to do anything else: the rest of the options are optional tagging and notifications. Click next two or three times, then click create. Let me know once you're done with this.
"Failed to create scaling policy"? I'm not sure why I didn't get that error myself. Okay, here's what you can do, guys. Click on the autoscaling group. You'll have some options along the top: details, activity, automatic scaling, and so on. Go to "automatic scaling". Once you're in automatic scaling, is the target tracking policy showing up under "dynamic scaling policies"? No, not showing up. Okay, I'll show you one more time; guys, you'll have to be a bit faster now. I'm not sure why so many of you hit the same trouble at the same time. Your autoscaling group is created, right? You created the autoscaling group and got this error message, "failed to create the policy", so we have to troubleshoot it. To do that: the group name is a link, so click on the autoscaling group. Once you click on it, you have all those options along the top: details, activity, and then the third option, "automatic scaling". Click on that option and it takes you to the automatic scaling page. Verify whether or not the target tracking policy is showing up there. If it's not showing, I'll help you troubleshoot and create the policy. It is not? Okay, not showing. So, how do we create it again? I'll show you; I'll remove the policy at my end first: actions, delete. Okay, so I've also deleted my policy and don't have anything either. Now there's a button called "create dynamic scaling policy". Click on that, and let me know once you're on that page. Now choose the metric type "Application Load Balancer request count per target", the load balancer's target group, a target value of two, and an instance warm-up period of 300 seconds: the same options as before, the same metric, the same target group, target value two, warm-up 300 seconds. Basically, this is the policy that failed to get created the first time; you configure the same thing again. Are we all done with this? Click on create. That's it. Once you click on create, the policy gets created and the error message is gone. Okay.
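The target tracking policy we just recreated corresponds to a single put_scaling_policy call in boto3. The ResourceLabel identifies the load balancer and target group; the value below is only a placeholder in the format app/&lt;lb-name&gt;/&lt;lb-id&gt;/targetgroup/&lt;tg-name&gt;/&lt;tg-id&gt;.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # use your own region

# Target tracking: keep ALB requests per target around 2 (a deliberately low test value;
# in production this would be in the hundreds or thousands).
autoscaling.put_scaling_policy(
    AutoScalingGroupName="httpd-asg",
    PolicyName="httpd-request-count-policy",
    PolicyType="TargetTrackingScaling",
    EstimatedInstanceWarmup=300,  # give new instances time to boot before counting them
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # Placeholder resource label; build yours from the LB and target group ARNs:
            "ResourceLabel": "app/httpd-load-balancer/0123456789abcdef/targetgroup/httpd-tg/0123456789abcdef",
        },
        "TargetValue": 2.0,
        # DisableScaleIn stays False: extra instances should terminate when traffic drops.
    },
)
```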
Fine. Now, next to "automatic scaling" there's a tab called "activity". Go to activity, or to instance management, and you'll see two instances being launched. So we actually got an opportunity to troubleshoot: if you get this "failed to create policy" message, go to automatic scaling and create the policy again. All right, what's next? The next thing we have to do is testing.
Okay, let's do this: go to steps number 51 and 52. From the autoscaling page, open the menu (the three lines) on the left-hand side and go to the load balancers page. There you'll see the DNS name of the load balancer. Go to the load balancer and, next to it, there's the DNS name with a copy button; copy it right now. Then paste it into the browser and try to access the website. So I paste it, press enter, and I'm able to see the page: the load balancer is taking my request and sending it to one of the two instances running at the back end. Now you have to increase the number of requests; it will take about 5 minutes to see the result. Make some refreshes: refresh 30 or 40 times. When I say refresh, I mean either put the cursor in the address bar and press enter, or keep hitting the browser's refresh button. Getting "502 Bad Gateway"? That means there must be some issue with the security group, with the instance deployment, or with the health checks. In any case, guys, please hit refresh 30 or 40 times on the load balancer's DNS name.
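Instead of hammering the refresh button by hand, you could generate the same burst of requests with a few lines of Python; the URL below is a placeholder, so paste in your own load balancer's DNS name.

```python
import urllib.request
import urllib.error

# Your own load balancer's DNS name goes here (the value below is a placeholder).
url = "http://httpd-load-balancer-1234567890.us-east-1.elb.amazonaws.com/"

# Send 40 requests, roughly what 30-40 manual browser refreshes would do.
for i in range(40):
    try:
        with urllib.request.urlopen(url) as resp:
            print(i + 1, resp.status)   # expect 200 OK from one of the instances
    except urllib.error.HTTPError as err:
        print(i + 1, err.code)          # e.g. 502 Bad Gateway if targets are unhealthy
```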
I need to see that. Okay, now, guys, once you've made 30 to 40 requests, go back to the autoscaling group on the left-hand side. Click on the autoscaling group, go to "activity", and there's a refresh button; can you see it? To connect the dots: go to autoscaling groups, click on your autoscaling group, go to activity, hit the refresh button, and you'll see that two more instances are being launched. Can you see that? Just try it out. This has been done; we're getting the result, guys. Now check your autoscaling group; it's time to clean up. Select your autoscaling group, go to actions, and click on delete. It says the autoscaling group consists of running instances; just type in "delete", click on delete, and the autoscaling group is removed. And guys, if you want to repeat these steps during the three days ahead of you, you don't have to remove the AMI; you can skip the first two steps. Okay, guys, please remove the autoscaling group, and let me know once that's done.
Deletion initiated? Now go to load balancers. Select the load balancer, then actions, and delete the load balancer. Even if you don't delete it, it will stay in your infrastructure, but we don't need it anymore, so delete it: type to confirm, and your load balancer is removed as well. Please go ahead and do that. Basically, you have to delete the load balancer and the autoscaling group; deleting the autoscaling group is what terminates the instances.
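The cleanup is likewise two boto3 calls; ForceDelete tears the group down even while instances are running, matching the console's type-delete-to-confirm flow. The load balancer ARN is a placeholder.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Deleting the auto scaling group also terminates its instances.
autoscaling.delete_auto_scaling_group(
    AutoScalingGroupName="httpd-asg",
    ForceDelete=True,  # don't wait for the group to scale to zero first
)

# Delete the load balancer as well, since we no longer need it.
elbv2.delete_load_balancer(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/httpd-load-balancer/...",  # placeholder
)
```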
Okay, what about the rest of the resources? The rest are free of cost, except the AMI. For the AMI you have already paid this month's amount: the image can be used until the 30th of June without paying anything extra, because you've already paid the 10 to 15 cents for it for the month of June. "Not able to delete the autoscaling group": are you getting any message, any error back? You have to select the autoscaling group, then actions, and delete it. For the AMI: choose the image, go to actions, click on "deregister AMI", and then confirm. In the meantime, let autoscaling do its work; you've initiated the deletion, so you don't have to do anything at all. Once you deregister the AMI, go to "snapshots" under Elastic Block Store, choose the snapshot, go to actions, and click on "delete snapshot". Fine. So once you've deregistered the AMI on the AMIs page, you have to delete the snapshot of the image as well.
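Scripted, deregistering the AMI and deleting its backing snapshot (so it stops costing those few cents) looks like this; both IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Deregister the image first; the EBS snapshot behind it must be deleted separately.
ec2.deregister_image(ImageId="ami-0123456789abcdef0")      # placeholder ID
ec2.delete_snapshot(SnapshotId="snap-0123456789abcdef0")   # placeholder ID
```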
Otherwise you'll keep being charged for it. You don't have to remove the security groups; there's nothing to pay for a security group. Getting an error message while deleting? Hit refresh; basic computer skills: if your page gets stuck, refresh your browser. Go to the autoscaling group again, select it, actions, and delete it. Honestly, I'm not sure why people are in such terrible confusion here: you have initiated the autoscaling group deletion, it's there, it is removing itself, there's a progress cycle going on. Let it be. Machines don't work like human beings; they don't get curious. You've initiated the process, it is running, and it will remove everything automatically; the back end is terminating the instances. Have some patience; you don't have to sit in front of the computer waiting for it. What you need to understand next is that everything on AWS is given to you in the form of access, and whenever you access an account that was signed up for using a credit or debit card, that's called the root account.
Okay, what's a root account? A root account is the master account: everything on AWS can be accessed from it without any issues, and you don't have any restrictions as such. You can perform any operation and deploy any of the services; in the root account you have all the access. Now you might say, "but ours are free tier accounts." Yes, a free tier account is also a root account; it's called the root or master account. So give me a few seconds to explain this by writing down some important points. What happens is that when you start working in an organization, you don't get access to the root account; instead you will be working as one of the IAM users. Let's start with the basic concept before you start clicking and navigating around. First of all, let's discuss the theory behind IAM, why you need IAM, because without the theoretical foundation, clicking and navigating won't make any sense to you. So let's first understand the relevance. IAM stands for Identity and Access Management. Let's understand why this term is used. Whenever you work in an organization, you are identified by your role: there are identities that you hold in the organization. For example, you might be working as a developer, or as a manager, maybe a team lead, a quality analyst, or a database administrator. You have identities that you establish for yourself in your organization.
Now suppose there's a company, say Vodafone, or let's say JP Morgan. This organization wants to sign up to AWS. The very first step it will take, in order to sign up, is to create something called a root account. Any organization that wants to get started will sign up to AWS using a root account. Even the free tier accounts that you're using right now are root accounts. A root account is also called a master account; you can call it the master account, but on AWS we use the term root account. All right. So any company getting started will sign up for a root account. Now, technically, or I would say from the business point of view, the root account is never supposed to do any deployment, implementation, or configuration on AWS. What I mean is: the root account is not supposed to be used for the deployment of AWS resources. That's a strict no-no. Now you'll say: we're able to do it from our side, and our free account is also a root account. Yes, but we're doing this purely for hands-on practice and testing purposes; you're not using a free account as part of a business. Whenever these companies start working, they work in a very process-oriented fashion. So what happens is that these root accounts are kept separate; they're preserved just for signing up to AWS and are not used for any deployment of resources. They've been locked away, or I would say kept aside. They're not supposed to deploy instances, load balancers, autoscaling groups, EFS, FSx, Global Accelerator; they're not supposed to perform any type of deployment.
So the real question is: who does that? Basically, the second step is that the root account will create identities. What are identities? What do you mean by identity? For example, imagine there's a person in your organization named Steve. Steve is an IAM user. Let's say he's a software developer: he's been hired by JP Morgan, he works as a developer, and he builds cloud applications for JP Morgan. Steve is a developer; what is that? That's an identity. Fine. When you identify Steve as a developer, you declare him as an IAM user: it's an identity, you create it, and it is used as an IAM user. Now, Steve as an IAM user will sign in under the root account and perform all the deployments. So Steve, as an IAM user, should be doing all the deployments, all the configuration, and so on; an IAM user should technically be able to perform these things. The IAM user signs in under the root account, saying, "okay, I'll be doing all the deployments and configuration." The root account makes all the provisioning for this user in two ways. Number one, the root account assigns, or allocates, credentials to the user; credentials means a login ID and a password. And number two, beyond allocating the credentials, it also manages permissions. Allocating credentials and managing permissions come under the category of access management. That's the reason IAM stands for Identity and Access Management. What you do is create the identities: Steve is a developer, Kunal is a database administrator, Prashant is a Linux administrator, this person is a Python developer. These are identities, and these identities are called IAM users. The IAM users sign in under the root account, and the root account allocates credentials to them and manages their permissions; those two things come under access management. That's why we call this service Identity and Access Management: you identify the IAM users in your organization and you manage their access. That's the whole name.
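To make the idea concrete, this is roughly what "the root account creates an identity, allocates credentials, and manages permissions" looks like through boto3. Steve, the password, and the managed policy are illustrative choices, not anything fixed by the course.

```python
import boto3

iam = boto3.client("iam")  # IAM is a global service: no region selection needed

# Identity: create the IAM user.
iam.create_user(UserName="steve")

# Credentials: give Steve a console password (he must change it on first sign-in).
iam.create_login_profile(
    UserName="steve",
    Password="TempPassw0rd!",   # illustrative only; use a strong generated secret
    PasswordResetRequired=True,
)

# Permissions: attach a managed policy; EC2 full access is just an example.
iam.attach_user_policy(
    UserName="steve",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2FullAccess",
)
```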
Now I'll give you a simple analogy. Imagine a businessman who opens a factory, a large gigafactory where 10,000 workers can work. The owner of the factory will buy or lease the land, set up the machines, and set up the entire infrastructure. But who does the labor? Those 10,000 workers. Suppose Elon Musk is the CEO of Tesla: he manages all of Tesla's operations as CEO. But who works on the assembly line? It's the workers on the assembly line who are responsible for manufacturing Tesla cars. So the management, the CEO, the chiefs of staff and executives take care of managing the operations overall, but it's the actual people, the laborers, or in our world the software engineers and quality analysts, who do the job. The same concept applies here: JP Morgan creates a root account just to be on AWS. The root account might be created by your vice president or chief technical officer, and your company's credit or debit card will be used to sign up for it. Now the question is: who will do the deployments?
Linux administstation, database administration, uh Windows administration, development, software
development, quality analyst, u solutions architecture, being a solution architect, uh system administration,
DevOps, machine learning models. Who will do those things on cloud? It's the IM users, the people from IT and
software background. So JP Mong for example has 10,000 IT software people and all of them are migrated to AWS so
that they the let's suppose JP Morgan's entire infrastructure is on AWS. It has 10,000 employees in the form of
IT software uh teams. So that there are 10,000 employees in in uh IT software departments. Those 10,000 applies are
developers, quality analysts, Linux administrators, database administrators, window administrators, resource
architects and so forth and they'll be using AWS to perform day-to-day job. So they'll be signing
into the root account perform all the job for the root account and root account will say okay I'll give you some
permissions and I manage your access. So that's why it's called identity and access management. create identities on
IM and you give you manage your access. I think this is clear why we use the term identity and access management. So
this means that once you start working organizations you don't have access you will not get access to the root account
as you're getting in a free account. Basically you'll be an IM user and being a cloud architect you your your uh
responsibilities include designing provisioning and and implementation of resources on AWS. So you'll be
designated as an IM user. Your company will have a certain root account. So you'll part of a root account and you'll
be signing up to the root account through the credentials given to you and you'll be doing your day-to-day job.
Okay, this is the what the concept is. Now show you exactly that how this this uh thing is put into perspective. First
thing first, how can you differentiate between a root account and an IM account? Let's start with the with the
root account first. If you navigate to the top right hand corner, you will see there's a there's a special account ID.
It's a 12digit uh unique ID will be given to you. This 12digit unique ID is is being offered to you. I would say
this is uh this is a root account ID. Every root account, if you just go to to the top right hand corner and you just
navigate, you will see that there's a root account ID, right? It's a 12digit root account ID.
So every root account will have a unique ID given to that specific account. That root account is used that root account
ID is is a special identifier. Okay. AWS will identify or differentiate your account from other accounts using your
root account ID. Okay. This account ID is not given to an IM user. Let me show you that. For example, I
Let's go to IAM right now; I'll go to the IAM dashboard. One thing to notice: the IAM dashboard does not require region selection. If you look at the top right corner, it shows Global. Why is it a global service? Because IAM doesn't differentiate between regions. When you deploy instances, load balancers, auto scaling groups, or databases, you have to choose a region, but IAM is not region-specific because it is purely an account management service. Once I'm in IAM, I can go to Users on the left-hand side — I may have a bunch of users out there, so let me navigate over and delete those users first.
Now I'll go ahead and create a new user. Remember, this is my root account, and from a business point of view the root account is not supposed to do any deployments; it's an IAM user that will do that. You can see the definition on screen: an IAM user is an identity with long-term credentials that is used to interact with AWS in an account. So I click on Add users and name the first user Steve. Steve is the user ID. For the username you could take, say, the employee name Steve Jobs and use the initial of the first name plus the last name, "sjobs"; or for Rohan Arora, type "rarora"; or just use the first name. So Steve is the username that Steve will use to sign in as an IAM user. Then I set a custom password for this user and go ahead and create him — I'm not going through all the details right now, but I'll show you the result. It says the user was created successfully, and that you can view and download the user's password and email instructions for signing in to the AWS Management Console.
This is the console sign-in URL, which I'll give to the user. If you look at the format of this URL, it is: https://, then the 12-digit root account ID, then .signin.aws.amazon.com/console. So this is the URL I'll give to the IAM user Steve, along with the password set for him. Now let me show you. I copy this URL and switch to a different browser — Google Chrome in this case — and paste it into the address bar.
It automatically picks up the account ID of the root account. Can you see that? I enter the username Steve and the password I set, and click Sign in. If the credentials are accepted, Steve will be able to sign in without any issues. And at the top right-hand corner you can see the root account ID and the IAM user Steve, which means the IAM user Steve is an identity under that root account, and Steve can perform deployments based on what the root account grants to this user. The root account can allow Steve to access certain services — EC2, load balancers, Auto Scaling, maybe RDS, for example. We'll discuss afterwards how to give access to these users; as of now I've only set credentials for this user so that he can sign in to the root account and perform day-to-day operations.
Fine. Now, the issue in this case is that Steve is able to sign in as an IAM user, but the moment he tries to get into any of the services, he'll be blocked from accessing them. Why is that? I've already allowed Steve to log in, so what exactly is the problem? What have I missed as the root account owner, because of which Steve cannot access any service? For example, Steve goes to S3 and tries to create a bucket, or goes to some other service — and if he goes to Buckets right now, it says he doesn't have any permissions.
So I've done only half of the job as the root account owner. I've given Steve access to log in to my root account, which means I've managed the credentials for this person, but I didn't give him access. As an IAM user you're able to log in via the user ID and password the root user gives you, but the root user also has to give you access to specific services — EC2, S3, RDS, load balancers, Auto Scaling — so that you can use them. The type and level of access is also provided by the root user for an IAM user. So the root user generates credentials for an IAM user and is also responsible for giving permissions to that IAM user. The root account is a master account, and the master will not do the work itself; the master sits and gives instructions to the IAM user: "I give you the permissions; you perform these operations on a daily basis." So access management includes both giving credentials and giving permissions.
The next thing that comes into the picture is: if that's the case, then how do we manage the permissions? What's the process? The thing we need to understand is that to manage permissions, you make use of a concept called an IAM policy. Let's try to understand that next.
IAM is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use your resources. So IAM is basically an account management service that lets you create IAM users who can sign in, and lets you give them permissions, or authorization.
Here's a simple overview of how permissions are managed in a day-to-day business environment. Say you have a central server running different applications. A data scientist is only given access to the data analytics dashboard. A web developer only gets access to the web development tools. An administrator generally gets all the access. So there's a central server with different services and applications running on it: the data scientist can only access the analytics tools, the web developer can only access the development tools, and the administrator can access everything on that server. Generally, in a business environment, an admin has almost all the access, because admins have to do certain things that require complete access.
Fine. That is how we do access management in an organization. Now let's discuss what an IAM user is. It represents an entity created in AWS — it can represent a person or an application, but for our purposes think of it as a person. There are no permissions given by default: nothing is allowed, and you have to explicitly give permissions to the user. We'll discuss the types of access later, but we need to understand how permissions are assigned. Permissions are assigned using a concept called IAM policies.
So once you create an IAM user, you perform two things for it. One is authentication, which means you set the username and password for the user. The second is authorization, which defines what that person is authorized to do. Authorization is done using IAM policies, which means the type of access a person gets is defined or decided by IAM policies.
Now, what are IAM policies? An IAM policy is basically a JSON document — JSON stands for JavaScript Object Notation — that helps you define permissions for IAM entities (entities meaning users and groups; we'll discuss groups shortly). I'll show you exactly what it looks like. Through policies you define the permissions for your users, and if you don't assign any policy to an IAM user, that user gets no permissions whatsoever.
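As a rough sketch of the shape — we'll dissect each field in a moment, and the placeholder values here are only illustrative — every IAM policy document follows this skeleton:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow or Deny",
          "Action": "which API actions, e.g. ec2:RunInstances, or * for all",
          "Resource": "which resources (ARNs), or * for all"
        }
      ]
    }

Keep this skeleton in mind: every policy we look at next is just this structure with different values filled in.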
So the thing I missed in the earlier example, where I only allowed Steve to log in, was the IAM policies. Let me go back to my root account. I click Return to users list and then Continue. Steve is now showing up as my user. I click on Steve's username, and under Permissions it says "no resources to display" — that's the section I'm talking about. If I click Add permissions and choose the last option, Attach policies directly, these are the policies I was talking about: the IAM policies you can use to define permissions for the IAM users you create in the root account. The root account owner allows the IAM users to perform certain operations through these permission policies.
Let me explain with an example, and let's try to understand the anatomy of the JSON document. I go to AdministratorAccess. This is a very special type of access that I would give to only a very few people, because administrator access gives an IAM user the entire access. The description reads: provides full access to AWS services and resources. And this is the JSON document behind it. So right now I'm going to discuss the anatomy of this JSON document with you — meaning we need to understand exactly what each line of the document means.
Number one is Version. It looks like a timestamp, and in practice you'll almost always see the same date here. Strictly speaking, it is the version of the policy language the document is written in, not the date your particular policy was created — which is why every policy you open shows the identical date string. So the first element is Version.
What's the second element? The second element in the document is Statement. What's a statement? A statement is what actually specifies the permissions. And inside each statement, the first thing you'll see is Effect. What do we mean by effect? Effect means what kind of effect this permission has, and there are only two values: Allow or Deny. Do you want to allow access to certain services, or do you want to deny access? One is Allow, the other is Deny. Fine.
What's the next element? Action. Action defines what type of action is covered. Let me give you some examples: "launch instance" is an action, "delete load balancer" is an action, "terminate instance" is an action. What types of actions would you like to permit? Do you want to launch something, delete something? Now, in this policy the Action is a star — an asterisk. What does the asterisk mean? Wherever you see an asterisk in a policy document, it is a wildcard meaning all, everything, any. So an asterisk as the Action means all actions.
What's next? The fifth element you can see is Resource. Resource defines which resources this permission covers. For example, an EC2 instance is a resource. An application load balancer is a resource. A launch template is a resource. Auto scaling groups are resources. So Resource means: which resources do you want to include in this permission? For example, if the resource is an EC2 instance, then the corresponding actions could be launch instance, stop instance, or terminate instance. Once you define the actions, you then define the resources on which those actions will be performed.
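For instance — this is a hypothetical custom policy I'm writing purely for illustration, not one of the managed policies on screen — allowing just those three instance actions on all resources would look like this (the console's "launch instance" corresponds to the API action ec2:RunInstances):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:RunInstances",
            "ec2:StopInstances",
            "ec2:TerminateInstances"
          ],
          "Resource": "*"
        }
      ]
    }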
Okay. Now, if I go back to the AdministratorAccess policy, the Resource there is also a star — the same wildcard — meaning all resources. And that's the complete anatomy of the JSON document: you have the Version, the Statement, and inside it the Effect, Action, and Resource. Reading this policy back: the Version is the policy-language date; the Statement is, you could say, the description of the permissions you want to specify; the Effect is Allow (it could be Allow or Deny); the Action is all actions; and the Resource is all resources. Through this policy I'm allowing all actions on all resources — that's why AdministratorAccess gives full access to all services and resources. Is this policy clear? Any questions or doubts? Now, if I assign this policy to Steve, Steve will be able to perform all actions.
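To put the anatomy together, the AdministratorAccess document we just walked through is simply:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "*",
          "Resource": "*"
        }
      ]
    }

Effect Allow, Action * (all actions), Resource * (all resources) — that one statement is the entire policy.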
So in this case I choose AdministratorAccess and click Next. What I'm saying is: this user, Steve, gets administrator access. I click Add permissions, and now Steve has this policy assigned to him. You can see it on his user page: the IAM user is Steve, and based on the permission policies he has administrator access. So if I go back to Steve's session in the other browser, previously he was getting that "no permissions" message; if I hit refresh, you'll see the message go away. Steve now has complete access — he can deploy, delete, launch, or terminate any resource.
Let me take one more policy as an example: AmazonEC2FullAccess. Remember, EC2 is a complete dashboard: it includes load balancers, Auto Scaling, and CloudWatch. In EC2 full access, the first statement is Action "ec2:*", which means all actions under EC2 are allowed, on all EC2 resources — images (AMIs), instance types, key pairs, security groups (yes, even a security group is a resource). All the resources, all the actions under EC2 are allowed.
Then I also want to allow access to all the load balancer resources. A load balancer has two main resources: the target groups and the load balancer itself. The action is "elasticloadbalancing:*", which means all actions under Elastic Load Balancing on all the resources underneath: you can create a load balancer, delete it, configure it, modify it — you can see the star there — on all the load balancer resources.
I'll skip the CloudWatch statement because we haven't covered CloudWatch yet. Let's come to Auto Scaling: Effect Allow, Action all actions under Auto Scaling — I can delete an auto scaling group, launch one, configure the scaling policy, change the desired capacity — and Resource all resources under Auto Scaling. Auto Scaling has two resources: launch templates and auto scaling groups. And then there are some additional statements, for example allowing the creation of a service-linked role, which gets applied for things like spot fleet, transit gateway, and VPC support — we'll discuss roles afterwards. So if you look at the main part of this policy: under EC2 it grants access to almost all the EC2 resources — instances, images (AMIs), instance types, key pairs, security groups — plus all the access to the load balancers, all the access to CloudWatch, and the entire access to Auto Scaling.
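Condensed to its core statements — the managed policy on screen also carries a service-linked-role statement with conditions for things like spot fleet and transit gateway, which I'm omitting here — AmazonEC2FullAccess boils down to:

    {
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow", "Action": "ec2:*", "Resource": "*" },
        { "Effect": "Allow", "Action": "elasticloadbalancing:*", "Resource": "*" },
        { "Effect": "Allow", "Action": "cloudwatch:*", "Resource": "*" },
        { "Effect": "Allow", "Action": "autoscaling:*", "Resource": "*" }
      ]
    }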
This means that if I go to the EC2 dashboard in the IAM user's session, the user has complete access to everything on that dashboard: instances, Elastic IPs, load balancers, snapshots, volumes, placement groups, key pairs — all the options you can see on the left-hand side. In that policy, Auto Scaling, CloudWatch, and load balancing are mentioned as separate services, but they also come under the EC2 umbrella: when you access load balancing and Auto Scaling, you go through the EC2 console, because they're in the category of EC2 services. So, for example, "allow all actions on all load balancing resources" covers all of these — the load balancers and the target groups. Through this policy I'm giving complete access; that is EC2 full access.
All right. One more thing: there's no hierarchy as such — there's the root user, and then we have the IAM users.
We can't have an IAM user under another IAM user; there are only two categories, root user and IAM user. What can happen is that an IAM user has special privileges — the administrator is the most powerful IAM user, and I can give administrator access to any user so that he or she gets complete access. Also, the policies I've shown you so far are the default managed policies: they exist by default, you just use them, and they are common to all of us.
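By the way, you aren't limited to the managed policies — you can write your own. As a hypothetical sketch, here's a custom policy that grants full EC2 access but explicitly denies terminating instances. An explicit Deny always overrides an Allow in IAM's evaluation, which makes this pattern useful as a guardrail:

    {
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow", "Action": "ec2:*", "Resource": "*" },
        { "Effect": "Deny", "Action": "ec2:TerminateInstances", "Resource": "*" }
      ]
    }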
The next step we're going to look at is groups. What are groups, what are they meant for, and why do you need them? A group is a collection of IAM users: you group people together based on their skill set. Why do we group people together? Let me take an example from Intellipaat. Intellipaat creates groups for different batches: one batch for AWS, a second for Linux, a third for data science, a fourth for AI and machine learning, a fifth for Python, a sixth for full-stack web development. Why are different batches grouped like this? For better management — so that the AWS people can be put in one batch, the full-stack web development people in another, and those batches can be run separately.
Similarly, think about a large organization with hundreds or thousands of employees. Say JP Morgan has 10,000 IAM users; managing the permissions of every user individually would be very challenging. In a big organization, some people work as software developers, some as quality analysts, some as database administrators, some on Linux or Windows, some on AI and machine learning, some as full-stack web developers — and they're all using AWS. And 10,000 is actually a small number: big IT firms like HCL, Infosys, and TCS, to name a few, have lakhs of employees. As more and more organizations move to cloud computing, there will be plenty of companies with hundreds of thousands of people working on the AWS platform. Assigning permissions to every single IAM user is a very tough task, because if you start managing permissions or assigning policies user by user, the chances are you'll start making blunders and mistakes.
What kind of blunders and mistakes am I talking about? For example, I might give a person more permissions than they need, or fewer than they need, or I might accidentally give someone administrator access. As I told you earlier, administrator access grants complete access, and only a handful of people in your organization — team leads, vice presidents of engineering, a select few administrators — should have it. If I accidentally give it to the wrong people, they could terminate my instances and shut down my entire network. I don't want to take that risk.
So, for the sake of better, cleaner permissions management, I group users together into categories. For example, I have one group for admins, a different group for software developers, a different group for operations — I group people by job profile. People can be grouped based on skill set, job profile, the roles and responsibilities assigned to them, or the projects they're working on. Then I start putting the IAM users into these groups. The admin1, admin2, admin3 you can see are IAM users; dev1 through dev5 are IAM users; ops1, ops2, ops3 are IAM users. And they're being deliberately put into three different groups: group one, group two, group three.
Now, instead of assigning permissions to every user, I assign the permissions to the group. For example, I assign a single policy — AdministratorAccess — to the admin group, and all the IAM users under that group get the same permissions.
For example, for the operations group, let me assign two policies: AmazonEC2FullAccess, and — S3 is a storage service we'll discuss afterwards — AmazonS3ReadOnlyAccess. Those are the two policies I've assigned to the operations team, and all the people under that group get the same permissions.
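For reference, AmazonS3ReadOnlyAccess is a very small document — in its classic form it is essentially the following (current versions of the managed policy add a few more read-style actions, but the idea is the same):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Resource": "*"
        }
      ]
    }

Only Get- and List-style actions are allowed, so the operations group can read objects and list buckets but can never write or delete anything in S3.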
This is a very neat, clean, and process-oriented way of managing permissions. Based on best practices, we never assign permissions to users directly. Instead, we group users together — categorically, based on jobs, job profiles, skill sets, projects, teams, or the business units they belong to — and we assign the policies to the group; automatically, all the users underneath get the same permissions. This helps you manage permissions effectively and efficiently in a big environment with hundreds or thousands of IAM users on board. And this is not a new concept: if you've heard of Microsoft Active Directory (AD), or other account management tools, they use the same concept of groups. So in this case you have to group the users together.
Next is a DIY — do-it-yourself — exercise. Let me repeat the problem statement: you have to open the document and log in to the console. What's the purpose of this exercise? Quite simple: it will give you insight into how to create IAM users and then group them together, and, while you group them, how you assign them policies.
Before we start, let me quickly refresh the concepts. What is an IAM user? As we discussed, an IAM user is an entity that gets access to the root account; it's how you allow the people in your IT and software business units and teams to access the root account and do their day-to-day actions.
And what's the difference between a root account and an IAM account? A root account has complete access — it is the master account. The master doesn't do the day-to-day labor; it's the IAM user who performs all the actions on a daily basis. An IAM user can be, for example, your software developer, quality analyst, Linux or database administrator, networking person, solutions architect, system administrator, or DevOps professional. So the IAM user — Steve — gets access to the root account, and the root account creates the credentials and manages the permissions.
Fine. So we have the root account: any organization that wants to get started on AWS signs up with a root account. The root account is not supposed to perform deployments or configuration; generally, in a production environment, the root account is locked away. It is used just to sign up, not to perform deployment operations. The root account first creates identities, and an identity is an IAM user. The IAM user logs in to the root account, and the root account gives the credentials and manages the permissions for that IAM user. All the deployments and configuration are carried out by the IAM users.
The root account performs two things for an IAM user: authentication, which means managing the credentials, and authorization, which means managing permissions using IAM policies. An IAM policy is a JSON document that helps you define the permissions of IAM entities: what type of actions an IAM user can perform, what type of access the person has, is decided by the IAM policy. And we discussed the complete anatomy of the JSON document — what the policy looks like. We also discussed one example.
Unfortunately I'm unable to find that slide, so I've just logged in — I'm doing this revision to make sure we refresh the previous concepts. In a large organization, it is a very tough procedure to go person by person assigning policies, because the chances are you'll make blunders or mistakes. And in a business environment, IAM is a security tool — you need to understand that. You can't have any appetite or tolerance for security issues that could haunt your entire infrastructure or your operations.
So what you do is group people together. In the diagram, the blue boxes are the IAM users, and the pink boxes are the groups: the IAM users are put inside, or under, the groups — they are part of the groups. You categorically create different groups based on skill sets, teams, business units, or job profiles, and once you group people together, you've identified the groups of people working in your organization. Then, instead of assigning permissions user by user, individual by individual, you assign the permissions by group: the policies are assigned to the group, and once you assign the policies to the group, all the IAM users under that group get the same permissions. This is a process-oriented, neat, clean, and organized way of managing permissions. Once you start working in organizations, you'll see that things are very well organized and structured — there's no scope for mistakes or blunders; things have to be organized. What Amazon recommends is that even if you have only a handful of users, you should still use this methodology.
Still use this process of permissions management. Having said that, that's what we've discussed, including a brief overview of policies. In this hands-on we'll create three users — I'll walk you through it, it's a very simple process — then group them into a group called developers, and then assign policies to that group. The policies are basically the tools the developers will use to perform their day-to-day operations.
We're going to perform the first three steps now. Search for IAM under Services: just type in IAM, click on it, and it will take you to the IAM dashboard. Please navigate to the IAM dashboard and let me know once you've reached that page.
Once you're on the IAM dashboard, you'll see there are different resources under IAM: user groups, users, roles, policies, identity providers. You can access a resource either by clicking on it in the summary on the main page, or from the same list of resources on the left-hand side — user groups, users, roles, policies, and identity providers. Either of those two places will navigate you to the resource. The resource we're looking for is Users.
Either click on the Users count on the dashboard — in your case it will probably say zero; mine shows one because I created a user earlier while explaining the concept — or click Users on the left-hand side. Whichever you click, you land on the same page. Let me know once you've navigated to the Users page.
For those having trouble finding things on AWS: there's a bar on the top which says Services, and next to it is the search menu. Just type the three letters I-A-M and IAM will automatically populate; click on it. If you can't see a search menu, look for the search bar with the magnifying glass at the top — type I for identity, A for access, M for management — and you'll see a list of services in front of you; click on IAM. And for the question "how do I log in as an IAM user": wait a little, we haven't reached that step yet — that's exactly what we're building toward.
Once you're on the Users page, there's a button called Add users on the right-hand side.
Once I'm on the user details page, what's next? You have to enter the username. The username can be anything: for example, I can type the initial of my first name plus my surname — "rarora" — or just type Rohan, or type Steve. My first name can be my username. So put in a username, then choose the option "Provide user access to the AWS Management Console". Then there's a question asking whether you're providing console access to a person — choose "I want to create an IAM user".
So perform these three steps: choose a username (Steve, Chris, or your own first name — anything), select "Provide user access to the console", and choose "I want to create an IAM user". What I'm doing is assigning a username for the user, giving my consent that this user gets access to the Management Console, and confirming that I want to create this IAM user. Let me know once you're done — I'm going very slowly here, so take your time.
Okay, what's next? Once you've given your consent, the next thing is to set a console password. Choose "Custom password". It says the password must include at least three of the following character types: uppercase letters, lowercase letters, numbers, and special symbols. Go ahead, think of a custom password, and set it — nobody else needs to know it.
So the username we set at the top is the user ID, and the custom password is what this user will use to sign in. Through this username and password, the IAM user will be able to sign in to the root account — the very account we're working in right now — from his or her own end, using his or her own credentials. All right, I guess this is done.
What's the next thing? You have to uncheck the option "Users must create a new password at next sign-in". What is this option about? The password you set could be a common password used for multiple users; when this option is checked, the very first time the user signs in, they're prompted to change the password and set their own. We don't need that in this case, because we're just doing a hands-on — so make sure you don't select this option. It is recommended in practice, but it's not mandatory. If it were checked, then as soon as the user signs in with the custom password you set, they would get a prompt to change it and choose a password of their own choice. So, uncheck "Users must create a new password at next sign-in".
Once these items have been configured, click Next. This takes you to the page where you set permissions. The recommended option is to add the user to a group, but as of now we don't have any group created — the group comes afterwards. So we will not set any permissions right now. On Set permissions, you don't have to do anything; simply click Next. Let me know once you're on Review and create.
On the Review and create page you can see the user details: the username, the password, and "Users must create a new password" set to No. The permissions summary is blank because we didn't assign any policy, and the tag values aren't important here, so we didn't set any. Review and create simply shows you what you've configured before you create the user. We're okay with all these parameters, so simply click Create user, and the user will be created.
After that, you get a screen showing the console sign-in URL — the link the IAM user will use — along with the username and password. Right now we don't need any of these details, so we can exit safely and create a second user. At the bottom right-hand corner there's an option that says Return to users list; click it, and you'll get a prompt: "Continue without viewing or downloading the console password?" You don't have to download anything right now — click Continue, and you'll be back on the Users dashboard.
Now we follow the process again for a second user: click Add users. For example, I'll name the second user Chris — you can put any name, your own or a friend's first name. Again, choose "Provide user access to the AWS Management Console" and choose "I want to create an IAM user". So: put the username, provide console access, and select that you want to create an IAM user. Let me know once you've done these three steps. What's next?
Set a custom password, and once you've set it, again uncheck "Users must create a new password at next sign-in".
By the way, console access means the user will access the root account via the web console. There are other methods too: the CLI (command-line interface), and third-party programs via APIs or SDKs, which can also be used to access the account. So there are multiple ways to access an AWS account: the web-based GUI, which is the Management Console we're using; the CLI, where you execute commands to access and deploy servers; and APIs/SDKs, where programs do it. When I say I want to give Management Console access, it means this IAM user will be able to access the root account by logging in to the Management Console. What's next?
You don't have to set any permissions this time either: click Next, skip the Set permissions page, click Next again on the Review and create page, and click Create user. So it's simply Next, Next, Create user. Then click Return to users list and click Continue.
Now, as a challenge, create a third user by yourself. Of course, I'll also create the third user and show you, but try it on your own first, with the same settings — you can even use the same password, but the username must be different.
That's great. Okay. In my case I click Add users and name the last user Kunal. Choose "Provide user access to the console", choose "I want to create an IAM user", set a custom password, and uncheck "Users must create a new password at next sign-in" — the same steps as before: choose a username, give the user console access, confirm you want to create an IAM user, set a console password, and uncheck the password-reset option. Click Next, don't set any permissions, click Next again, and click Create user. Click Return to users list, click Continue, and you're back on the Users dashboard. As of now you have three users created on this IAM dashboard. So that's done, and we've performed the first three steps successfully. Now let's do the last three steps.
Let's say these three users are my software developers. They need access to certain tools. Let me open the list of services in a new tab: if you go to "View all services" on the AWS Management Console, you'll see there's a set of developer tools — for example CodeStar, CodeCommit, CodeBuild, CodeDeploy, CodePipeline, X-Ray, CloudShell, Cloud9. These tools are used by developers to build applications, release new software, and so forth. You don't have to learn these tools — they're not in the scope of this course — but just to give you an idea: CodePipeline, for example, implements CI/CD (continuous integration / continuous deployment), creating the entire flow of how software is released, from the commits all the way to the deployment. So I want these three users to have access to some of these developer tools, so they can do basic development on the cloud. They should have access to some of these tools.
What we do next is go to User groups on the left-hand side menu, below Users. You can see the definition: a user group is a collection of IAM users; use groups to specify permissions for a collection of users. Click User groups and let me know once you're on that page.
What's next? Click Create group. It says "User group name — enter a meaningful name to identify this group." Name the group "developers", and then select all three users in the list. So: click Create group, enter a meaningful name such as developers, and select all three users. What I'm saying is that these three users will be part of the group called developers, and they'll all get the same type and level of access to my developer tools. Please go ahead and do that.
What's next, under attach permissions? So far you've entered a user group name and added the three users. The next section says "Attach permissions policies". I've listed four policies that I want to give to my IAM users. The first one is AWSCodeCommitFullAccess — CodeCommit is basically used to host private Git repositories.
You can search for the policy: there's a search box that says "Filter policies by property or policy name and press enter." For example, I type "codecommit" and press Enter, and I see a short list of matching policies. Please search for CodeCommit and let me know once you've filtered them out. Then select AWSCodeCommitFullAccess. If you look at the top, it shows "selected 1 out of" the total number of policies — even I was confused initially, since they changed this dashboard a few months back. That count means one policy has been selected, and whatever policies you've selected, it shows you that number. If it shows four for you because you selected four by mistake, no worries — don't change anything. The overall totals can also differ, because I have some additional customized policies in my account; that's fine. So you've added one policy; I guess we're on the same page.
Now you'll see there's an option that says Clear filters. Clear the filter and look for the next policy. Once the filters are removed, you're back on the page where you can filter policies. The next policy you're looking for is AWSCodeDeployFullAccess: either type "codedeploy full access", or just type "codedeploy" and pick it from the list. Select AWSCodeDeployFullAccess — this is basically a tool to release software or upgrade an application version on multiple hosts at the same time. Go ahead and do that, and type "done" once you're done.
Okay, done. Again, the top now says "selected 2 out of" however many policies you have. Click Clear filters again. The next policy is AWSCodeBuildDeveloperAccess — CodeBuild is for building and testing software; it's part of the software development life cycle. Type "codebuild", press Enter, select the developer access policy, and the counter will show three selected. Then go ahead and clear the filters.
Now, as a challenge, search for the fourth policy yourself and select it: AmazonS3ReadOnlyAccess. I'll show you, but do it by yourself first. The fourth policy I'm looking for is S3 read-only access, so I type "s3 read" as the keyword. Basically, I'm getting you comfortable with looking up information on the Management Console so you don't have to struggle later on. It's a skill you have to adopt: whatever information you're looking for — a service, a policy — you should be able to surface it by typing a few keywords. I select AmazonS3ReadOnlyAccess, the top says "selected 4 out of" the total, and I click Clear filters to clear them out.
And that's done. So far so good: we've grouped the people together under a group named developers, the three users have been selected, and we've attached four policies (you can attach up to 10 policies to a user group). Now click Create group at the bottom right-hand corner. If I go to User groups, my group has been created. I click on the group name — this is my group called developers, these are my three users — and if I go to the Permissions tab, these are the permissions that were added. All four policies — AmazonS3ReadOnlyAccess, AWSCodeDeployFullAccess, AWSCodeCommitFullAccess, AWSCodeBuildDeveloperAccess — are under the Permissions menu, and all four policies are applied to these three users.
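If you ever want to verify this outside the console — say over the CLI with aws iam list-attached-group-policies --group-name developers — the response should look roughly like this (abridged; the exact formatting can differ):

    {
      "AttachedPolicies": [
        { "PolicyName": "AWSCodeCommitFullAccess",
          "PolicyArn": "arn:aws:iam::aws:policy/AWSCodeCommitFullAccess" },
        { "PolicyName": "AWSCodeDeployFullAccess",
          "PolicyArn": "arn:aws:iam::aws:policy/AWSCodeDeployFullAccess" },
        { "PolicyName": "AWSCodeBuildDeveloperAccess",
          "PolicyArn": "arn:aws:iam::aws:policy/AWSCodeBuildDeveloperAccess" },
        { "PolicyName": "AmazonS3ReadOnlyAccess",
          "PolicyArn": "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess" }
      ]
    }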
The testing part is really easy; there's nothing very technical about it. We just need to understand one thing: when you want to test an IAM user, you either have to log out from the root account, because you can't have two login sessions on the same browser, or you open a private window or use a different browser. For example, in Google Chrome there's an option called "New incognito window": on the main browser window I'm logged in as the root user, and in the incognito window I can sign in as the IAM user. Or, since most of us have two browsers — Mozilla Firefox, Google Chrome, Microsoft Edge, whatever you have — you can use one browser for the root user and another for the IAM user. The crux of the story is that you can't have two simultaneous logins in the same browser session: either use an incognito/private window, or use a different browser for the IAM user to sign in.
Now, how will an IAM user sign in? What's the process? There are multiple ways to find the sign-in link, but if you go to the IAM dashboard and pay attention to the right-hand side of the page, you will see something called the sign-in URL. This is the link your IAM users would use to sign in. They could also go to aws.amazon.com and sign in as an IAM user, but then they'd have to enter the root account ID along with their username and password. Alternatively, you can just give this link to the IAM user, because the link already contains the root account ID. Every root account gets a unique account ID, and the format of the link is https://<your-account-ID>.signin.aws.amazon.com/console. So this link is what gets used. What we're going to do next is copy this link. First of all, please find it and copy it, and let me know once you've done so: go to the main dashboard on the left-hand side, and on the right-hand side, under the account details, you'll find the link. Just copy it.
Someone says they're not able to see the permissions. We've actually moved past that point already; I've shown you how to view the permissions. If you can't see them, you're probably still working on the group configuration. Okay. Now, copy that link. I'll use Google Chrome in this case and paste the link. It automatically picks up the root account ID; you put in the username and the password, click "Sign in", and you should be signed in on behalf of this IAM user. Note that this IAM user is accessing your root account, not a different account. Steve, in my case, is an IAM user who has accessed the root user's account by logging in with his own credentials. If you look at the top right corner, it shows the account ID of the root account along with the IAM user, Steve.
Now, if you go to any of the services, for example EC2, you'll get an API error message. Why? Because the policies we selected don't include any EC2 access. So if you go to the EC2 dashboard, or try to access any other service that isn't covered, you'll get that insufficient-permissions API error and effectively be kept out of the service. But if you go to "View all services" and open one of the developer tools, for example CodeCommit, you won't see any error message, because you have full access to CodeCommit. So log in and try accessing the services covered by your policies (CodeCommit, CodeDeploy, CodeBuild, and S3 in read-only mode) and you'll see no error messages; go to any other service that has no matching policy and you'll start getting the error messages. Please go ahead and navigate through these options once.
Someone says they forgot the user's credentials. In that case, go to the Users list, click on the user, and under the user open Security credentials. Under Security credentials you'll see "Console sign-in"; click "Manage console access", and from there you can set a custom password. So the path is: Users, then Security credentials, then Manage console access, and you'll automatically get the option to reset the password. If you can follow those instructions by yourself, do it now; I've already shown them. Otherwise, there's no need, the testing is optional. Fine. This completes the concept of users and groups. (If you couldn't find the dashboard, don't worry.) Let's move on to the next concept.
The next concept we shall discuss is policies. What's a policy? A policy is a JSON document that helps you define the permissions of IAM entities. If you look at the complete anatomy of such a document, the version, the statement, the effect, the actions, and the resources are all parts of that JSON document. As discussed, IAM policies are what you use to define permissions.
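To make that anatomy concrete, here is a minimal sketch of a policy document of the kind the console generates. The bucket name is a placeholder for illustration, not one from this demo:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Every policy follows this shape: a Version (the policy language version, not something you choose), and one or more Statements, each saying whether to Allow or Deny, which Actions, on which Resources.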
Let's understand the two types of policies you can leverage. You must have seen both types already, but I'll demonstrate and explain in depth what the difference between them is. Under policy types, there are two: number one, AWS managed, and number two, customer managed. So what are the differences? The first major difference is that an AWS managed policy is a default one. Default means it is there for your use: you don't have to configure it, it's already been created for you, and you just start using it. A customer managed policy, in contrast, is customized by the AWS customer. And who is the customer? We are: we use AWS for our deployments. So customer managed policies are customized by AWS customers; we can create our own custom-tailored policies. In short, AWS managed is a default policy, and customer managed is customized by AWS customers. Fine.
AWS managed policies are useful in those cases where you're not looking for any kind of customization. They're there by default, but, as I'll show with a few examples shortly, they don't provide flexibility or customization; they aren't tailored to special business requirements. They're created for you, and you simply use them as they are: no flexibility, no customization. Customer managed policies, on the other hand, give you flexibility and customization options.
So there will be a few scenarios where an AWS managed policy may not be suitable for you. Of course, there are more than 800 AWS managed policies and you should use them where they fit, but consider the question someone in the session raised: suppose I want a user to get complete access to the EC2 dashboard, except that the user must be prevented from terminating instances. The user should get full EC2 access apart from terminating instances. That's a special requirement, and it cannot be fulfilled by an AWS managed policy alone, because AWS managed policies are fixed. In such cases you use customer managed policies: the ones you build around your own requirements and needs, which may change over time. See, there are more than a million customers on the AWS platform, and it's simply not practical for AWS to customize its managed policies for each of those million-plus customers, because every business has special needs and requirements that change over time. So Amazon says: we give you 800-plus managed policies which we think you'll use a lot, but when you have to set up your own infrastructure and you have special needs to take care of, we give you the tools to create a policy by yourself, on your own. That's when you create a customer managed policy. Take the example that was asked about: AmazonEC2FullAccess is a managed policy, and full access means the user can even terminate an instance. But I want the user to have full access except for terminating instances. In that case, I need to create a customer managed policy that grants EC2 full access while excluding the terminate-instances action, and apply it to my users or groups. Fine. That's where customization comes into play.
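A minimal sketch of what such a customer managed policy could look like, assuming you express the exclusion as an explicit deny (in IAM evaluation, an explicit Deny always overrides an Allow):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "ec2:TerminateInstances",
      "Resource": "*"
    }
  ]
}
```

The first statement grants everything EC2 offers; the second carves out termination. Because Deny wins over Allow, the user can do anything in EC2 except terminate instances.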
Now let's take a scenario. I share documents with you through Google Drive links. What is Google Drive? It's an online backup or storage service. Google Drive, Dropbox, Microsoft OneDrive: these are all online backup options, virtual storage, you could say. When I put something in Google Drive, I don't necessarily have to keep the same data on my computer's local disk. I can keep some documents in the cloud, share the links with users, and share the data that way. So we use these tools as online backup storage. S3 is also one of these online backup storage options, one that Amazon has built on its own. If you go to the AWS console, there's a category called Storage, and under it an option called S3. S3 stands for Simple Storage Service. It's an online backup storage service that you use for your business, and it lets you put in an essentially unlimited amount of data that can be accessed from anywhere over the internet. There's no limitation: you can store any amount of data and reach it over the web. So it's the online backup storage service that Amazon gives you. Now, S3 has two major building blocks.
Building block number one is the S3 bucket. Imagine a container: that is an Amazon S3 bucket. So what is an S3 bucket? It's a fundamental container. What do we mean by fundamental container? It's a storage unit that holds your data. The data you put inside the bucket is called an object, and an object is the fundamental entity you store in a bucket. An object is simply a file: it can be a PDF, a document, a spreadsheet, a PPT, a PNG image, a JPEG image, an HTML file, a CSS file, a Python file, a Terraform file, a JSON file, a YAML file, an AVI file, an MP3, an MP4, and so on. The data can be of any format: PDF, DOCX, XLSX, PPT, PNG, JPEG, HTML, CSS, Python script, Terraform file, whatever the format is, it does not matter. You simply put the object inside a bucket and store it, and you can store any amount of data there. AWS has not put restrictions on total storage, so the storage capacity is effectively unlimited.
One more thing: an account (and by account I mean the root account) can have a maximum of 100 S3 buckets by default. This is a soft limit; it can be raised through a service quota request, but 100 is what you start with. Now, let me show you some of the managed policies around S3. We have already used one of them, the S3 read-only access policy. Let's take an example.
Let's go to Policies. These are my policies; I type in "S3", and under S3 I have these policies, for example S3 full access and S3 read-only access. These are the two main policies that target the S3 service. Full access gives complete access: the user can put data in and take data out. Read-only means the user can only read the data inside the bucket; they can't add or remove anything. So, S3 full access and S3 read-only access. What does that mean concretely? Let me list some of the actions under each. Under the Amazon S3 full access policy, one of the allowed actions is the List action. What do you mean by a List action? Here's a simple example: if I go to my Google Drive, I can see a list of files: first file, second file, third file. If I go to the main Drive folder, I can see a list of folders. That is a List action: I can list the folders and the files within each folder. Consider an S3 bucket as a folder and the objects as the files inside it.
So when I go to S3, I can see a list of buckets, 26 buckets in my case. Buckets are the containers for the data, a kind of folder. If I open any of the buckets, I can see a list of the objects, the data inside the bucket. That is the List action. Okay, understood. Now suppose I want to get the data out of it.
Say I want to download this linux-mastery.pptx file or that saved terminal output. I select the file, and you can see the Download option. It's the same as in Google Drive: I right-click on a file and click Download. Similarly, in the bucket I select an object and choose Download. This downloading, this act of getting access to the data, is called the Get action.
Now, the Amazon S3 read-only access policy allows both of these actions: List and Get. S3 full access, however, allows two more important actions. One is the Put action: I can put data into the S3 bucket. If I go back to the bucket dashboard, you can see an option called Upload. That means I can upload a file to this S3 bucket: I go to Upload, click "Add files" (you don't have to do this right now), add a file, and choose Upload. Uploading data to the S3 bucket is the Put action. And if I delete data from the bucket (for example, I upload a file, click Close, select the RTF file, click Delete, and type in "permanently delete"), that is the Delete action. So full access gives me the List, Get, Put, and Delete actions. Amazon S3 read-only access allows only two of these: List and Get. Listing items, getting them out, putting them in, and deleting them all come under full access; read-only access only lets you list the items and get them out. That's it.
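To make the contrast concrete, here is a hedged sketch of a read-only-style statement. The actual AWS managed AmazonS3ReadOnlyAccess policy is close to this, while full access simply replaces the action list with "s3:*", which brings in the Put and Delete actions as well:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": "*"
    }
  ]
}
```

The wildcards matter: "s3:Get*" covers GetObject, GetBucketLocation, and the rest of the Get family, and "Resource": "*" means it applies to every bucket in the account, which is exactly what our upcoming scenario does not want.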
Fine. Let's build a scenario on top of this. The scenario: suppose I have 100 buckets under the root account. Out of these 100 buckets, there are two that I want to target and work on: bucket one and bucket two. I want an IAM user named Steve to get read-only access to bucket one, full access to bucket two, and no access whatsoever to the remaining 98 buckets (or however many other buckets you have; absolutely no access to the rest). Based on this scenario, we will perform a hands-on and create a customer managed policy through which the user gets read-only access to one bucket, complete access to the second bucket, and is completely denied on the rest.
Just a quick info, guys: Intellipaat brings you an executive post-graduate certification in cloud computing and DevOps in collaboration with iHub DivyaSampark, IIT Roorkee. Through this program, you will gain in-demand skills like AWS, DevOps, Kubernetes, Terraform, Azure, and even cutting-edge topics like generative AI for cloud computing. This 9-month online boot camp features 100+ live sessions from IIT faculty and top industry mentors, 50+ real-world projects, and a two-day campus immersion at IIT Roorkee. You also get guaranteed placement assistance with three job interviews after entering the placement pool. And that's not all: this program offers Microsoft certification, a free exam voucher, and even a chance to pitch your startup idea for incubation support of up to 50 lakh rupees from iHub DivyaSampark. If you are serious about building a future in cloud and DevOps, visit the course page linked in the description and take your first step toward an exciting career in cloud technology.
First things first, the task says: create and attach your first customer managed policy. The complete workflow breaks down into five basic steps. Step one is to create two Amazon S3 buckets, which makes sense, because the scenario I've jotted down needs two buckets. Then, based on those two buckets, we create the customer managed policy; this is the important part, where we'll spend most of our time and effort. Then we take an IAM user, attach the customer managed policy to that user so we can check whether the user gets exactly the type and level of access we intend, and finally we test the entire implementation.
In step number one, we have to create two Amazon S3 buckets. Creating buckets is one of the easiest parts. I'll show you the steps, 1 through 7. You have to create the buckets with the default values, which means that if a step isn't mentioned in the document, you don't have to do it; follow only the steps that are in the document. I've excluded the parameters that are advanced in nature, because right now we don't need to go to that level. You just need two buckets for this hands-on. So search for S3: once you type it in, under Services you'll see "S3: scalable storage in the cloud". Click on it, and it takes you to the S3 dashboard. Notice that the S3 dashboard is global in nature. Global doesn't mean that a single bucket spreads across all regions; it means the interface is global. If you look at my bucket list, buckets that belong to separate regions can all be seen on a single page; you don't have to switch back and forth between regions to access buckets that live in different regions. Fine. Now let's do one thing: I'll show you
how to create a bucket. One thing to be aware of: we'll be adding prefixes (or, really, suffixes) for our convenience, so we can easily differentiate between the two buckets. I click on "Create bucket", and it asks for a bucket name. First things first, the bucket name has to be unique: no two buckets can have the same name. This is an important fact. Bucket names must be unique, meaning no two buckets across the entire AWS platform can share a name, just as no two people have the same Gmail ID or the same phone number. Also make sure you use lowercase only: no uppercase, no capital letters are allowed. The name can be alphanumeric, a combination of numbers and letters, and only two special characters are allowed. Which two? The hyphen and the dot. So keep these four rules in mind. For example, if I put the bucket name in as "test" and try to create the bucket, it will throw a message that a bucket with the same name already exists.
Fine. Now you have to think of a unique bucket name. What you can do is use your first name plus something like "iam-policy-demo". And since we're creating two buckets, we'll add a distinguishing suffix: "first" (or "one", or "1") for the first bucket and "second" (or "two", or "2") for the second; it's entirely up to you. For example, I put in rohan-iam-policy-demo-first, and the second bucket will have the same name except ending in "second" (I won't create it just yet; this is just to show the pattern). That's how you differentiate between the two buckets. Go ahead and choose a name, making sure you append "first" (or "one") at the end. You can copy this bucket name now, because we'll paste it for the second bucket and only change the suffix; I right-click and copy it, since the second bucket I create will have the same name with "second" at the end instead. For your bucket, please assign a name of the form your-first-name-iam-policy-demo-first. I suggest this pattern because some people struggle to choose a bucket name (names are globally unique, and whatever you try may already exist), and putting your first name at the front keeps it simple and avoids collisions. Okay, let's go ahead and do that.
Now, buckets are contained inside a region. You shouldn't be bothered about the region as of now, because we're not doing any advanced hands-on that has to be confined to a specific region. Just as an FYI, you can choose the region of your choice; whichever region is selected, don't worry about it. Simply go to the bottom, where there's an option called "Create bucket", click it, and your bucket gets created. (If some regions show as disabled on your account, that's okay; don't be bothered by it.) Go ahead and click "Create bucket", and type in "done" once you're finished. In my case, my first bucket has been created. Are we done with this? We need both buckets to get started.
Based on these two buckets we will configure the customer managed policy, so let's perform the same operation again. Click "Create bucket" at the top right-hand corner. Now paste the same bucket name, but remove "first" at the end and put "second"; if you used "one" for the first bucket, put "two" now. This will help us easily differentiate between the two buckets. Of course the bucket names remain unique; for convenience we keep the same base name and just change the suffix. Once you've put "second" for the second bucket, go straight to the bottom and click "Create bucket". Please do that, and let me know once your buckets have been created. As you can see, I now have two buckets: one ends with "first" and the other with "second" (prefix or suffix, whatever you want to call it; I've appended them at the end of the names). Done: we've fulfilled the first prerequisite, meaning we've performed steps 1 to 7. Are we all done? We now have the two buckets in front of us that we'll be using as the target points for creating our policy. Okay, great. What's next?
In the next step, we start working on the policy configuration. This is where I have to go very slowly, because it's the very first time you're getting engaged in creating a customer managed policy. The process is quite simple; once you've performed it once or twice, you become familiar with the whole flow. So let's go to step number two: create a customer managed policy. What all of you have to do is keep this S3 dashboard open (don't close the page) and open IAM in a new tab: type in IAM, right-click, and open it in a new tab. Why a new tab? Because you'll need to copy and paste the S3 bucket names shortly, and having both open makes that much easier. So I have the S3 dashboard open in one tab and IAM open in a second tab. Please go ahead and access the IAM dashboard; for convenience, open it in a different tab. Type in "done" once you've finished.
Now, once you're on the IAM dashboard, look at the IAM resources: you'll see a count of policies. That count shows the number of customer managed policies you have; in my case 30 customer managed policies have been created, while you may have 0, 1, 2, whatever, and it makes no difference. You can click that count, or click "Policies" on the left-hand side; either way you land on the same Policies page. So once you're on the IAM dashboard, click "Policies", and let me know once you're on the Policies page under IAM. Just type in "done" when you're there.
I'm on the Policies page right now. Again, what's a policy? It's an object in AWS that defines permissions. And there are the two types we discussed: customer managed and AWS managed. We're going to create a customer managed policy based on the two buckets we created. I click "Create policy" at the top right-hand corner, and a page opens in front of me: "Specify permissions", with the policy editor. Let me know when you see it; just type in "done". It says "Select a service" and shows a grid of services: Auto Scaling, CloudFront, EC2, IAM, Lambda, RDS, S3, SNS, and so on. You need to click on S3, because the service this customer managed policy will be configured for is S3; based on the scenario we're working on, we need to create the policy for S3 buckets. Click on S3 and it takes you to the S3 configuration page; let me know once you're there. I'll show it one more time in case some people got confused: click "Create policy", you'll see the list of services in front of you, the service you're looking for is S3, and clicking it is where you start applying the configuration and implementing the steps based on our scenario. I hope you're all clear on the scenario. Now, what's next? The editor asks which actions should be allowed. All actions? The answer is no. First things first,
we'll be targeting the bucket whose name ends with "first" (or "one"). My intention is this: the user should get read-only access to the first bucket, and for the second bucket, the user should get full, complete access. We'll target the first bucket first. For the first bucket, we want to allow the read-type actions. If I go back to the policy editor, under the access level you'll see List and Read. You have to expand List and choose all List actions; please go ahead and do that. Once you've chosen all the List actions, go to Read and choose all Read actions, and let me know. Within Read you'll see the Get actions; by including them, we make sure we can get data out of the bucket. So: select all the Read and List actions. Done?
To recap, you have to select, number one, all the List actions, and number two, all the Read actions, because right now we're targeting our first bucket. Now scroll down. We are not selecting Write, Permissions management, or Tagging; those are excluded. We've already chosen our actions: List and Read (and within Read, all the Get actions, so we can read the data and fetch it out of the bucket). If you want, you can click "Collapse all", which collapses all the option groups so the page becomes neat, clean, and easy to navigate. The main thing is to ensure you have chosen all the actions under List and Read. Fine. The next thing is to go to Resources.
The first resource you should target is the bucket. You have to apply the permissions at two levels: the bucket level and the object level. (A bucket, remember, is the container.) Next to the bucket entry, it says something like "Specify bucket resource ARN for the GetBucketLocation and 48 more actions". It's saying: the actions you've chosen are fine, but please specify which bucket you're targeting. You might have 100 buckets in your account, so which specific bucket are you talking about? There's an option that says "Add ARN". What is an ARN? I'll tell you in a moment; for now, click "Add ARN" and let me know once you're there. A dialog opens asking for the bucket name; type in "done" once you see it. I'll show it one more time in case anyone is confused: there's a separate Resources section, and we have to target two resources, the bucket and the object. Under bucket it says "Specify bucket resource ARN" with an "Add ARN" option; I click "Add ARN", and this dialog is what I get. I could type the complete ARN text in directly, but we'll go with the visual editor. Notice the "Any bucket name" option with a star. What does the star mean? It means all buckets, and we don't want that; we'll specify our own bucket name. If I chose the star by mistake, the policy I'm about to create would apply to all my buckets, which is not applicable in our case. So leave it unselected and specify the exact bucket name: go back to the bucket dashboard, copy the first bucket name as-is, and paste it into the bucket name field. The Amazon Resource Name of the bucket then populates automatically. ARN, by the way, stands for Amazon Resource Name; it's the special identifier that Amazon assigns to your buckets and other resources.
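For reference, the two resource ARNs this wizard ends up producing for the first bucket look like the following (shown with my demo bucket name; yours will differ). The bare ARN identifies the bucket itself, and the "/*" form, which we add in the object step below, covers every object inside it:

```json
[
  "arn:aws:s3:::rohan-iam-policy-demo-first",
  "arn:aws:s3:::rohan-iam-policy-demo-first/*"
]
```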
Once you've entered the name, what next? Click "Add ARNs"; you can see that option at the bottom right-hand corner. Go ahead and do that. Once you click it, your first bucket's ARN, its Amazon Resource Name, populates in the list. Are we done with this? Right: you have now targeted your first bucket. You're saying that this read-only access is being defined for the first bucket. What's next? Scroll down, because now you have to target the objects. What's an object? It's a file, the data you put inside the bucket. As you scroll down, the last item shown in the list is "object". It says "Add ARN to restrict access". Click "Add ARN" next to the object entry, and a dialog appears; let me know once you're on it. One more time: the object is the last item in the list, and you have to click "Add ARN" right next to the object name. Fine.
Once you're on this dialog, you have to copy and paste the first bucket name again. Please do that. Next, under "Resource object name", check "Any object name". What does that mean? It means this policy applies to all the objects in my first bucket; the star, the wildcard notation, stands for all objects. Note the difference: the previous step targeted the specific bucket, while this step targets the objects inside the bucket. Once I've mentioned the bucket name, it asks which objects inside the bucket come under this policy. When you apply permissions to restrict access, you apply them at the bucket level first and then at the object level, because you may have some objects you want to restrict and others that need to be left out of the restriction. Fine.
Have you done this? Once again: copy and paste the first bucket name, and under "Resource object name" check "Any object name", which means all objects in the first bucket are included; the policy we're creating will be applicable to every object under the first bucket. So any data, any folder, any file inside this bucket comes under this policy. Are you done with this? Okay, go ahead and click "Add ARNs". You will see that under the object entry a single line appears: the bucket name, a forward slash, and the wildcard notation, the asterisk. This means the policy now applies to all the objects and folders inside this bucket. Done. So, to reiterate what we have done: we chose the List and Read actions (all the List and Get actions), and those actions apply to the bucket whose name ends with "first" and, inside that bucket, to all the objects it may contain. Okay, are we clear on this? What's next?
You'll see an option that says "Add more permissions"; you have to click on it. Once you do, it again says "Select a service". Which service will we select now? S3 again. So: click "Add more permissions" and choose S3. Let me know once you're on that page. Now we target the second bucket, and for the second bucket we need to allow complete access. So under the access level, under the actions, what should we select? All. Choose "All S3 actions", and all the actions get selected automatically. I'm saying that for the second bucket I want to include every action: List, Read, Write, Permissions management, Tagging; nothing is excluded. Please do that and let me know when it's done. What's next? Scroll down, and now, as a challenge, go ahead and specify the bucket ARN for the second bucket by yourself. I'll show you the solution afterwards, but I'll give you a minute to try it on your own. If you're confused, that's okay; after a minute I'll show you exactly how to do it, because you did the same thing in the previous step for the first bucket. Just try it on your own for the second bucket.
Here's the solution. I click "Add ARN" next to the bucket entry, and I copy and paste the second bucket name. Simple. One more time: I click "Add ARN" for the bucket, and under "Any bucket name" I paste the bucket name; the other fields populate automatically, and you don't have to change them. Click "Add ARNs". Done. I'll give a few more seconds in case someone hasn't finished. Now scroll down, and as a challenge, go ahead and apply the object-level entry as well. I'll give you a minute; if you're unable to do it, that's okay, I'll show you. The solution: click "Add ARN" under the object entry, paste the second bucket name under "Any bucket name", and under "Any object name" check the box. So under "Any bucket name" goes the second bucket name, and under "Resource object name" you choose all objects: I'm saying that any object under the second bucket shall be included as part of this policy. Are we done with this? Okay, go ahead and click "Add ARNs", and that's done. Fine. Now we have targeted both buckets.
Click "Next". On the "Review and create" page, the first thing is to enter the policy name. I can type in any random policy name; it doesn't matter. Just put in a policy name; you don't have to type anything in the description. You can choose anything: "demo-policy", "my-customer-managed-policy", "my-first-policy", whatever you like. Once you've done that, scroll down and it will show you the list of permissions you selected, the permissions defined in the policy. That's fine as it is; you could click "Edit" and make corrections before creating the policy, but we don't need to. So: put in a policy name, no need to add or change the description (that's purely your choice), and go ahead and click "Create policy" at the bottom right-hand corner. Your policy gets created, and at the top you'll see the message that the policy has been created. So we have now performed two steps: we created two buckets, and based on those two buckets we configured a customer managed policy. Fine. Are we all clear so far? Are we on the same page?
Someone asked how policies are defined in real time, with JSON code or through the console. What we're doing right now is a replica of real-world work, and the console generates the JSON code for you: if you click on the policy and then click "JSON" on the right side, you'll see the JSON version of it, the policy you just built in the visual editor automatically rendered as a JSON document.
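As a hedged reconstruction (your bucket names will differ, and the editor expands the read-only selection into a longer explicit action list), the generated document is equivalent to something like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyOnFirstBucket",
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": [
        "arn:aws:s3:::rohan-iam-policy-demo-first",
        "arn:aws:s3:::rohan-iam-policy-demo-first/*"
      ]
    },
    {
      "Sid": "FullAccessOnSecondBucket",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::rohan-iam-policy-demo-second",
        "arn:aws:s3:::rohan-iam-policy-demo-second/*"
      ]
    }
  ]
}
```

Reading it back: statement one allows only List and Get (read-only) on the first bucket and its objects; statement two allows every S3 action on the second bucket and its objects; and since nothing grants anything on any other bucket, IAM's default-deny behavior leaves the user implicitly denied everywhere else.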
So, what's the next thing we're going to do? You have to add a user: take an IAM user and attach the customer managed policy to them. Go back to the IAM dashboard and click "Users" on the left-hand side. You should still have the three users we created earlier in front of you (I assume no one has removed them), so yes, we have the users listed out. Now pick any one of these users; for example, I click on my first user, Chris. One more time: you have the list of three users in front of you, and you have to click on any one of them, which takes you to that user's details page. This saves us time because we've already created these users. Once I click on one of the three users, I can see the user's details: the summary, the ARN of the user (even a user configuration has an Amazon Resource Name), and the date it was created. Right now this user is part of a group. On the details page you'll see a bunch of tabs laid out: Permissions, Groups, Tags, Security credentials, and Access advisor. You have to go ahead and click on "Groups".
Once you're on the Groups tab, you have to check the group name and click "Remove". Let me connect all the dots: you go to the user details, go to Groups, and check the group; once you've checked it, the next thing is to remove it. The intention is to remove this user from the group so that there's no conflict in terms of policies. Remember, the group has the Amazon S3 read-only access policy attached, and I don't want this user to be part of any group for now, because I want only the customer managed policy we just created attached. So go ahead and click "Remove". It asks, "Remove user from the group developers?"; click "Remove", and type in "done" once you're finished. The dashboard will then show the message "This user does not belong to any groups." Are we all done with this? Once more: pick any one user from the list, click the user name to open the user's details, go to Groups, select the group membership, and click "Remove". It asks whether you're sure you want to remove this user from the group called developers; click "Remove", and you'll get the prompt that this user doesn't belong to any groups. Note: perform this action on only one of the three users, not on all three.
Are we all done with this? Now, next to Groups there's a tab called "Permissions". Go to Permissions, and on the right-hand side there's an option called "Add permissions". So: go to the Permissions tab, and within it, on the right-hand side, click "Add permissions". It offers "Add permissions" and "Create inline policy"; choose "Add permissions". On the next screen, the option on the right-hand side says "Attach policies directly"; you have to click that. Essentially we're saying that we don't want to add the user to a group; we're going to attach the policies directly. Please go ahead and do that. Once you're on "Attach policies directly", you'll see a filter that says "All types". In the Types dropdown, choose "Customer managed"; you'll then be able to see the policy you created, and you have to select it. So the path is: Add permissions, then Attach policies directly, then Type: Customer managed, and the policy you created shows up automatically; select it.
Please go ahead and do that. Done. So, to recap what we've done: we removed the user from the group so that there's no conflict in terms of policies, because the group's policy gave read-only access to all buckets, and we want to be sure the custom policy is what governs this user. What's next? Click "Next". It shows your username (Chris, in my case) and the permission you're going to add. Now click "Add permissions", and the permission is added. You can now see that the user we're working on has this customer managed policy attached; we have given this special access to this specific user. Are we all done with this?
The next thing we're going to do is test the user: we have to test the user's access, which means logging in on behalf of this user. Before that, one note, since the question came up: the group we created earlier has the S3 read-only access policy attached; if you go to that group's Permissions right now, that's the permission we added. We removed the user from the group to make sure there's no conflict, because we created a special policy targeting two specific buckets, while the group's policy grants read-only access to all buckets. To be precise about how IAM handles this: permissions from policies attached directly to a user and permissions inherited from the user's groups are evaluated together. The effective access is the union of all the Allow statements, and an explicit Deny anywhere overrides everything else; there is no rule that user-level permissions simply outrank group-level ones. So if we had left the user in the group, the group's read-only policy would still have granted read access to every bucket, and our test of "no access to the other buckets" would not have behaved as expected. Removing the group membership isn't the only possible approach, but it keeps the test clean and unambiguous.
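As an aside, if you wanted to guarantee "no access to the rest of the buckets" even when other Allow policies are attached, one hedged option (not part of this demo's click-through) is an explicit Deny with NotResource, since Deny always wins:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "NotResource": [
        "arn:aws:s3:::rohan-iam-policy-demo-first",
        "arn:aws:s3:::rohan-iam-policy-demo-first/*",
        "arn:aws:s3:::rohan-iam-policy-demo-second",
        "arn:aws:s3:::rohan-iam-policy-demo-second/*"
      ]
    }
  ]
}
```

Treat this as an illustrative sketch rather than a drop-in: because it denies every S3 action whose resource isn't one of the two buckets, it also blocks account-wide calls such as listing all buckets, which you may or may not want.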
Okay, now, as a challenge, you have to go ahead and log in on behalf of this IAM user. You can use an incognito window, a private window, or a different browser, because on the same browser session you can't have two logins. (Someone asked whether this matches real-time behavior: it's not about real versus test environments; IAM evaluates the permissions the same way either way. Whether you make the user part of a group is up to you, and both approaches can work: remove the user from the group, or attach the additional permissions on top, as long as the two permission sets don't conflict with what you're trying to achieve.) Okay, I'll go ahead and log in on behalf of the user: I go to my dashboard, copy the sign-in URL from the right-hand side, and open a different browser, Google Chrome in my case, to log in.
One takeaway, guys: when a user has permissions applied at the group level and you also assign a different policy to the user directly, IAM combines them; the effective access is the union of both sets, and an explicit Deny overrides any Allow. So always make sure the group permissions and the directly attached user permissions don't conflict with what you intend. Now I log in as the user. Oops, I put in the wrong username; hang on one second while I change the user ID. There: I put in Chris, click "Sign in", and I've logged in on behalf of this username, Chris.
Now, as this user, if you try to navigate to any service you haven't been given access to, for example EC2, you'll automatically get the API error message, because we haven't granted access to any service apart from S3. If I go to the homepage on behalf of this user and choose S3, the user will be able to see the two buckets. (My account has many buckets, so I'll filter down to just these two so that there's no confusion.) You'll be able to list these two buckets. Now, from the buckets list, click on the first bucket. One caution here: make sure you're on the IAM user's session. Sometimes people get confused, switch back to the root account, and try to do the testing there; I've seen it happen. You have to be logged in as the IAM user and access the bucket list from the IAM user's session, not from the root account's dashboard. So, on behalf of the IAM user, access the bucket list and click on the first bucket, the one whose name ends with "first" (or "one"). Once you're in the first bucket, you'll see an option on the right-hand side called "Upload". Click "Upload", then click "Add files". For example, I click "Add files" here, and now you can choose any file from your computer. So, from the bucket, click "Upload", choose "Add files", pick any file from your computer, and go ahead and upload it.
automatically the file name shows up that this file you're trying to upload. Fine.
Simply scroll down and click on Upload. There are two ways to upload: either you add files like this, or you drag and drop the item into the list and click on Upload. Either way, you will see that it shows Access Denied, because for the first bucket this user has no write permissions. Try uploading anything from your machine to the first bucket and verify that the upload fails. Make sure you're not doing this on the root account; that's a very common mistake. If you look at the top right-hand side, it shows the account ID and the IAM user. So on behalf of the IAM user, if you access the first bucket and try uploading something, it should reject you. That's what we want to confirm. If the upload succeeds, there are two likely causes: number one, you're trying it on the root account; number two, the policy you configured was not set up properly.
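If you prefer, you can run the same check from the AWS CLI instead of the console. This is just an illustrative sketch, assuming the IAM user's access keys are configured locally and using made-up bucket names:

    # upload to the first (read-only) bucket - this should fail with AccessDenied
    aws s3 cp test.txt s3://example-bucket-first/

    # upload to the second (full-access) bucket - this should succeed
    aws s3 cp test.txt s3://example-bucket-second/

Seeing AccessDenied on the first command and a successful upload on the second confirms the policy behaves the same way no matter how you access S3.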
Okay, what's next? You need to go to the second bucket. Click on Buckets at the top to go back to the bucket list, click on the second bucket, and now try to upload something. I click on Upload, and this time it succeeds, because based on the policy the user has no access for putting objects into the first bucket, but has complete access to the second bucket. Right? Now, to test it further, click on Delete and try to delete the object from the second bucket; it should be successful. Type in "permanently delete" and click on Delete objects. You should be able to delete your object or data from the bucket
itself. Okay. So overall, on behalf of this user, you logged in, you tried to upload something to the first bucket and got Access Denied, which was expected; you performed the same operation on the second bucket and it succeeded; and you were also able to delete the object you uploaded. This puts our policy to the test: we saw how we configured it and how it comes into effect. Okay, let's discuss
the concept of IAM roles. Why exactly do we need roles? We've covered the purpose of users, groups, and policies, the other components of IAM. But what is the purpose of roles? To provide confined permissions, one specific kind of access. Who wants to give inputs? Someone says roles are used for temporary access; that's a good answer, and exactly what I was looking for, but let me elaborate on what "temporary access" really means. We want to give access for a specific task only, and that's why we need roles. Roles use something called the Security Token Service, which gives temporary access for a specified duration; I'll come back to that. In AWS we use roles for multiple reasons, but there are two main ones. I'll elaborate on those two reasons, give you some examples, and do a hands-on based on one of the examples so that you clearly understand the relevance of roles. Okay. Roles are required in those cases when
you need two resources to communicate with each other. Let me give you one example. We discussed IAM users, right? IAM users are people, human beings: the IT and software folks who interact with resources like EC2 instances, S3 buckets, load balancers, and auto scaling groups. They do deployments on a daily basis; they provision resources, configure them, and terminate them when they're not required. So we, as users, interact with these different services. For example, we focused on the scenario of an IAM user: Steve needs to communicate with two buckets, and based on a customized policy he gets read-only access to bucket one and complete access to bucket two. As an IAM user, it's his responsibility to communicate with the resources and perform his daily job, and whatever permissions I've assigned to him determine what actions he can perform. So over here there's an IAM
user interacting with AWS resources, which include my buckets. Okay. Now suppose I don't want a user in the picture, or rather, the user is still there, but I also want the buckets themselves to communicate. In the scenario we saw, the user communicates with the buckets; but now I want bucket one to communicate with bucket two. There should be some kind of communication or synchronization between them. In this case, bucket one will need an IAM role so that it can communicate with the second bucket.
The main purpose of an IAM role is to enable resource-to-resource access. If one bucket wants to communicate with another bucket, or an instance wants to communicate with a bucket, there's no human being involved. In those cases we use the concept of an IAM role. I'll list its functions and give you some examples; try to understand those examples. A nice thing about IAM roles is that when you configure resources, some roles are created automatically; you don't have to create them every single time. But of course, from the exam point of view, the certification point of view, the interview point of view, and for enhancing your own knowledge, it is important that you understand their relevance. Okay.
Let's understand the concept of an IAM role. There are two main reasons why we use roles. One is that a role facilitates resource-to-resource interaction; by resources I mean the services we use in AWS: S3 buckets, load balancers, auto scaling groups, database instances, EC2 instances, and so on. The second is something called cross-account access. Cross-account access is out of the scope of this course because it's a fairly advanced topic, discussed in the SysOps and Professional certifications, so I'll give you an example of it but we won't do a hands-on for it; we'll do the hands-on for the first use case. Let me come to the first main point: a role facilitates resource-to-resource interaction. This is a scenario you need to understand. Resource-to-resource interaction means that in AWS two resources need to communicate for some reason: maybe synchronization, passing data along, giving one service access to stop another service or send it logs, or having one service's configuration changed automatically by another service. That's what we call resource-to-resource interaction. I'll give one example here. The example
is this: let's imagine I have end users, my customers, and these customers upload their images to a web application. For example, you have deployed a Python web application on an EC2 instance. The users upload images to the Python app running on the instance, and the system pushes those images into an S3 bucket for storage. We already discussed what an S3 bucket is: it's a fundamental container, an online storage service which can hold your data. Right? So this, for example, is my Amazon S3 bucket. Now I want the instance to put those images into this bucket. You must have seen that whenever you access a PPT or PDF on the Intellipaat website, it is being served out of a bucket. Generally, companies store their content in these buckets; so as a user you're just uploading some images, for example while interacting with a social media platform like Instagram or Facebook, and those images end up stored inside a bucket. Now, in this setup
there are two resources interacting, and if the instance has to download the images back out of the bucket, then the instance also needs access for that. So here there are two resources communicating: the instance and the bucket. The instance is trying to communicate with the bucket so that it has the ability to put images into it or get images out of it. For this reason, I have to create an IAM role and attach a policy to it, for example the policy AmazonS3FullAccess. I link the policy with the role, just like we link policies with users and groups, and then I attach the role to the instance. Once I attach a role carrying a policy that gives complete access to S3, the instance will have complete access to put or get images from the S3 buckets.
So who is getting the permissions here? Is it a user? No, it is the instance that is getting the permissions. If there were a user, a human being, involved, I could have created an IAM user instead. But there is no person involved here: the instance is a resource that wants to communicate with another resource, an S3 bucket. For this type of interaction, the instance needs permissions, and for that I create a role, attach a policy to it, and then link the role to the instance, so that the instance has the permissions to perform actions against the bucket.
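For reference, the same role setup can also be scripted with the AWS CLI. This is a hedged sketch with hypothetical names; when you use the console, the instance profile in the last two commands is created for you automatically. The trust.json document is the trust policy we'll look at in a moment:

    # create the role with a trust policy that lets EC2 assume it
    aws iam create-role --role-name ec2-s3-access-role \
        --assume-role-policy-document file://trust.json

    # attach the AWS managed policy granting full S3 access
    aws iam attach-role-policy --role-name ec2-s3-access-role \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

    # wrap the role in an instance profile so it can be attached to an instance
    aws iam create-instance-profile --instance-profile-name ec2-s3-access-profile
    aws iam add-role-to-instance-profile --instance-profile-name ec2-s3-access-profile \
        --role-name ec2-s3-access-role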
Okay, I'll give you another example. Let's imagine this is my main bucket in North Virginia. I want whatever data is put inside this main bucket to be synchronized, replicated, into a different bucket, which may be in a different region, for backup purposes, to make my data more redundant. So I have another bucket in Mumbai. I want the entire data set to be synchronized and replicated from the main bucket to this secondary, backup bucket, so that I have a copy of the data saved in Mumbai as well. In that case, the main bucket needs an IAM role attached. If I don't attach an IAM role to it, its data can't be synchronized to the second bucket. In order for my main bucket to sync its data to another bucket in a different region, the main bucket also needs certain access. Fine. So what is the moral of the story?
What's the crux of this entire conversation? Roles are assigned to resources so that they can communicate with each other, and we allow them a certain level of access, whether it is replication of data or performing actions such as putting or getting images. I think this point is clear to you, and we'll do a hands-on on this scenario.
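As a rough sketch of how that replication setup looks on the CLI (bucket names are hypothetical, and note that S3 replication requires versioning on both buckets plus a replication.json file naming the IAM role and the destination bucket ARN):

    # versioning must be enabled on source and destination first
    aws s3api put-bucket-versioning --bucket main-bucket \
        --versioning-configuration Status=Enabled

    # apply a replication configuration that references the IAM role
    aws s3api put-bucket-replication --bucket main-bucket \
        --replication-configuration file://replication.json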
Fine. Now let's take up something called cross-account access. Cross-account access is out of the scope of this course; we won't do a hands-on for it, because it's quite advanced and is discussed in the SysOps and Professional examinations. But I'll give an example so that you understand why we use IAM roles for cross-account access. What do we mean by cross-account access? When you start working for large enterprises, organizations with hundreds or thousands of employees working on AWS, you will find that they don't have only one root account; they have multiple root accounts. Let's take an example. Imagine there's a company named SpaceX, an esteemed organization which specializes in space exploration. SpaceX says: I will go with two root accounts.
One root account is purely for production: this is where the live applications are maintained and the live servers are running. Then SpaceX keeps a different root account for development and testing, so we call this second root account the development account. The development account is a kind of sandbox, meaning it is used for the development and testing of new applications, new services, and new software releases. So SpaceX, as an organization, has adopted a strategy where one root account is for production, where the live servers run, and the other root account is only for development and testing. Now, under the production account, let me lay this out so there's no confusion: there is an Amazon S3 bucket, and this S3 bucket is used as a code repository. A code repository means this is where my application's code is kept and updated; it contains source bundles or packages, HTML files, CSS files, and so on. If I want to update my application, I need to update this code repository. It's something like GitHub: the place where my software packages are maintained. And
I have an IAM user over here under the development account. This IAM user is my software developer; his name is Steve. I want Steve to get access to that bucket so that he can update the application, update the code, and push new changes to it. Now, the problem is that this is not allowed by default. Even though these two accounts belong to the same organization, SpaceX, they are two different root accounts. From AWS's perspective, they are completely different entities, different accounts, regardless of the fact that they belong to the same organization. So this is not allowed by default: one root account cannot straight away start accessing another root account's resources. In this case, an IAM role needs to be created,
and Steve will use this role through a special action which we call Switch Role. Let me show you where it is: it's available once you're signed in as an IAM user. I'll log in on behalf of an IAM user and paste in the credentials. Once I'm logged in, can you see that at the top right-hand corner, where you would normally log out, there's an option called Switch Role? Using the Switch Role option, if the arrangements have already been made at the back end, this IAM user can log into a different root account and access some of its resources. So using this special option, the IAM user will assume the IAM role and thereby get access to the S3 bucket. This IAM role will have some policies attached, for example AmazonS3FullAccess. The IAM user uses the Switch Role action to take on the role, and hence gets access to the S3 bucket. Fine. So
in order to enable cross-account access, you also need roles. So there are two main reasons why we use roles: number one, to enable communication between two different resources, and number two, to allow a user from a different root account to access your resources. For cross-account access, roles are needed, and the special Switch Role option is used, through which the IAM user can access your resources based on whatever access you have attached to that role. The external user (external because this IAM user is not a user in my own account) can access your resources based on the role you have set up for that person and the policies attached to it. So if I attached the AmazonS3FullAccess policy, the external user gets full access to the S3 buckets.
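Under the hood, what makes the Switch Role arrangement possible is the trust policy on the role in the production account. As a hedged sketch (the account ID and user name are placeholders), it would look something like this, naming the development-account user as the trusted principal:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::<DEV_ACCOUNT_ID>:user/Steve" },
          "Action": "sts:AssumeRole"
        }
      ]
    }

With this trust policy plus an attached permissions policy such as AmazonS3FullAccess, Steve can switch into the role and reach the production bucket.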
Okay, any questions? Now, at the back end, roles generate temporary tokens through something called the Security Token Service, or STS. By default these tokens are valid for one hour. That doesn't mean the role itself is only valid for an hour; the tokens are simply regenerated behind the scenes every hour.
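You can see these temporary credentials yourself with the CLI; here is a hedged example with a placeholder role ARN:

    # ask STS for one hour of temporary credentials for a role
    aws sts assume-role \
        --role-arn arn:aws:iam::<ACCOUNT_ID>:role/cross-account-s3-role \
        --role-session-name demo-session \
        --duration-seconds 3600

The response contains an AccessKeyId, a SecretAccessKey, and a SessionToken with an expiration time, which is exactly the short-lived credential set we've been talking about.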
Now, the next part is a DIY exercise, which means that with minimal support you can follow these steps yourself. I'll walk you through them; they're very easy, and I don't think you'll face any challenge in terms of connectivity or completing the hands-on. You'll also get to understand the CLI side of AWS. So far we've been accessing and communicating with resources through the graphical user interface, this web page, but we'll also see the command line interface version of AWS; using the CLI we can interact with resources as well. Okay. So if you go back to
the document, I'm going to copy and paste its contents one more time, just in case you missed it before. The very first step says: create an IAM role so that EC2 can get full access to S3 buckets. I'll show you the process of creating an IAM role; it's quite easy. I'll go a bit slowly through this step, and you have to follow along with me: as I show and explain, do the steps at the same time, because you don't have a screenshot-based document to follow. I create these DIY exercises so that you are pushed to perform the steps by yourself, without the help of any document, because ultimately you have to be able to execute these steps with little or no help. You need to understand how to navigate to the different services and apply the concepts that matter. All right. Having said that, I'll go back to my browser, and you have to go to
IAM first. We've accessed this page multiple times before. Under IAM, search for Roles: you'll find a separate section for roles under IAM resources, or you can find Roles on the left-hand side. Navigate to the Roles page and click on Roles. One caution: make sure you are logged in as the root user for this part, not an IAM user. Keep an eye on the top right-hand side and make sure it shows your root account ID; if it's showing an IAM user, sign out and log back in as the root user, as you were doing before. So: go to the IAM dashboard, then go to Roles. What's next?
Once you're on the Roles page, you may see some roles that were created automatically; whenever you configure certain resources, some roles get created without you even noticing. But that's not what we need here. What you have to do is create a new role from scratch so that the instance can access the buckets. We are applying the first example I showed you, where the instance should have access to the two buckets. On the Roles dashboard you can read that an IAM role is an identity that you create with specific permissions, whose credentials are valid for short durations. Again, this temporary access doesn't mean the role only works for some time; at the back end it generates tokens which are valid for an hour by default. Now, click on Create role,
and you'll come to a page that says Select trusted entity. Make sure you choose AWS service, which is selected by default, because we are trying to have services, i.e. resources, communicate with each other. Under Use case, make sure you select EC2. So go ahead, click on Create role, and make these two selections: the trusted entity type is AWS service and the use case is EC2. This role is being created for the EC2 service so that instances can get access to S3 buckets. Let me know once this is done. Done? Good.
Once we've done that, we click on Next. Now we have the list of policies. The policy we're looking for is S3 full access: just type in "S3 full", press Enter, and select AmazonS3FullAccess. Please make that selection. What's next? Click on Next again. Then it asks for the role name. You can put in any role name, even your first name; just enter a name you'll remember afterwards. For example, I'll put the role name as amazon-ec2-s3-access-iam-role. Enter any meaningful name. Once you're done
assigning the name, there's a description; you don't have to change it. You will also see a section called Trusted entities. If you look at it, it says Effect: Allow, Action: sts:AssumeRole. STS stands for Security Token Service: at the back end, this role will generate tokens which expire after some time, and the EC2 instances use those tokens to access the services. So the role uses the Security Token Service to generate temporary credentials, or tokens, for my EC2 service. If I scroll down, it shows the policy attached to this role: AmazonS3FullAccess, which we selected in the previous step. You don't have to change anything on this page; the main thing you set is the role name. Don't change the description; the selected trusted entity stays the same, because this role has been created for the EC2 service. So the temporary credentials are generated for the EC2 service, and the EC2 service gets complete access to my S3 service.
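For reference, the trusted-entities document the console is showing is a small JSON policy; for an EC2 role it reads as follows (this is the standard EC2 trust policy):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "ec2.amazonaws.com" },
          "Action": "sts:AssumeRole"
        }
      ]
    }

It simply says: the EC2 service is allowed to assume this role and obtain its temporary credentials.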
Click on Create role and your role gets created. Please go ahead and do that. After you name the role, just scroll down, read through the information for the sake of understanding what it means, and click on Create role. Once the role is created, we'll do the next step, which is launching the instance. This role has to be applied to the instance at launch time; it will not automatically apply to all instances. When you deploy an instance, you have to explicitly apply the role to it. So in the next step we'll deploy the instance, apply the role, log into the instance, and test it. The step is: launch an instance with all the basic settings, and the one extra thing we do is set, under Advanced details, the IAM instance profile, which is where the IAM role goes. Stay on the root account; this entire process is done in the root account only. Within the root account, I switch to the EC2 dashboard. Once I'm on the EC2 dashboard, I go to Launch instance and click on Launch instance
one more time, and I assign a name, a tag, to the instance. For Application and OS Images, I don't have to change anything: keep the default, Amazon Linux 2023 AMI. The default instance type is t2.micro, so there's no need to change that either. Under Key pair, you can create a new key pair or use any existing one. For example, I click on Create new key pair and type in a simple name like iam-role-demo-kp, then click on Create key pair.
For now, perform these three actions only: initiate launching the instance, go with the default AMI and default instance type, and create a new key pair. I would advise creating a new key pair, because sometimes we tend to delete private keys from our computers. When you initiate the launch, make sure the AMI is Amazon Linux 2023 (the default, so don't change it) and the instance type is t2.micro (also the default). Under Key pair (login), click on Create new key pair, enter a name, and click on Create key pair. You don't have to change the key pair type (leave it as RSA) or the private key file format; don't change any of the options below, simply assign a key pair name and click Create key pair. Please do that and let me know when it's done. Okay, what's next? Once you click on Create key pair, it
will download the key. Then, under Network settings, check all three boxes for the security group: allow SSH, HTTP, and HTTPS from all sources. Don't change the source; just tick all three boxes. Please go ahead and do that. What's next? If you scroll
down, there's a section called Advanced details. You may remember that under Advanced details we used the User data field for a bash script. Right now, expand Advanced details and look for the third option, the IAM instance profile. The IAM instance profile is essentially the role assigned to the instance. Under IAM instance profile, open the drop-down menu and select the role you created, so that this instance gets the permissions to access the buckets based on the role attached to it. Once you assign the role, click straight away on Launch instance. Please go ahead and do that, and let me know once you've applied the role and launched.
Instance launched. Fine, simple. So what new thing have we seen? We've seen how to apply a role to an instance. Let's wait a few seconds and then connect to the instance. Once it's up, the next step in the document is step number three: connect to the instance via SSH. How do you go about that? Select the instance with the check mark. We'll be using the browser-based shell prompt; you can use PuTTY or your own terminal instead if you're more comfortable with those. With the instance selected, there's an option at the top called Connect. Click on Connect, and on the EC2 Instance Connect tab, click Connect again; it will open the browser-based shell prompt connected to this instance. Please go ahead and connect to your instance, and type "done" once you're connected.
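If you'd rather use your own terminal than the browser-based prompt, the equivalent SSH command looks like this. This is a sketch assuming the key pair name from earlier and a placeholder public IP; ec2-user is the default login for Amazon Linux:

    # restrict the key's permissions, then connect
    chmod 400 iam-role-demo-kp.pem
    ssh -i iam-role-demo-kp.pem ec2-user@<instance-public-ip>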
All right, what's next? The next thing you're supposed to do is run a command to list the buckets. The Amazon Linux AMI we used to deploy the instance already has the CLI package installed; CLI stands for command line interface. So you can run commands from this instance to list your buckets. These commands start with aws, then a space, then the name of the service, which is s3, then a space, then the action we want to perform: ls, which stands for listing. So: aws s3 ls. This will show a list of the buckets in my account. Press Enter and you'll see the list of buckets. So once you're logged into your instance's shell prompt via SSH, type aws s3 ls and press Enter, and you'll see a list of buckets in front of you. If you don't see any buckets, it probably means you have removed the buckets from your account. Please go ahead and do that.
It will show you the list of buckets you have created. The next thing is to run a command to create a bucket. For this you enter: aws s3 mb, where mb stands for make bucket. The previous command was aws s3 ls, where ls lists the buckets; mb makes a bucket. So type aws s3 mb, a space, and then the bucket name; bucket names on the CLI are written as s3:// followed by the name, that is, s3, a colon, and two forward slashes. Type that portion of the command now: aws s3 mb s3://. Then you have to think of a unique bucket name; remember, we discussed that no two buckets anywhere can have the same name. For example, type your first name followed by something like -iam-role-s3-bucket, as in rohan-iam-role-s3-bucket. That will be unique and won't clash with my bucket name, because your first name is different from mine. Press Enter, and it says make_bucket: the bucket is created. Please go ahead and do that. You have just executed the command to create a bucket from the instance's shell prompt using the CLI.
Are we done with this? If you're not sure of the commands, you can use the built-in help. For example, if you want to see the list of commands for S3, just type aws s3 help. There I can see all the commands for S3: cp for copy, mv for move, rm for remove, sync, mb for make bucket, rb for remove bucket, ls for listing, and for every single one there's documentation: synopsis, options, available commands, and a complete description. For example, if I run aws s3 mb help, I can see exactly what mb means: mb creates an S3 bucket, here's the synopsis, the arguments I need to pass, the format of the command, and the example usage they've provided. This is where you can see all the documentation. All right, now what's the next thing? Now
you have just created the bucket. To list the buckets again, just type aws s3 ls, and you'll be able to find the bucket you just created. I have a lot of buckets, more than 30 right now, so I'll pipe the output through a grep command, and there it is: the bucket I have just created.
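To recap, the whole CLI exchange on the instance looks roughly like this (bucket name hypothetical):

    aws s3 ls                                   # list all buckets in the account
    aws s3 mb s3://yourname-iam-role-s3-bucket  # make a new, globally unique bucket
    aws s3 ls | grep yourname                   # filter a long listing to your bucket
    aws s3 mb help                              # built-in documentation for any subcommand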
Now let's reverse the entire thing. The next step is to remove the role from the instance. I've listed the steps you have to follow. How do you remove the role? I'll show you how.
Go back to the EC2 instances dashboard and look for the instance you launched; select the instance you're working on, the one we already connected to. I'll go a little slowly here. Once it's selected, go to Actions at the top right, then under Actions go to Security, and click on Modify IAM role. So the path is: select the instance, then Actions, Security, Modify IAM role. Once you click on Modify IAM role, it shows you the currently attached role. Choose "No IAM role" and click on Update IAM role. It will then ask you to confirm the detachment: the Detach button is initially disabled, and it says "To confirm detachment, enter detach in the field". Type in detach; once you've typed it, the Detach button becomes enabled, and you click on Detach. Let me show that one more time: select the instance, go to Actions, Security, Modify IAM role, choose No IAM role, click Update, type detach in the box, and click Detach. Okay, are we all done with this? Now, here's a challenge:
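The same detachment can be done from the CLI; here's a hedged sketch with placeholder IDs:

    # find the association between the instance and its instance profile
    aws ec2 describe-iam-instance-profile-associations \
        --filters Name=instance-id,Values=i-0123456789abcdef0

    # detach using the association ID returned above
    aws ec2 disassociate-iam-instance-profile \
        --association-id iip-assoc-0123456789abcdef0

    # afterwards, 'aws s3 ls' on the instance should fail with
    # "Unable to locate credentials"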
go back to the shell prompt and try listing the buckets. If you're still connected to the browser-based shell prompt, that's fine; otherwise reconnect to the instance. Try listing the buckets and let me know what you get in return. The point is that when you remove the role from the instance, the instance loses the credentials, the authority, the access it had to the buckets. When the instance had the role attached, it had all those privileges: from the instance's shell prompt you were able to list buckets and create buckets. But once you remove the role, the instance has lost the access; it has lost the privilege. The moral of the story, the crux of the entire conversation, is that the role is what gives the instance the privilege to access the buckets; once you remove that role, the instance loses the access. Is this clear to you? Fine. If you want to test it further (this is optional, I'm not pressing you to do it), go to the same option and attach the role again: select the instance, Actions, Security, Modify IAM role, choose the same IAM role, and click on Update. Now if I list the buckets, it's working again. Once you attach the role, you get the same access back. Yes, this option sits under Security. Fine, that's
it. There's only one thing left out: multifactor authentication. What do we mean by multifactor authentication, and why do we use it as one of the methods to authenticate? You must have noticed that nowadays we're moving towards two-factor authentication. Whenever you make a banking transaction or sign in to one of your accounts, you're presented with an OTP, a one-time password. That one-time password ensures that to sign in to an application or complete a banking transaction, you also have to punch in the OTP. Why is two-factor authentication becoming so popular, so ubiquitous? Because it enhances security. With the sophisticated tools around these days, passwords can be very easily compromised; usernames and passwords can be stolen. But an OTP cannot be guessed beforehand, because any OTP that's generated is valid for only a few minutes, and if you sign in to the same application again, say after 24 hours, you have to use a new OTP. So the one-time password, on top of the ID and password you enter for online transactions or account sign-ins, ensures that you have double the security. Now,
this OTP-style authentication has been used in businesses for more than a decade. You may have seen something called a Gemalto key fob, or an RSA SecurID key: a small token that some businesses still use, even though it is becoming obsolete with the advent of more advanced technologies. These are keys or cards that people carry with them, especially people working in IT and software departments. The cards and keys generate a new PIN every 30 seconds, and we used those PINs so that every time we signed in to our business applications, we used a fresh PIN; it's a kind of OTP. If you were working for banking clients or on security-sensitive applications, these keys were used. Technically it's a Gemalto key fob, though for convenience people often call it an RSA key: a SecurID token which generates a PIN every 30 seconds so that you can sign in securely to your application. Now, AWS gives you the same type of capability. You can buy these hardware tokens and link them with your IAM accounts or root accounts; I think the cost of one unit is 15 to 20 US dollars. So
one option is to buy these keys and link them with your accounts. What does that mean? Whenever you want to log in to any of your accounts, whether it's the root account or an IAM user account, you sync these keys with your sign-ins: every time you sign in with your user ID and password, you also have to enter the PIN generated by the key. Amazon has also come up with another option for you. Apart from using these hardware keys, which remain a viable option, you can use a smartphone or any mobile device (your iPad, your tablet, your phone), install an authenticator app on it, and the app generates these PINs every 30 seconds. That concept is called multifactor authentication, or MFA. Let me show you a simple setup. I go to IAM right now, and then to the users list, where I have three users, including Chris. Enabling MFA for a user is a one-time procedure.
There are authenticator apps like Microsoft Authenticator, Duo, and Google Authenticator; a lot of authenticator apps are available for free on the App Store or the Play Store. You don't have to buy anything: just download them, they're free software. There are also some paid options, but I think the free ones are the best choice, because Google Authenticator and Microsoft Authenticator are the most widely used. So let me show you. Let's imagine this iPad I have with me belongs to the user named Chris. The key point is that the device has to be in the physical custody of the user every time: if Chris wants to sign in to his AWS account, he must have the device with him, and the authenticator app installed on it will generate the PIN with which he signs in. So let's imagine this iPad belongs to Chris, and Chris says: "Please make this device, this iPad, my MFA device." To do that, I perform a one-time activation. I open Chris's user details, where I can see Permissions, Groups, Tags, and Security credentials; I go to Security credentials.
Under Console sign-in there's a Multi-factor authentication (MFA) section. You don't have to do this hands-on right now. It says you can assign an MFA device for any user, and you can link up to a maximum of eight MFA devices. I click on Assign MFA device. Now it asks: are you using an authenticator app, a hardware token, or a security key? The tokens are the hardware devices you can buy, like the RSA SecurID keys I mentioned. I'm not using any of those; instead I'm using an authenticator app installed on the user's iPad. I give it a device name, for example chris-ipad, and I select the MFA device type as Authenticator app; this is called a virtual MFA device, virtual multifactor authentication. So once I've entered a device name and chosen Authenticator app as my method, I click on Next.
Okay, it says to install a compatible app such as Google Authenticator or Duo Mobile, with a link to the list of compatible applications; it's showing me the step-by-step process. The very first step is to install an authenticator app on the user's device. So we go back to the user's device and open the App Store (of course, if you're using Android, it would be the Play Store). In the App Store I can find a bunch of options, both free and paid, but the free ones are perfectly acceptable; they have all the features, so you don't have to purchase any authenticator software. If I search for "authenticator", you can see Microsoft Authenticator, which is free of cost and widely used, and the one I'm using, Google Authenticator, which is also free and widely used. You can use either of those two; there are other authenticator apps as well that I've seen people use, but from a business point of view Microsoft Authenticator and Google Authenticator are used the most. Personally I'm using Google Authenticator. So imagine this device belongs to the user: I've installed the authenticator app on it, and it says Add code. I tap Add code and choose the first option, Scan a QR code. Which QR code is it asking me to scan? Back in the AWS console, I go to the second option and click on Show QR code. Using the device's camera, I scan this QR code, and the app automatically detects that this account belongs to the user Chris: it shows "Amazon Web Services"
followed by chris-ipad@<root account ID>. The app now displays a six-digit PIN which changes every 30 seconds; you can see a timer ticking on the right-hand side, and after the timer expires, a new PIN is generated. I need two consecutive PINs to complete the activation. So I make a note of the first PIN, wait for the next one to show up, note that as well, enter both codes, and click on Add MFA. And that's done: under Multi-factor authentication you can now see the virtual MFA device has been added. From now on, the user Chris can use this device as his MFA device. What's the process the next time he signs in? Let me show you how he will sign in.
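Incidentally, the activation we just did in the console can also be scripted; here's a hedged sketch with placeholder names and codes (the two authentication codes are consecutive PINs from the app):

    # create the virtual MFA device and save its QR code locally
    aws iam create-virtual-mfa-device --virtual-mfa-device-name chris-ipad \
        --outfile qrcode.png --bootstrap-method QRCodePNG

    # enable it for the user with two consecutive codes from the authenticator app
    aws iam enable-mfa-device --user-name Chris \
        --serial-number arn:aws:iam::<ACCOUNT_ID>:mfa/chris-ipad \
        --authentication-code1 123456 --authentication-code2 654321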
So I open a different browser and log in on behalf of this user to show exactly what he needs to do. The username and password are still filled in, so the user clicks on Sign in; immediately it asks for the MFA code. The user opens Google Authenticator, checks the current PIN, enters it, clicks Submit, and you can see he's able to sign in securely. If the PIN is incorrect, he'll be blocked, or rather asked to enter a new PIN. So requiring these MFA codes ensures that every time the person signs in, they use a new PIN. Even if the ID and password have been compromised, the device is still with the user, and the PINs are randomly generated, so it's very hard, practically impossible, for anyone to guess them. It's like the OTPs generated on your phone: valid for only a few seconds or minutes, after which they expire and you have to use a new PIN. So this increases
the level of security of your accounts. Why do you have to increase your account security? Because it's not just an account you're signing in to: it holds your complete infrastructure. It is the door, the window, to everything you run. If someone gets unauthorized access to your account, it can be catastrophic: the person can stop your servers, steal data from your databases, or completely dismantle your infrastructure. That's why it's important to understand the relevance of IAM; it falls under the security category of services. Enabling multifactor authentication on your accounts ensures that even if your passwords get compromised or stolen, you're not leaving any room for unauthorized users to access your accounts. That's the whole process. I showed you how to do it on an IAM account; based on best practices, the root account owner should also enable multifactor authentication. If I go to the dashboard, you'll see it says "Add MFA for root user"; there's an option at the top. I click on Add MFA, and as the root user I can enable MFA for my own account by clicking Assign MFA and following the same process. Per the best practices laid down by Amazon Web Services, the root account owner must have MFA enabled, because the root account has the highest access; if it is compromised, it can lead to disaster. And if you also enable this feature for IAM users, that's an added bonus. Having said that, this is what MFA is. As homework, try this on your own: use your smart devices, your Android phone, iPhone, iPad, or tablet, whatever you have, and try it by yourself; it's a very simple process. Okay. All right. Now, let's discuss the next topic.
Let's get started with S3 storage classes now. Storage classes are purpose-built to provide the lowest storage cost for different access patterns and virtually any use case, including those with demanding performance needs, data residency requirements, unknown or changing access patterns, or archival storage. A storage class is selected based on your data access, resiliency, and cost requirements. The point is that the cost of storing your data changes based on which storage class you choose. Let's start with the first storage class, called Standard. Standard is the default storage class: when you store data without specifying anything else, you get Standard. Standard gives you the highest durability, highest availability, and fastest access to data; it gives you the highest performance, but it is the costliest of the entire fleet of storage classes. Standard offers high-durability, high-availability, high-performance object storage for frequently accessed data, delivering low latency and high throughput appropriate for a wide variety of use cases. Behind the scenes, Standard stores your data across multiple Availability Zones: whether you create the bucket in North Virginia or, say, Singapore, the data is striped across the Availability Zones of that region at the back end. Because your data is stored redundantly across Availability Zones, it is highly durable and highly available. It is used for cases like cloud applications, dynamic websites, content distribution, mobile applications, gaming applications, and big data analytics. So if your data is very critical, you want it to be highly redundant with multiple copies across Availability Zones, and you want very fast access to it, then you use Standard as your preferred storage class.
What's the next next option we have? So okay so what are the key features? It has low red performance. It gives you
high durability across all the availability zones. Now it's it is 99.9999.
It is 11 times of the durability. So basically it stripes the data and put the data in all the availability zones
in the back end which means that if one of the goes down your data will still be available to you. It gives you high
variability. It encrypts your data. Uh life cycle management we discuss afterwards. We can also put the uh the
object into a life cycle and get it deleted after some time. Key thing to note is what are the key points to note?
Standard is this is a key is a default storage which gives you the highest amount of durability because your data
is stored across all the at the back end all the availability zones. You get a very faster deliver or faster access to
your data. So this gives you but this is cost and these are the use cases. Fine. So if you don't have any budget issues,
you want the fastest performance or the best performance in terms of making your data more redundant because the data
will be stored in in all the aces at the back end and you want very fast access to data then you go with the standard.
Okay. Now, one more thing: Standard is used for cases where the data is frequently accessed. Frequently accessed means you access the data often, say once a day or once every two days. The question is how frequently or infrequently, how often, you access the data. If you access the data very often, you go with Standard. Now, if you don't access the data very often and you want to bring down the cost of storage, then you use Standard-IA, where IA stands for Infrequent Access. It's the Standard class for infrequently accessed data, meaning any data that you don't access often and whose cost you want to bring down. It's ideal for data that's accessed less frequently but requires rapid access whenever it is needed. It offers high durability, high throughput, and low latency, with a low per-GB storage price but a per-GB retrieval charge. See, in terms of storage cost it's cheaper; however, if you retrieve the data, if you download it from the bucket, there's a per-GB retrieval charge. That's why it's called infrequent access storage: it's used when you want to bring down the storage cost and you don't access the data very frequently, but whenever you do access it, there's a per-GB retrieval fee. Not accessing the data frequently means something like accessing it once in 30 days; if that's your frequency, you go with Standard-IA. It's ideal for long-term storage, backups, and as a data store for disaster recovery files, because disaster recovery files are not accessed very often. Key features, as you can see: low latency and the high-throughput performance of Standard, and your data is highly durable, placed across all the availability zones, with high availability. So the key features are the same; it's only the storage cost that's lower.
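To make this concrete, here's a small boto3 sketch (the bucket and key names are made up) showing that the storage class is just a parameter you choose per object at upload time:

```python
import boto3

s3 = boto3.client("s3")

# Default upload: no StorageClass given, so the object lands in STANDARD.
s3.put_object(Bucket="rohan-intel-s3-bucket-source",
              Key="reports/daily.csv",
              Body=b"frequently accessed data")

# Infrequently accessed data: cheaper per-GB storage,
# but retrievals incur a per-GB fee.
s3.put_object(Bucket="rohan-intel-s3-bucket-source",
              Key="backups/disaster-recovery.tar",
              Body=b"rarely accessed data",
              StorageClass="STANDARD_IA")
```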
Okay, now let's imagine you're confused between Standard and Standard-IA. See, Standard-IA is used for data you don't access very often, whereas Standard is for data you access quite often: once a day, once every two or three days. If the access frequency of the data is high, you put it in Standard; if the access frequency is low, you go with Standard-IA. But maybe you're unsure which of the two to go with, because sometimes you have no insight into which data you'll be using frequently and which you won't: you have unknown or changing retrieval patterns, access patterns. In that case you go with the third option, which is called Intelligent-Tiering. It automatically moves your objects based on how often you access them: it puts your data into different categories and automatically brings down the storage cost. Let's discuss it. It automatically moves objects between optimal access tiers, it optimizes storage cost based on access patterns, and it's used for data lakes and applications with changing or unknown access patterns. Right? So here's the cycle an object goes through.
For example, on day zero you put the data in, and it is stored in the Frequent Access tier. What's the Frequent Access tier? It means the data is treated as frequently accessed, which is the costliest tier. Now, if you don't touch that data for 30 days, you don't open it, download it, or retrieve it, it just lies in the bucket for 30 days straight, then it's moved into a cheaper storage category called the Infrequent Access tier, which is about 40% cheaper in terms of storage cost. Now let's imagine that for 90 days straight, for three months, there's no access to the data; it's lying in the bucket unused and unaccessed. In that case it's moved to the Archive Instant Access tier, saving up to 68% of your storage cost. And if for 180 days, six months continuously, there's still no access to the data, it's moved to the Deep Archive Access tier, saving you up to 95% on your storage cost. So based on the frequency of access, if you don't touch the data for long stretches, it goes through this cycle and gradually the cost of the storage comes down. Now, there's a very common question people ask me: what if I open the data in between? Suppose the data is lying in the archive access tier, it's still sitting there, and I open it, I retrieve it. What happens? If you retrieve or access the data anywhere in between, while it's going through this cycle, whichever access tier it's sitting in, the data is put back into the Frequent Access tier and the cycle restarts from the beginning. This is called Intelligent-Tiering. There's a small monitoring and automation fee imposed on you, a very small fee, but eventually it will bring down your storage cost.
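As a rough sketch of how you might wire this up with boto3: storing an object in Intelligent-Tiering is just another StorageClass value, and the optional archive tiers are switched on with a bucket-level configuration. The bucket name and configuration ID below are assumptions for illustration.

```python
import boto3

s3 = boto3.client("s3")
bucket = "rohan-intel-s3-bucket-source"  # hypothetical bucket

# Store an object under Intelligent-Tiering; S3 then moves it
# between access tiers automatically based on usage.
s3.put_object(Bucket=bucket, Key="logs/app.log",
              Body=b"access pattern unknown",
              StorageClass="INTELLIGENT_TIERING")

# Opt in to the deeper archive tiers (not enabled by default):
# 90 days untouched -> Archive Access, 180 days -> Deep Archive Access.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket=bucket,
    Id="archive-after-inactivity",
    IntelligentTieringConfiguration={
        "Id": "archive-after-inactivity",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```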
One thing you need to understand: even though S3 is cheaper storage compared to on-premises, it's not that cheap. From a business point of view, you have to be very cautious in how you use S3, because you never know when your storage consumption spikes in a given month and the bill runs into the thousands. So you have to make sure your storage cost stays at a level your company can afford, because if you don't have control over how you store and manage your data, the storage cost can grow exponentially. S3 is not dirt cheap; it's not disposable. Of course, compared to on-premises infrastructure deployments it's cheaper, but you can't say it costs pennies, because for the amount of data you consume, your storage cost can go very high. Strategies like these are what eventually bring down the cost of our storage. You'll see a lot of questions in interviews and in the certification exams about these storage classes: you'll be given scenarios and, based on each scenario, you'll have to judge which storage class is best suited. Okay.
Let's go to the next step. Okay, one thing: Intelligent-Tiering doesn't auto-tier any object smaller than 128 kilobytes. If you put in any data smaller than 128 kilobytes, it will stay in the Frequent Access tier; any object under 128 kilobytes in size will never go through the cycle, and you'll be billed at the Frequent Access tier rate. Okay? Then we have something called One Zone-IA. What is One Zone-IA?
One Zone-IA means you store the data in just one availability zone. It's a cheaper version of Standard-IA. So if you have data that's not accessed very frequently but you're tight on budget, you can store it in One Zone-IA, where the data is kept in a single availability zone; compared to Standard-IA it's 20% cheaper. It's ideal for customers looking for a lower-cost option for infrequently accessed data. Behind the scenes, Amazon Web Services will pick only one availability zone and store your data there. So instead of generating and saving copies across all the availability zones, they cut it down to just one AZ; they save on storage cost and pass that benefit on to you. So if you want a storage option that's quite cheap and you don't access the data very frequently, you just go with One Zone-IA: it's a lower-cost storage option for infrequent access data because the data is stored in a single AZ. There is, however, an issue: if that AZ goes down, your data is lost. So it doesn't guarantee you a high amount of data durability or redundancy. Fine, what are the use cases? It's used for storing secondary backup copies of on-premises data, or easily re-creatable data, or maybe replicated data. So if you're using it for a secondary backup, you already have multiple copies of the data, and if you were to lose that copy, that's okay; you're not going for a very high amount of data durability, availability, or redundancy. You want to bring down the cost of storage, and you're not looking for very high data redundancy; in that case, you make use of One Zone-IA.
Fine, let's go to the next level: archive. What do you mean by archive? The three classes I'm going to discuss now come under archival storage. Archival storage is the option where you put data that you don't access for months or even years. So if you have a long-term data retention plan spanning months or years to come, you go for the archival options. There are three archive storage classes: S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. Let's start with Glacier Instant Retrieval. Whenever you think of archiving data for long-term retention, think of Glacier. S3 Glacier Instant Retrieval is used when your data needs to be archived but you want immediate access to it whenever it's required. Okay, so it's designed for rarely accessed, long-term data that requires immediate retrieval, and it saves up to 68% on storage cost compared to the Standard-IA storage class. It's ideal for long-term digital preservation of data that's accessed about once per quarter: if you access the data once in three months, you can store it in Glacier Instant Retrieval, which is very cheap.
Now, again: it's designed for rarely accessed, long-term data that needs immediate retrieval when the need arises, for example medical images, media assets, or genomics data. Let's take the example of medical images. Imagine a hospital. The hospital has a complete database where they store their patients' medical history: CT scans, MRI scans, blood tests. They have to keep the patient history because, suppose a patient comes back to them after a few months and needs immediate assistance, maybe it's an emergency, then the previous history of that patient has to be retrieved very fast. The patient could come back after one year, two years, or a few months, you never know; and if the patient comes back with certain issues and the doctors have to pull up the past history from the hospital's database, they need immediate access. The data is archived: the CT scans, MRI scans, brain scans, blood tests, the complete past medical history of the patient. Medical images are a good example because CT scans, MRI scans, ultrasound scans, all those images take a lot of space and have to be archived somewhere, yet if a comparison needs to be made a few months later, or in case of an emergency, those images need to be retrieved very fast. The same goes for news media assets: media companies have to store their videos and photographs, because when there's an urgent need they have to retrieve those assets, maybe to publish new articles. Genomics and research data too: research goes on for years and years, and if scientists need data or research papers that were archived maybe a year back, they need immediate access so the research carries on without delay. E-papers as well. So basically, if you have any data that needs to be archived but must be immediately accessible in case of emergency or urgency, you store it as Glacier Instant Retrieval. Okay, so this is what we have discussed. Then we have
Glacier Flexible Retrieval. This is a low-storage-cost option for backing up your data where you get flexible retrieval options: expedited, bulk, or standard. So basically it's low-cost archive storage with a low retrieval fee. It's suitable for archived data that doesn't require immediate access, where you need the flexibility to retrieve large data sets at low or no retrieval cost. There are three retrieval speeds in Flexible Retrieval: expedited, 1 to 5 minutes; standard, 3 to 5 hours; and bulk, up to 12 hours. So you can pick from three retrieval speeds if you have flexibility around your data. With expedited you can get the data back within a 1-to-5-minute time frame; with standard retrieval it takes 3 to 5 hours; and with bulk retrieval it can take 12 hours to download the data back to your computer. Flexible Retrieval is the cheaper option compared to Instant Retrieval, and it gives you these three retrieval speeds. Okay, use cases: backup, disaster recovery, offsite data storage needs, and data that may occasionally need to be retrieved in minutes, without having to worry about retrieval costs.
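Note that archived objects in the Glacier classes aren't read directly; you first issue a restore request and pick one of those retrieval tiers. A hedged boto3 sketch (bucket and key are made up):

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to restore an archived object for 7 days using the cheap
# Bulk tier (could also be "Standard" or "Expedited").
s3.restore_object(
    Bucket="rohan-intel-s3-bucket-source",
    Key="archive/old-scan.dcm",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)

# head_object's "Restore" field tells you when the temporary
# copy is ready to download with a normal get_object call.
print(s3.head_object(Bucket="rohan-intel-s3-bucket-source",
                     Key="archive/old-scan.dcm").get("Restore"))
```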
Okay, the last option we have in terms of storage classes is Deep Archive. What is Deep Archive? Glacier Deep Archive is used when your data needs to be stored for many years to come. Organizations like public sector bodies, health and medical insurers, and banking and financial services use Glacier Deep Archive. For example, say I have an account with Citibank and I contact them saying I want my credit card statement or transaction statement for, let's suppose, the year 2019. If I go to the Citibank website, I can only retrieve my statements from the past one year. But I want to go back to 2019 and get the transactions from there. So Citibank will open a ticket with their IT services, IT services will go to their archive storage, pull out the data for the 2019-2020 financial year, and send me the transaction history. Right? So financial services, banking services, and public health sectors have an obligation to store their customers' data for years to come. They can't say, "No sir, we can't provide you with that," because it's their professional duty to make sure our data is preserved with them, and we can always ask them for data we have a right to access whenever we want. So Glacier Deep Archive is for long-term digital preservation of data that's accessed about once per year. It eliminates on-premises tape libraries, which I would say are kind of obsolete now. It has two retrieval speeds: standard, 12 hours, and bulk, 48 hours. So to download the data, you have to wait at least 12 hours, half a day. It's designed as the lowest-cost storage for long-term data retention, when you want to retain data for the next 7 to 10 years. So, for example, financial services, healthcare, and public sectors use Glacier Deep Archive; I gave the example of Citibank. Maybe you have medical insurance or a long-term policy with LIC or Star Health; they have to keep your data for years to come, right? In those cases, when they have to store that data for the next seven or ten years at least, they go with Glacier Deep Archive. Having said that, these are the different storage classes I've discussed with you. Okay.
Now, I was saying that you can use S3 as a version control system, right? You want to make sure you can store multiple versions of the same file. Okay, let me show you that. Let me first close this root account window; it's creating confusion for me. Now, one thing you need to understand: if I go to any of my buckets and upload some data to it, say I add a YAML file, any file really, you'll see it gets a timestamp; it says June 15, 2023, 8:51. That's its timestamp. Now suppose I upload the same file again. By default, what happens? It overwrites the file. If I show you the new timestamp and you compare it with the previous one, you'll see there's a difference: this is the new timestamp, and that was the previous one. Why did I paste the timestamps for you? To emphasize one important fact: by default, whenever you upload the same data again, with the object name unchanged, the previous file gets overwritten. Whether I make changes to the file or not doesn't matter; if the file name is the same and I upload it again and again, the previous file gets overwritten. Now, this may not be what I want. I want to be able to use a concept called versioning, where every new upload generates a new version. Okay, let's discuss that. So what is versioning? Let me
discuss this with you. Versioning helps you keep multiple versions of the same object in one bucket. It has to be enabled explicitly, and once it is, existing objects are not overwritten. The purpose of versioning is that you can store multiple versions, multiple variants, of the same object; the existing objects are not overwritten, which means a new upload doesn't overwrite the previous data but keeps multiple versions of it instead. Okay. So let me show you. I'll first go ahead and delete the data I uploaded. Right. Okay. So, how do I enable versioning? You will see that at the top
there's a tab called Properties. If I go to Properties, you'll see a Bucket Versioning option showing up. I click on Edit, choose Enable, and click Save Changes. You will see that bucket versioning is enabled now. Fine. Now I'll just go ahead and upload my file again, and watch what happens. I'll upload this file four or five times; I'm deliberately trying to overwrite it. See what happens. So I'm uploading this file multiple times. Yeah, I think this is the fourth time I'm uploading the data; let me do it one more time. So I've uploaded the same data with the same name multiple times. Fine. Now if I go to the object page, at first I can't see any change. But as soon as I click on Show Versions, I can see all the versions of this file that were uploaded, whether I made changes or not. I could also make a change: for example, I take the same file, copy it to the desktop, open it, and change certain aspects of the code. Okay, I've changed a few things in this file, and I go ahead and upload the same file again. So I made changes to the file and uploaded it once more. Fine. Now if I click on Show Versions, or I click on the object and go to its Versions tab, you'll see that the file has changed, and these are all the versions of the file. So for every new upload I made, with or without changes, as long as the file name was the same, the previous file was not overwritten. This is what I, as a developer, want: to be able to store multiple versions of the same file. Now, these are the version IDs. They are auto-generated for every version, and every version ID has a corresponding timestamp. Can you see that? Based on the timestamp, I can decide to restore the file to an older version, to a previous state. I can click on any one of these version IDs and get the data; for example, click on this version ID and download the data from that version.
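Here's a rough boto3 sketch of that whole loop: enable versioning, upload the same key twice, list the versions, and fetch an older one. Bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "rohan-intel-s3-bucket-source", "config.yml"

# Turn on versioning (it's off by default and must be enabled explicitly).
s3.put_bucket_versioning(Bucket=bucket,
                         VersioningConfiguration={"Status": "Enabled"})

# Upload the same key twice; each upload now gets its own version ID.
s3.put_object(Bucket=bucket, Key=key, Body=b"version one")
s3.put_object(Bucket=bucket, Key=key, Body=b"version two")

# List every version of the key, newest first.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)["Versions"]
for v in versions:
    print(v["VersionId"], v["LastModified"], v["IsLatest"])

# Download a specific (older) version by its ID.
old = next(v for v in versions if not v["IsLatest"])
body = s3.get_object(Bucket=bucket, Key=key,
                     VersionId=old["VersionId"])["Body"].read()
print(body)  # b"version one"
```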
Now, the question that comes up is: why would a company, or a developer, store multiple versions of the same file? Think from a developer's point of view. Imagine you're making changes to a file and pushing those changes to production. After changing certain aspects of the file, your software, your code, your artifacts, you start facing production issues. You released new software by changing the back-end code, and after pushing those changes, you start facing serious problems in production. What's the common approach? You roll back: you go back to the previous version, push that previous version in the meantime to keep things running, and then start hunting for the bugs in your new code. Right? It's a very common thing. Sometimes, even after a lot of testing, production issues start happening after you change the code. In that case you have to quickly revert to the previous version of the file and push it, while in the meantime you find the issues in your new file, maybe you made some incorrect entries, rectify them, put them through testing again, and then push. So in those events, it's quite important that you can retrieve the previous versions of a file. But there's one downside to
versioning: it will eventually bump up your storage cost, because every version takes its own space. Okay, so imagine: right now I'm talking about only 175 or 178 bytes of data. But imagine your business generates 5 GB of data every day, and on average two versions are kept per GB generated. So on a daily basis that 5 GB becomes 10 GB, double the amount; multiply by 30 days in a month and it comes out to 300 GB. Right? And I'm taking a very minimal use case; businesses generally generate a lot of data daily, 10 GB, 20 GB. So if multiple versions are generated per GB on a daily basis, that will automatically increase your storage cost. So versioning, even though it's a wonderful feature and used by developers a lot, can drive the storage cost up to a very high level. Okay.
Okay. Now, once you open this document, you'll see it's based on the hands-on, on the idea of how you can host a static website on an Amazon S3 bucket. When I discussed the basic functionality and use cases of S3 with you, I pointed out that S3 is also used for static website hosting, right? It means that without running any server-side script, you can deploy a static website on an Amazon S3 bucket. In this case we'll be creating a new bucket from the beginning, because when you make a new bucket there are a few options you have to select or deselect. I'll explain those options, and after that we'll enable website hosting, upload the document, make it public, and test our website. Okay, let's get started with the first step, which is creating a bucket. Now, you may already have buckets created, but I want you to create a new bucket from scratch, because this is where you have to enable some options we haven't gone through before, which are new for us. I'll also discuss some of the permissions you need to enforce; we haven't discussed permissions yet, but now I'll help you understand how the permissions on S3 buckets really work and how you apply them. Having said that, I'll just go back to the S3 dashboard. You have to go to the S3 dashboard, and
you have to create a bucket. I'll give you the standard naming convention to use, because later on, once we discuss the replication process, we'll need two buckets. Okay. So when you click on Create Bucket, you put in a name; let me give you the format. For example, I'll type in "rohan", because the bucket name has to be unique, so I'm putting my own name in to get that uniqueness, then "intel-s3-bucket". Now, normally you might add the region you're using, in short form, at the end, for example "vir" for Virginia. But actually, don't put "vir"; put "source" instead. I'll tell you exactly why once we start discussing the concept of replication: one bucket is the source bucket, the other is the destination bucket. For now, use this format: first name, last name, hyphen intel, hyphen s3-bucket, and then "source" at the end. The lab we'll be doing next uses these names; once you start that lab, you'll understand why I'm asking you to put "source" at the end. Using this format, enter your bucket name and let me know when you're done: your first name and last name combined, hyphen intel, hyphen s3-bucket, hyphen source. Make sure you put "source" at the end; that's important. If you don't want to use your first and last name combined, just put your last name or only your first name, but the bucket name must end with "source", because the lab we'll be performing later needs to distinguish between two buckets: the source bucket and the destination bucket. We are doing this hands-on on hosting a static website using an Amazon S3 bucket, and we are in the
process of performing the first seven steps, steps 1 to 7. The document has been put in the chat, and it's also in the S3 folder of the Google Drive; I'm going to copy and paste the link. The name of the document is "Hosting a static website using an Amazon S3 bucket". You have to follow this document, and based on it, enter the bucket name in the format I gave you: first name, last name, intel-s3-bucket, hyphen source. Put "source" at the end; that's important. What's the next thing? The next thing is that you don't have to change the region; stay in whatever region you're in right now. It doesn't matter which region you choose; the region could be anything, so there's no need to fiddle with it. Let's come to the next thing, which is object ownership. Now, this setting does need
to be changed. See, whenever you apply permissions in the S3 service, they are applied at two levels: number one, at the bucket level, and number two, at the object level. We've already gone through the IAM policy demonstration, the customized policy, where we applied permissions at the bucket level and at the object level. This object ownership setting relates to applying permissions at the object level. By default, the objects we upload to an S3 bucket are private in nature. Private means only you, as the bucket owner, can access them; the object is not visible on the internet, it stays private. Now, we want the capability, the privilege, to make an object public. Suppose I have 10 private objects, and out of those 10 I want to make five of them public. Public means I can create shareable links, and by clicking those links, or through those links included in my code, users can access those five objects. For example, this Google Drive link I gave you: what is it? A shareable link. It's not private. Right? Similarly, I can create shareable links for my objects, include them in my application code or share them directly, and make my objects public. Now, ACL stands for access control list. You have to enable ACLs and gain ownership of the objects, so that, being the owner, you can later change the permissions of an object from private to public or from public back to private. Basically, you gain the authority and the privilege to change object permissions. How do you gain that privilege? Go to "ACLs enabled", click it, and make sure you've chosen "Bucket owner preferred", which means that as the bucket owner, the bucket residing in your account, you are taking full ownership of the objects and will afterwards have the privilege to change their permissions, switching their default accessibility from private to public or vice versa. So: enable the access control list and choose "Bucket owner preferred" under object ownership, so that you claim ownership of the objects and gain the right to change their permissions at the object level once the bucket is created.
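As an aside, the other common way to grant object access is at the bucket level with a bucket policy rather than per-object ACLs. A hedged boto3 sketch of a public-read policy (the bucket name is a placeholder, and this only takes effect once Block Public Access is turned off, which we do next):

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "rohan-intel-s3-bucket-source"  # hypothetical bucket

# Bucket-level permission: allow anyone to read (GET) any object.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```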
Now let's work on the bucket permissions. By default the bucket is private, which means you cannot make any portion of it public. In this case you have to uncheck the "Block all public access" option and tick "I acknowledge that the current settings might result in this bucket and the objects within becoming public." Why do we want the objects, or the entire bucket, to be public? Because we're going for static website hosting; we cannot host a public website unless we change this setting. So please go ahead and do that. Now, this does not mean your bucket instantly becomes public, because you still have to make specific objects public afterwards; but at the bucket level you are declaring that you intend to perform operations that can make the entire bucket, or some or all of its objects, public. So uncheck "Block all public access" so that you can host a static website, and give your consent to the forewarning that turning off block public access might result in this bucket and the objects within it becoming public. That's okay; AWS just wants to notify you of what you're doing and what the consequence is. It doesn't mean you're exposing your bucket and objects to a security threat; it's a forewarning to ensure that if anything is very confidential to your organization, you don't unintentionally expose it to the public. That's the security warning, that's it. It's nothing we're doing against recommendations or best practice; if you're going for website hosting, these settings need to be in place. Okay. Now, what's the next thing? The next thing is that you have to go
to the bucket and, in addition to what's in the documentation, please enable versioning, because later on we'll be doing a lab on lifecycle management configuration. So please click Enable and turn on versioning, and let me know once that's done; this is not in the document, it's an add-on from my side. So let me reiterate what you were supposed to do. You were supposed to use the standard bucket name format I gave you, making sure "source" is at the end. Okay. So we picked a bucket name; no need to change the region. We enabled the access control list and claimed bucket-owner ownership so that we can later change object permissions. Then we unchecked "Block all public access", the reason being we want to host a static website on this bucket. Fine, that's what we did, and after that I asked you to enable versioning. You don't have to set up tagging for now, and we don't have to discuss encryption here; we'll cover it as a separate topic. You don't have to change any of the advanced settings. Okay. After enabling versioning, go straight ahead and click Create Bucket.
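For reference, here's a rough boto3 equivalent of everything we just clicked through: create the bucket, set object ownership to bucket-owner-preferred (ACLs enabled), unblock public access, and enable versioning. The bucket name and region are assumptions.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "rohan-intel-s3-bucket-source"  # must be globally unique

# 1. Create the bucket (us-east-1 needs no LocationConstraint).
s3.create_bucket(Bucket=bucket)

# 2. Object ownership: ACLs enabled, bucket owner preferred.
s3.put_bucket_ownership_controls(
    Bucket=bucket,
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]},
)

# 3. Uncheck "Block all public access" (all four flags off).
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# 4. Enable versioning for the lifecycle lab later.
s3.put_bucket_versioning(Bucket=bucket,
                         VersioningConfiguration={"Status": "Enabled"})
```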
Let me know once your bucket is created. Now, once the bucket is created, the next thing is to enable a property called static website hosting. Static website hosting is a property that has to be enabled, so let me show you how; these are steps 8 to 14 in the document. Click on the bucket. Once you click on it, you'll see a row of options to navigate along the top. I first said to go to Permissions, the third option from the left, but actually it's not Permissions; we already handled public access when we made the bucket. You have to go to Properties, the second option from the left. Once you're in Properties, scroll straight down to the bottom of the page, and you'll find a section called Static Website Hosting. Fine. Under Static Website Hosting, click Edit and choose Enable. Once you do that, it shows: static website hosting, Enable; hosting type, Host a static website. Please make sure you've made these two selections. So again: go to Properties, not Permissions, that was my mistake; navigate to the bottom-most section; you'll see the static website hosting option; click Edit; enable static website hosting; and the hosting type we're going for is "Host a static website". What is the second option all
about? "Redirect requests for an object", which we're not doing, but I want to explain it. Sometimes, when you're doing static website hosting, you can redirect access. For example, here's my first bucket, and let's suppose I create a second bucket. Redirect requests means that if users try to access my first bucket, that bucket will redirect the request for website access to the second bucket. You might have seen this when you go to a domain, say xyz.com, and you're redirected to a second domain name; similarly, redirecting requests to another bucket or domain means that when a request comes to a specific bucket, it can be forwarded to a second bucket. That's optional behavior you can configure: maybe you want to decommission the first bucket and in the meantime redirect its requests, whatever the reason may be, but it's a feature we can use. So any request for website access arriving at the first bucket can simply be redirected to the second bucket. Right, so that's what redirection means; it's not applicable in our use case, we're just going to host a simple static website. Now let's come to the two main pages.
For any website you configure, the default homepage is index.html, so you have to type "index.html" into the Index Document field. Make sure everything is in lower case, because it is case-sensitive; I've seen people type an uppercase I. So keep it lowercase only, no caps: put index.html in the Index Document option, and let me know once you're done with this. Okay, what's next? The Error Document is optional. It can be configured if you want, for example, a 404 "page not found" page. Sometimes you try to access a page that doesn't exist; for example, if I go to aws.amazon.com and put a forward slash plus my first name, aws.amazon.com/rohan, which doesn't exist, it returns something like "Sorry, this URL doesn't exist or is no longer available; perhaps you're looking for something else." That's the kind of error page you can configure, which we're not doing as of now; we'll skip it. The redirection rules are also optional, and since we're not doing any redirection, we don't have to configure them either. So once you've configured index.html as the index document, that's more than sufficient; click Save Changes. Once you click Save Changes, you're back on the Properties page. Scroll down to the bottom again, and you'll see a domain has been generated: this is called the bucket website endpoint. It isn't functional yet, because we haven't uploaded an index.html page, but just to confirm that hosting has been enabled, scroll down in Properties and confirm that the bucket website endpoint has been generated. This endpoint won't lead anywhere yet, since there's no index.html. Now, the next thing is to configure an index HTML document. I've already added an index.html for you in the Google Drive; if you go to the S3 folder, you'll see the index.html is already prepared. I'll also share this index.html document with you. Done. Okay, what's next?
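The console steps above map to a single API call. A hedged boto3 sketch (the error document line is commented out since we skipped it, and the endpoint URL pattern depends on your region):

```python
import boto3

s3 = boto3.client("s3")
bucket = "rohan-intel-s3-bucket-source"  # hypothetical bucket

# Enable static website hosting with index.html as the index document.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        # "ErrorDocument": {"Key": "error.html"},  # optional 404 page
    },
)

# The website endpoint generally follows this pattern (region-dependent):
print(f"http://{bucket}.s3-website-us-east-1.amazonaws.com")
```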
The next thing is to go back to the document; I'll show you steps 16 to 21. Right now you're still on Properties; next to it you have Objects. Click on Objects, and you have to upload the same index.html file you just downloaded. Click Upload, then Add Files, and add the index.html file you saved on your local computer. Just add the file; don't complete the upload yet, because you have to change its permissions before uploading. So for now, click Upload, add the file, and let me know once that's done. Go to Objects, click Upload, and only add the file; don't finish the upload, I'll tell you when. You have to change its permissions first: once you've added the file, go to Permissions and choose "Grant public read access", and then give your consent, "I understand the risk" of making this public. That's okay; it's a forewarning, a security warning, but it's nothing we're doing against recommendations. So: select "Grant public read access" and check the consent box confirming you're doing this with full intention. Once you've done that, finally click Upload. So basically, you make the object public as part of uploading it; you change its permissions to ensure this index.html file is publicly accessible. Are we done with this? Now, what's next? There's a Close button at the top on the right-hand side; click Close to exit the upload page, and you'll be back on the Objects page.
Now, the next thing is to test your website: steps 22 to 25. Go to Properties again, the option to the right of Objects, and scroll to the bottom where you configured static website hosting. Click on the bucket website endpoint, and your page will show up. This is a public link: if you share it with any other user, he or she will be able to see the same website. Please go ahead and do that. Done. So this completes the hands-on. Don't delete the bucket for now; the next part of the document asks you to delete it, but that's just the cleanup process. Right now, don't remove any of the buckets or their objects. Again, the steps themselves aren't what matters; the concepts are. I've explained the concepts of permissions, how to apply them at the object level and at the bucket level, what static website hosting is, and redirecting requests. Those things are important for the next lab, which is called lifecycle management. Now, what is this lifecycle management?
If you remember, we discussed the concept of versioning, and when I discussed it with you, one thing I pointed out, when we checked the multiple versions (for example, let me find the bucket I created, yeah), was this: the advantage of versioning is that you can maintain multiple versions of the same object and the objects are not overwritten. This feature is primarily used by developers who want a code repository or artifact repository; they can maintain multiple versions of the code and track those changes. The downside of versioning is that every upload, when you upload the same file again and again, with or without changes, takes its own space inside the bucket, so it will eventually shoot up your storage cost. Okay, that's the downside I discussed with you. Now, the storage cost is directly proportional to which storage class the data belongs to. We've discussed storage classes: Standard is the default and the costliest storage class. Now, suppose we use versioning. Let's say I'm a developer:
I work in a development team, and as part of that team it's mandatory for us to use versioning; but at the same time, our management has put strict budget restrictions on us, and we cannot let storage consumption go beyond a certain limit. Our storage expenses have to stay within budget. Lifecycle management ensures that you put your objects into a cycle, a lifecycle, whereby after a gap of some days they slowly and gradually transition to cheaper storage classes, and eventually the objects can even be deleted from your bucket permanently. So when you put your objects into a lifecycle, they are automatically moved to cheaper, lower-cost storage classes, and they can be removed permanently from the bucket. Now, when you apply lifecycle management, your object versions are categorized into two groups. One is the current version: current means the latest, the most recent upload. The versions uploaded prior to that one are the previous, or non-current, versions. So there are two categories: the latest, most recent version is current, and the versions before it are non-current. The lab we're going to do targets the non-current versions. I want to make sure that after a gap of some days these non-current versions are moved to a cheaper storage class, for example Standard-IA or One Zone-IA, and that after a further gap, say 150 or 180 days, they are removed from my bucket on a permanent basis. So I will transition the previous versions to cheaper storage classes, and then, after a gap of a few more days, have them removed, deleted from my bucket permanently.
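Here's a hedged boto3 sketch of the rule we're about to build in the console, using the lab's 30/90/120-day transitions for non-current versions; the 180-day permanent deletion is an illustrative assumption, since you choose that number yourself.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="rohan-intel-s3-bucket-source",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "test-lifecycle-rule",
            "Status": "Enabled",
            "Filter": {},  # empty filter = apply to all objects
            # Move previous (non-current) versions down the cost ladder.
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "STANDARD_IA"},
                {"NoncurrentDays": 90, "StorageClass": "ONEZONE_IA"},
                {"NoncurrentDays": 120, "StorageClass": "DEEP_ARCHIVE"},
            ],
            # Assumption: permanently delete non-current versions at 180 days.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 180},
            # Clean up stranded chunks from failed multipart uploads.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }]
    },
)
```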
Right? That's what you're going to be doing now in the console as well. I've already jotted down the requirements of this simple exercise. It's very simple; I can guarantee you can do it in very little time. It's not a configuration that requires a lot of thought or a lot of clicks. What you have to do is go back to your bucket, the same bucket on which you enabled static website hosting, and then go to Management. Under Management you'll see Lifecycle Rules. So please go to the bucket, click on Management, and look for Lifecycle Rules; they're right beneath the Management tab. Let me know once you've reached that place. Okay, once you're there, click Create Lifecycle Rule. You're creating a rule based on which your objects are put into a lifecycle, a simple process through which they can be transitioned to lower-cost storage classes and then, if you want, removed from the bucket. Enter the rule name; for example, name it "test-lifecycle-rule". Enter a name and let me know once that's done. You can put any name; there's no naming convention you have to follow, and you can use the same name as mine. Just say "done" once you are. Okay, now what's the next thing? Next, choose a rule
scope. Let me explain this. It says "Limit the scope of this rule using one or more filters"; that's the default. It's asking: do you want to limit the scope of this lifecycle rule? For example, if I enter a prefix or filter like ".png", the rule will only put PNG images into the lifecycle; it will be applied to PNG images only. Or I could put ".mp3" or ".avi", explicitly selecting that only files with those extensions, of that format, are targeted. I can limit the scope of the rule that way, but right now we won't. We'll choose the option "Apply to all objects in this bucket" and give our consent. By doing that you're saying: it doesn't matter whether it's an AVI video, an MP3 song, an MP4 video, a JPEG or PNG image, a PDF, a spreadsheet, a PPT, whatever the format or extension of the file is, I want the rule to apply to all the objects in this bucket. What's next? Now
you have to go to Lifecycle Rule Actions. Based on our problem statement, we need to target non-current versions; non-current means the versions before the current one, the previous versions. You'll see five checkboxes. You have to choose the second checkbox, "Move noncurrent versions of objects between storage classes", and the fourth checkbox, which says "Permanently delete noncurrent versions". Basically, tick whichever options have "noncurrent" in them: move non-current versions of objects between storage classes, and permanently delete non-current versions. So that's the second tick and the fourth tick. I'm saying I want to move the previous versions of the objects between different storage classes, and I also want them eventually removed permanently from my bucket. Please go ahead and check those two options; whichever option has "noncurrent" in it, check it, and I'll explain exactly why we did so. Based on these two actions, the rule will start counting days and determine how the transitions happen. The fifth option is for multipart uploads, which we're not using right now; it doesn't apply to either current or non-current versions. Multipart upload is a concept where a large file is broken into multiple chunks and uploaded to the S3 bucket. If that process stops partway, a few chunks of the file are left sitting in the bucket. So that option says: any incomplete multipart upload should be removed; if I try to upload something and the complete file never arrives, clean up the remnants. Right. Now, what's the next thing? Based on the requirements I
Now, what's next? Based on the requirements I mentioned: requirement number one, the non-current versions of the
objects should be transitioned to Standard-IA after a gap of 30 days. By default, Standard is selected for every
object, and it's the costliest class. Standard-IA, as we discussed, is used for infrequently accessed objects and is
a cheaper version of Standard. So under "Transition noncurrent versions of objects between storage classes," choose
Standard-IA as the storage class and put 30 in the days box. If you put 29, it throws an error; can you see it says
a minimum of 30 days is required before you can transition to Standard-IA? There's a minimum gap you have to give,
so we'll go with the minimum, 30 days. I'm saying my objects are stored as Standard by default, and after a gap of
30 days the previous, non-current versions should be changed to Standard-IA so I can reduce their cost. Go ahead and
do that. You don't have to touch the third box, "Number of newer versions to retain"; we don't need it here. Done?
Okay, that fulfills the first requirement: the non-current versions of the objects are transitioned to Standard-IA
after a gap of 30 days. Next, I need to move the non-current versions to One Zone-IA after a gap of 90 days.
Can you see the option that says "Add transition"? Click on Add transition and, in the drop-down menu, choose
One Zone-IA. One Zone-IA is again cheaper than Standard-IA: Standard is the costliest, Standard-IA is cheaper, and
One Zone-IA is cheaper still, so we keep stepping down the ladder to reduce the price further. Put 90 days as the
gap. Please go ahead and do that: click Add transition, choose the storage class One Zone-IA, and set the number of
days to 90. Type 'done' once you're finished; that's the second step done. Now, as a challenge, you have to do the
third one by yourself. I'll show you the solution afterwards, I'm not running away from it, but use the same method
and do it on your own first. You have to further move the data to the cheapest storage class of all, Glacier Deep
Archive, which is meant for data with an average retention period of 7 to 10 years. Glacier Deep Archive is used by
financial services, health sectors, and insurance companies when they have to retain data for years to come. It
takes 12 to 48 hours to retrieve the data, but it is the cheapest of all the storage classes. So I'm saying that
after a gap of 120 days, counting from day 0, the previous versions of the data should be transitioned to the
cheapest option, Glacier Deep Archive. Do it on your own. The solution: click on Add transition, choose Glacier
Deep Archive, and put 120 days. So on day 120, any data in the previous, non-current version category gets
transitioned to Glacier Deep Archive, the cheapest of all the storage classes.
Now, for Glacier Deep Archive there can be a cost involved for transitioning data of very small sizes, so you have
to check the box acknowledging the one-time lifecycle request cost. It barely applies to us, since we'll be removing
this rule afterwards, and the cost is negligible anyway; just check the option. So, to recap: objects start in
Standard; on day 30 the non-current versions move to the Standard-IA storage class; on day 90, One Zone-IA; on day
120, Glacier Deep Archive. We keep going further down and reducing the cost of our storage, especially for the data
that is non-current. And we've given consent that if we start transitioning very small objects, a one-time lifecycle
request cost could be imposed on us. Right? Done.
What's next? Now we want these non-current versions of the objects to be removed from the bucket while retaining the
five latest ones. Under "Permanently delete noncurrent versions," put 150 in the number of days, and in the box that
says "Number of newer versions to retain," put 5. You may ask what that number five below means; first things first,
please configure it, then look at the chart that appears on your page under "Review transition and expiration
actions." Based on this chart: on day 0 an object becomes non-current; on day 30 it's moved to Standard-IA; on day
90 it's moved to One Zone-IA; on day 120 it's moved to Glacier Deep Archive; and on day 150 the five newest
non-current versions are retained while all other non-current versions are permanently deleted. You may also notice
the transitions say "0 newest noncurrent versions are retained": you have the option to hold back a few of the
newest non-current versions from each transition, for example keeping the latest ones in Standard on day 30, but we
didn't go for that option here. Once your chart matches, go ahead and click on Create rule, and the rule gets
created. Keep in mind this is automatic transitioning only: the rule moves your files between storage classes on
schedule, nothing more.
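If you'd rather script this than click through the console, the same rule can be expressed in one boto3 call. A sketch using the demo's numbers (transitions at 30/90/120 days, expiry at 150 retaining the 5 newest non-current versions); the bucket name is a placeholder:

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")
BUCKET = "my-bucket-source"  # placeholder name

# NOTE: this call replaces the bucket's entire lifecycle configuration.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "noncurrent-version-cost-reduction",
            "Status": "Enabled",
            "Filter": {},  # apply to all objects in the bucket
            # Step the previous (non-current) versions down the
            # storage-class ladder as they age, as configured in the demo.
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "STANDARD_IA"},
                {"NoncurrentDays": 90, "StorageClass": "ONEZONE_IA"},
                {"NoncurrentDays": 120, "StorageClass": "DEEP_ARCHIVE"},
            ],
            # On day 150, keep the 5 newest non-current versions and
            # permanently delete the rest.
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 150,
                "NewerNoncurrentVersions": 5,
            },
        }]
    },
)

# And when the demo is over, the whole rule can be dropped again:
# s3.delete_bucket_lifecycle(Bucket=BUCKET)
```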
Now, you don't need this rule anymore. Choose the rule and there's an option called Delete; click on Delete, and
when it asks you to confirm deleting the lifecycle rule, click Delete again. Just remove it; you don't need it now.
Please go ahead and do that. One more versioning tip: suppose you want to go back to, say, the day-three version of
a file and retain that data.
For example, if I go to my bucket right now and open the versions of a file, every version carries a timestamp.
Based on the timestamps, whichever day or date you think your file was in a perfect state, you can go to the
versions of that file, click on that specific version ID, and click Download. That's how you recover the data from
a previous version.
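Scripted, that recovery looks something like the sketch below: list the versions of a key, pick one by its timestamp, and download it by version ID. The bucket name, key, and choice of version are all hypothetical:

```python
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-bucket-source", "report.pdf"  # placeholders

# List every stored version of the object, with its timestamp.
versions = s3.list_object_versions(Bucket=BUCKET, Prefix=KEY)["Versions"]
for v in versions:
    print(v["VersionId"], v["LastModified"], v["IsLatest"])

# Pick whichever version's timestamp matches the "perfect state" day
# (here, arbitrarily, the oldest) and download it by its version ID.
chosen = versions[-1]
s3.download_file(
    BUCKET, KEY, "restored-" + KEY,
    ExtraArgs={"VersionId": chosen["VersionId"]},
)
```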
Next up: the replication lab. What is replication? Replication means you enable a link between two buckets so that
if you put something inside the first (source) bucket, a copy of that data is generated and saved in the destination
bucket. The prerequisite for this exercise is that you have two buckets in two separate regions. So first things
first, go to the Buckets dashboard and initiate the process to create a bucket. For the name, copy and paste the
same bucket name as before, but replace 'source' with 'dest', which stands for destination. And with respect to the
previous bucket, this bucket should be in a different region. For example, my first bucket, the source bucket, is in
North Virginia; the destination bucket I'm going to create in Mumbai. So: put 'dest' in the bucket name in place of
'source', and choose a separate region this time, because we're going for cross-region replication.
The buckets are in separate regions, and we want to replicate the content between them. In this case you don't have
to touch Object Ownership or Block Public Access, because we're not using this bucket for static website hosting.
But you do have to go straight down and enable bucket versioning. As a prerequisite for replication, both buckets
must have versioning enabled: the source bucket already has it, and the destination bucket has to have it too. If
you're enabling replication between two buckets, that's the main prerequisite. So in this second bucket you're
creating right now: make sure the name has 'dest' in place of 'source', choose a different region, leave Object
Ownership and Block Public Access as they are, and enable versioning. Once you've made these three changes, go
straight ahead, click on Create bucket, and let me know once it's done.
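For reference, those same console steps come down to two boto3 calls. The bucket name is a placeholder; the Mumbai region is the one chosen in the demo:

```python
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")  # Mumbai

DEST = "my-bucket-dest"  # placeholder; 'source' replaced with 'dest'

# Create the destination bucket in a different region (Mumbai).
s3.create_bucket(
    Bucket=DEST,
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)

# Prerequisite for replication: versioning enabled on BOTH buckets.
s3.put_bucket_versioning(
    Bucket=DEST,
    VersioningConfiguration={"Status": "Enabled"},
)
```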
Okay. Now, why did I ask you to put distinguishable names on these buckets? So that you can easily differentiate
them. Our intention is that if I put any data in the source bucket, a copy of that data is produced automatically
and saved in the destination bucket. It is unidirectional in this case, from source to destination. And remember,
versioning has to be enabled on the source bucket as well.
Fine. So you can see there are two buckets right now: one with 'source' in its name, the other with 'dest', which
stands for destination. Click on the source bucket and go to Management, the same tab where you configured the
lifecycle rules. Once you're there, under the lifecycle rules there's a section called Replication rules, currently
set to zero; no replication rule has been created yet. Click on Create replication rule and let me know what's on
that page. Once you're on the replication rule configuration, put in a rule name, for example 'test-replication-rule'.
It's a simple label with no naming rules applied; even if you put in your first name, it will accept it. Enter a
rule name and let me know once you're done with this.
Now, the status is Enabled, and it shows you the source bucket details: the source bucket name and the source
region, meaning the replication will start from there. Next it asks whether you want to limit the scope of the
replication. For example, if I applied a prefix filter like 'jpg', only the JPG images would be replicated. But in
this case, under the rule scope, make sure you choose "Apply to all objects in the bucket," and type 'done' once you
have. Apply to all objects means the extension of the file doesn't matter: it can be an Excel sheet, a PDF, a
document, a PPT, a PNG or JPEG image, whatever the format is, this rule applies to every object. Now it will ask
you to choose the destination bucket.
It gives you two choices: a bucket in this account, or a bucket in another account. This isn't applicable to us, but
just as an FYI: you can set up replication between two buckets in two separate accounts as well; in that case you
have to specify the 12-digit account ID and the bucket name in that account. In our case, choose "a bucket in this
account." Then browse for your destination bucket: click Browse S3, look for the bucket whose name contains 'dest',
and click Choose path. So this is my source bucket, and under the destination box is my destination bucket in the
same account, but deployed in a different region. What's next?
Now, in order to initiate this communication, an IAM role is needed. Go to the IAM role section; by default "Choose
from existing IAM roles" is selected, so switch it to "Create new role." This will automatically create a new role
at the back end with all the necessary permissions so that the source bucket's contents can be replicated to the
destination bucket. Please go ahead and do that. This is a bucket-to-bucket communication, with the source bucket
accessing the destination bucket, so we attach this role to grant enough permissions for that communication to
happen.
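Behind the scenes, that auto-created role is one the S3 service can assume, with read permissions on the source bucket and replicate permissions on the destination. A hand-rolled sketch of such a role; every name here is a placeholder, and in the demo the console does all of this for you:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: let the S3 service assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="s3-crr-demo-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions: read versions from the source, replicate to the destination.
permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetReplicationConfiguration",
                "s3:ListBucket",
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging",
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket-source",
                "arn:aws:s3:::my-bucket-source/*",
            ],
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags",
            ],
            "Resource": "arn:aws:s3:::my-bucket-dest/*",
        },
    ],
}

iam.put_role_policy(
    RoleName="s3-crr-demo-role",
    PolicyName="s3-crr-demo-policy",
    PolicyDocument=json.dumps(permissions),
)
```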
What's next? You don't have to set up encryption as of now. Then comes the destination storage class. This
destination bucket is my secondary backup, and I want to bring down its price: by default the replicas would sit in
Standard, like the source data, but for the destination I want a cheaper version. So under Destination storage
class, tick "Change the storage class for the replicated objects" and choose One Zone-IA. One Zone-IA is a cheaper
option; note it has a minimum storage duration of 30 days, so each replica is billed for at least 30 days in One
Zone-IA. One Zone-IA means your data is put in just one availability zone, and if that availability zone goes down,
that copy of the data is lost. But that's okay here: this is our secondary backup, and the source bucket still has
the content. For duplicate backups we choose One Zone-IA, which is quite cheap, and it brings down the cost of our
storage. Then, what's next? You don't have to enable any of the additional replication options; just click on Save.
Let me know once you've done that. Now, once you click on Save, you get a prompt: do you want to replicate the
existing objects, or start from now onwards? You have to choose "No, do not replicate existing objects." From now
on, any new object I put inside my source bucket will show up in the destination bucket. I don't want to replicate
the previous, existing objects, because for those S3 runs a batch operations job, and that takes a lot of time. So
choose no and click on Submit.
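Scripted, the whole configuration is a single API call on the source bucket. A sketch with placeholder names and a placeholder role ARN; note the API only ever replicates new uploads, and replicating existing objects would take a separate S3 Batch Replication job, which is exactly the prompt we said no to:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # source bucket's region

s3.put_bucket_replication(
    Bucket="my-bucket-source",  # the rule lives on the source bucket
    ReplicationConfiguration={
        # Placeholder ARN for the role the console created above.
        "Role": "arn:aws:iam::123456789012:role/s3-crr-demo-role",
        "Rules": [{
            "ID": "test-replication-rule",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {},  # apply to all objects in the bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::my-bucket-dest",
                "StorageClass": "ONEZONE_IA",  # cheaper class for replicas
            },
        }],
    },
)

# Optional cleanup once you're done with the demo:
# s3.delete_bucket_replication(Bucket="my-bucket-source")
```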
Now we're going to test it. To test it, click on your source bucket and try to upload something: any file from your
computer, a simple text file, a Word document, a PDF, whatever; just choose a small file. Go to the source bucket,
open Objects, click Upload, then Add files, choose any file from your computer, and click Upload. Go ahead and
upload a simple file to your source bucket; it can be an image or a plain text file. And double-check which bucket
you're in: the destination bucket has 'dest' in its name and the source has 'source', so make sure you're not
mistakenly uploading to the destination bucket.
Once you upload the data to the source bucket, go to your bucket list and click on your destination bucket, and
you'll see that within a few seconds the same file shows up, but this time its storage class is One Zone-IA.
Businesses use this strategy for disaster recovery and for data migration as well: if you lose either of these two
buckets or its contents, you still have one bucket to retain the data from. There's a Refresh button; hit refresh a
few times and it will show up. If the file is a bit larger, it can take some time, but it will appear. That's what
the replication is. Simple. Fine, so you've understood the process of cross-region replication.
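If you'd rather verify it programmatically than hit Refresh, every object in a replicated source bucket carries a ReplicationStatus. A small polling sketch, with placeholder buckets, regions, key, and file name:

```python
import time
import boto3

# Separate clients because the buckets live in different regions
# (placeholder names; regions are the ones from the demo).
src = boto3.client("s3", region_name="us-east-1")    # North Virginia
dest = boto3.client("s3", region_name="ap-south-1")  # Mumbai

SRC_BUCKET, DEST_BUCKET, KEY = "my-bucket-source", "my-bucket-dest", "hello.txt"

src.upload_file("hello.txt", SRC_BUCKET, KEY)

# The source copy's ReplicationStatus moves from PENDING to complete.
while True:
    status = src.head_object(Bucket=SRC_BUCKET, Key=KEY).get("ReplicationStatus")
    print("source replication status:", status)
    if status in ("COMPLETE", "COMPLETED"):  # API enum vs. console wording
        break
    time.sleep(5)

# The replica should now exist in the destination, stored as ONEZONE_IA.
replica = dest.head_object(Bucket=DEST_BUCKET, Key=KEY)
print("replica storage class:", replica.get("StorageClass"))
```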
Note that this is unidirectional, from source to destination, not bidirectional from destination back to source. If
you want bidirectional replication, you have to enable another replication configuration on the destination bucket,
choosing the source bucket as its target. So you can have bidirectional replication too, but what we built is
one-directional: whatever you put in the source will show up in the destination. Okay, fine. Now, optionally, you
can remove this replication rule, since you don't need it anymore.
Go to your source bucket, go to Management, choose the replication rule, and click on Delete. There's no cost in
keeping it, but if you want, you can delete the replication rule. I've also jotted down the process to delete the
buckets themselves in the document, step number six; the buckets are free of cost, though, so if you don't delete
them, there's no charge.
One point you need to understand: replication is only applicable to the uploading process, not the deletion process.
If you delete contents in the source bucket, by default they will still show up in the destination bucket; that's
part of the purpose of replication. If by mistake you remove some files from the source bucket, and those files are
very important to you, you can still recover the same files from the destination bucket. Say you accidentally delete
10 important files from the source bucket; you can recover those same files from the destination bucket. So this
replication applies to the uploading process only.
Just a quick info, guys: Intellipaat brings you an executive postgraduate certification in cloud computing and
DevOps in collaboration with iHub DivyaSampark, IIT Roorkee. Through this program, you will gain in-demand skills
like AWS, DevOps, Kubernetes, Terraform, Azure, and even cutting-edge topics like generative AI for cloud computing.
This 9-month online boot camp features 100+ live sessions from IIT faculty and top industry mentors, 50+ real-world
projects, and a two-day campus immersion at IIT Roorkee. You also get guaranteed placement assistance with three job
interviews after entering the placement pool. And that's not all: the program offers a Microsoft certification, a
free exam voucher, and even a chance to pitch your startup idea for incubation support of up to 50 lakh rupees from
iHub DivyaSampark. If you are serious about building a future in cloud and DevOps, visit the course page linked in
the description and take your first step toward an exciting career in cloud technology. [Music]