Introduction to Cloud Computing
Cloud computing delivers scalable services such as storage, computing power, and networking over the internet. This course provides a thorough introduction covering key topics in AWS and Azure, including cloud security, deployment models, service models (IaaS, PaaS, SaaS), and cloud architecture principles. For a deeper foundational understanding, consider exploring Understanding Cloud Computing: A Comprehensive Guide to AWS and S3.
AWS Overview
- History and Growth: Launched in 2002 with rapid expansion; offers over 100 cloud services.
- Key Services: EC2 (compute), S3 (storage), Lambda (serverless computing), Elastic Beanstalk (application deployment), Route 53 (DNS), and more. Learn more about these core services in Top AWS Services Explained for Beginners: EC2, S3, IAM & More.
- Security: Robust data protection including IAM, encryption, and compliance.
- Usage: Adopted by companies like Netflix, Airbnb, and Adobe for scalability and performance.
Azure Overview
- Launch and Market Adoption: Launched in 2010, supports 80% of Fortune 500 companies.
- Data Centers: Extensive global presence with 42+ data centers.
- Services: Virtual Machines, Azure Functions (serverless), networking (CDN, ExpressRoute), storage solutions, databases, AI & ML services.
- Security & Management: Comprehensive identity management, security center, key vault, and monitoring tools. For an in-depth Azure developer perspective, see Complete Microsoft Azure Developer Associate (AZ-204) Study Guide.
Cloud Security Essentials
- Emphasizes encryption, access control with MFA, monitoring, secure API practices.
- Addresses threats like data breaches, denial of service attacks, insider threats.
- Best practices include patch management, employee training, network segmentation, and continuous auditing.
Core AWS Services in Detail
- S3: Object storage with lifecycle management, bucket policies, versioning, encryption, cross-region replication, and transfer acceleration (see the sketch just after this list).
- IAM: Management of users, groups, roles, and policies, with MFA and federated access.
- ECS: Container management service orchestrating Docker containers with Fargate and EC2 modes.
- Elastic Beanstalk: Platform to deploy and manage applications easily with auto-scaling and load balancing.
- Route 53: Scalable DNS service supporting routing policies like failover, geolocation, latency-based, and weighted routing.
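As a quick, hedged illustration of two of the S3 features above, the boto3 sketch below enables versioning and adds a simple lifecycle rule; the bucket name, prefix, and 90-day Glacier transition are placeholder assumptions, not values from the course.

```python
import boto3

# Sketch only: assumes AWS credentials are configured and that the
# bucket "example-course-bucket" already exists in your account.
s3 = boto3.client("s3")
bucket = "example-course-bucket"  # hypothetical bucket name

# Turn on versioning so overwritten or deleted objects can be recovered.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Lifecycle rule: move objects under logs/ to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```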
AWS Advanced Services
- SageMaker: Managed machine learning platform enabling building, training, tuning, and deploying ML models with integrated tools.
- CloudFront: Content Delivery Network providing secure, low-latency content delivery globally.
- Auto Scaling: Automatic infrastructure scaling based on application demand, integrating with load balancers (see the sketch after this list).
- Redshift: Fully managed data warehouse with fast, scalable analytics.
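To make the Auto Scaling idea concrete, here is a hedged boto3 sketch that attaches a target-tracking scaling policy to a hypothetical Auto Scaling group; the group name, policy name, and 50% CPU target are illustrative assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Sketch only: assumes an Auto Scaling group named "web-asg" already exists.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",      # hypothetical group name
    PolicyName="keep-cpu-near-50",       # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Add or remove instances so average CPU stays around 50%.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```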
Learning and Career Advancement
- Comprehensive courses like Post-Graduate Cloud Computing Program, Cloud Solutions Architect Masters, and AWS/Azure certifications.
- Hands-on projects spanning website hosting, expense tracking apps, IoT analytics, video streaming services, and machine learning deployments.
- Preparation for interviews via practical scenario questions and answers across core AWS and Azure topics. For targeted exam readiness, check out the Ultimate Guide to Azure DevOps Certification Course: Pass the Exam with Confidence.
Key Takeaways
- Cloud computing offers flexible, scalable, and cost-efficient solutions vital for modern business innovation.
- A solid grasp of AWS and Azure services, architectures, and security fundamentals is essential.
- Hands-on projects and certifications are crucial for proving expertise and advancing careers.
- Continual learning and adaptation to new tools and services enable long-term success in cloud careers.
This summary equips learners and professionals with foundational knowledge and practical insights to excel in cloud computing roles, focusing on the most widely used cloud platforms and services.
Hello and welcome to this cloud computing full course by Simplilearn. Did you know that cloud engineers are among the most in-demand professionals today? As more companies shift online, cloud computing is rapidly expanding, making skilled cloud professionals highly valuable. With excellent salaries and career growth opportunities, it's a great time to enter this field. But what exactly is cloud computing? In simple terms, it is the delivery of services like storage, computing power, and networking over the internet, known as the cloud. This technology allows businesses to scale quickly and efficiently. So in this course, we will walk you through essential topics such as an introduction to cloud computing and the steps to become a cloud engineer. We will also cover cloud security, compare AWS and Azure, and explore every topic you need to know in order to master cloud computing and AWS. We will also discuss project ideas and certifications to boost your cloud computing career. So before we begin, if you are interested in getting certified in cloud, check out Simplilearn's Cloud Architect certification program. Build expertise in AWS, Microsoft Azure, and GCP with our Cloud Architect certification course. Plus, we have included an exam voucher for any one Azure course, so you can get certified hassle-free. Gain access to official AWS authorized self-learning content and master the ins and outs of cloud architecture principles. So don't forget to check out the course link in the description box below and the pinned comment. So without any further ado,
let's get started. Imagine you're the owner of a small software development firm and you want to scale your business
up. However, a small team size, the unpredictability of demand and limited resources are roadblocks for this
expansion. That's when you hear about cloud computing. But before investing money into it, you decide to draw up the
differences between on-premises and cloud-based computing to make a better decision. When it comes to scalability, you pay more for an on-premises setup and get fewer options, too. Once you've scaled up, it is difficult to scale down
and often leads to heavy losses in terms of infrastructure and maintenance costs. Cloud computing on the other hand allows
you to pay only for how much you use with much easier and faster provisions for scaling up or down. Next, let's talk
about server storage. On-premises systems need a lot of space for their servers, not to mention the power and
maintenance hassles that come with them. On the other hand, cloud computing solutions are offered by cloud service
providers who manage and maintain the servers, saving you both money and space. Then we have data security.
On-premises systems offer less data security due to a complicated combination of physical and traditional
IT security measures. Whereas cloud computing systems offer much better security and let you avoid having to
constantly monitor and manage security protocols. In the event that a data loss does occur, the chances of data recovery with on-premises setups are very small. In contrast, cloud computing systems have robust disaster recovery measures in place to ensure faster and easier data recovery. Finally, we have maintenance. On-premises systems also require
additional teams for hardware and software maintenance, loading up the costs by a considerable degree. Cloud
computing systems on the other hand are maintained by the cloud service providers, reducing your costs and
resource allocation substantially. So now thinking that cloud computing is a better option, you decide to take a
closer look at what exactly cloud computing is. Cloud computing refers to the delivery of on-demand computing services over the internet on a pay-as-you-go basis. In simpler words, rather than managing files and services on a local storage device, you'll be doing the same over the internet in a cost-efficient manner. Cloud computing
has two types of models, deployment model and service model. There are three types of deployment models: public,
private, and hybrid cloud. Imagine you're traveling to work. You've got three options to choose from. One, you
have buses, which represent public clouds. In this case, the cloud infrastructure is available to the
public over the internet. These are owned by cloud service providers. Two, then you have the option of using your
own car. This represents the private cloud. With the private cloud, the cloud infrastructure is exclusively operated
by a single organization. This can be managed by the organization or a third party. And finally, you have the option
to hail a cab. This represents the hybrid cloud. A hybrid cloud is a combination of the functionalities of
both public and private clouds. Next, let's have a look at the service models. There are three major service models
available: IaaS, PaaS, and SaaS. Compared to on-premises models, where you'll need to manage and maintain every component including applications, data, virtualization, and middleware, cloud computing service models are hassle-free. IaaS refers to Infrastructure as a Service. It is a cloud service model where users get access to basic computing infrastructure. It is commonly used by IT administrators. If your organization requires resources like storage or virtual machines, IaaS is the model for you. You only have to manage the data, runtime, middleware, applications, and the OS, while the rest is handled by the cloud providers. Next, we have PaaS. PaaS or Platform as a
service provides cloud platforms and runtime environments for developing, testing, and managing applications. This
service model enables users to deploy applications without the need to acquire, manage, and maintain the
related architecture. If your organization is in need of a platform for creating software applications, PaaS is the model for you. PaaS only requires you to handle the applications and the data. The rest of the components, like
runtime, middleware, operating systems, servers, storage, and others are handled by the cloud service providers. And
finally, we have SaaS. SaaS, or Software as a Service, involves cloud services for hosting and managing your software
applications. Software and hardware requirements are satisfied by the vendors. So you don't have to manage any
of those aspects of the solution. If you'd rather not worry about the hassles of owning any IT equipment, the SaaS model would be the one to go with. With SaaS, the cloud service provider handles all components of the solution required by the organization. Time for a quiz now. In which of the following service models are you, as the business, responsible for the applications, data, and operating system? 1. IaaS, 2. PaaS, 3. SaaS, 4. IaaS and PaaS. Let us
know your answer in the comment section below for a chance to win an Amazon voucher. Meet Rob. He runs an online
shopping portal. The portal started with a modest number of users, but has recently been seeing a surge in the
number of visitors. On Black Friday and other holidays, the portal saw so many visitors that the servers were unable to
handle the traffic and crashed. Is there a way to improve performance without having to invest in a new server?
wondered Rob. A way to upscale or downscale capacity depending on the number of users visiting the website at
any given point. Well, there is. Amazon Web Services, one of the leaders in the cloud computing market. Before we see
how AWS can solve Rob's problem, let's have a look at how AWS reached the position it is at now. AWS was first
introduced in 2002 as a means to provide tools and services to developers to incorporate features of Amazon.com to
their website. In 2006, its first cloud services offering was introduced. In 2016, AWS surpassed its 10 billion
revenue target. And now AWS offers more than 100 cloud services that span a wide range of domains. Thanks to this, the
AWS cloud service platform is now used by more than 45% of the global market. Now let's talk about what AWS is. AWS, or Amazon Web Services, is a secure cloud computing platform that provides computing power, database, networking,
content storage, and much more. The platform also works with a pay as you go pricing model, which means you only pay
for how much of the services offered by AWS you use. Some of the other advantages of AWS are security. AWS
provides a secure and durable platform that offers an end-to-end privacy and security experience. You can benefit
from the infrastructure management practices born from Amazon's years of experience. Flexible. It allows users to
select the OS, language, database, and other services. Easy to use. Users can host applications quickly and securely.
Scalable. Depending on user requirements, applications can be scaled up or down. AWS provides a wide range of
services across various domains. What if Rob wanted to create an application for his online portal? AWS provides compute
services that can support the app development process from start to finish. From developing, deploying,
running to scaling the application up or down based on the requirements. The popular services include EC2, AWS
Lambda, Amazon Lightsail, and Elastic Beanstalk. For storing website data, Rob could use AWS storage services that
would enable him to store, access, govern, and analyze data to ensure that costs are reduced, agility is improved,
and innovation accelerated. Popular services within this domain include Amazon S3, EBS, S3 Glacier, and
Elastic File System. Rob can also store the user data in a database with AWS services, which he can then optimize and manage. Popular services in this domain include Amazon RDS, DynamoDB, and Redshift. If Rob's business took off and
he wanted to separate his cloud infrastructure or scale up his work requests and much more, he would be able
to do so with the networking services provided by AWS. Some of the popular networking services include Amazon VPC,
Amazon Route 53, and Elastic Load Balancing. Other domains that AWS provides services in are analytics,
blockchain, containers, machine learning, internet of things and so on. And there you go. That's AWS for you in
a nutshell. Now, before we're done, let's have a look at a quiz. Which of these services are incorrectly matched?
We'll be pinning the question, along with the four options shown on screen, in the comment section. Comment below with your
answer and stand a chance to win an Amazon voucher. Several companies around the world have found great success with
AWS. Companies like Netflix, Twitch, LinkedIn, Facebook, and BBC have taken advantage of the services offered by AWS
to improve their business efficiency. And thanks to their widespread usage, AWS professionals are in high demand.
They're highly paid and can earn upwards of $127,000 per annum. Once you're AWS
certified, you could be one of them, too. Hello everyone, welcome back to the channel. Today, I want to take you on a
journey that could transform your career, much like how cloud computing has transformed some of the world's most
innovative companies. Imagine Netflix, once a DVD rental service transforming into a streaming giant capable of
delivering high-definition content to millions of users simultaneously. Or consider Airbnb, which has used cloud
computing to manage listings and bookings for millions of properties around the globe, providing a seamless
experience for hosts and travelers alike. Both Netflix and Airbnb utilized cloud technologies to efficiently scale their
businesses, manage large volumes of data and ensure high availability and performance. So by transitioning from
traditional costly and inflexible on premises infrastructure to scalable cloud environments, they significantly
reduced costs, accelerated innovation, and improved user experience in real time. Now you might think that working on such impactful projects requires years of experience and advanced degrees. But here's the good news, guys. With the
right approach, you can start a career in cloud engineering in just 3 months. even if you are starting from scratch.
In this video, I will outline a clear, actionable plan that uses entirely free online resources to get you there. We
will cover the essential skills you need to learn, the certifications that can help validate your knowledge, and
practical projects that will make your resume stand out. So, if you're ready to dive into the world of cloud
computing and perhaps one day contribute to the next big thing in tech, stay tuned, guys. So, let's get started. And
the number one point you should start with is starting your cloud journey. So transitioning into cloud engineering may
seem daunting especially if you are new to this field. The first step is understanding why this is a valuable
career move. The cloud industry is booming with a projected market value of $800 billion by 2025 and the potential
to grow even further. This growth means a constant demand for skilled professionals making it an excellent
time to enter the field. Now that we understand the industry's potential, the next question is where should you start?
So you should choose a cloud provider. So choosing a cloud provider is a critical decision as it shapes your
learning path and future job opportunities. The three major players are AWS, Azure, and Google Cloud Platform (GCP). Starting with AWS: AWS, that is, Amazon Web Services, is often recommended for beginners because it has
the largest market share and a wide range of services which translates into more job opportunities. Now coming to
Azure, that is another strong option, especially if you're targeting jobs in enterprises that use Microsoft technologies. Now coming to GCP, that is, Google Cloud Platform, it is gaining popularity and offers excellent features, especially in data analytics and machine learning. For beginners, AWS is a popular choice due to its widespread use and
extensive documentation. However, it's important to research the demand in your local job market and consider your own
interest when making a decision. And with the cloud provider chosen, the next step is to build a strong foundation in
the fundamental technologies that underpin cloud computing. So now before diving into cloud specific services,
it's essential to understand the foundational technologies that cloud computing relies on. These include
number one comes networking. So understanding how data moves across networks is crucial for setting up and
managing cloud infrastructure. Then comes operating systems. Familiarity with operating systems particularly
Linux, is essential as most cloud environments run on Linux servers. Then comes virtualization. So this is the
process of creating virtual instances of physical hardware. That's a core concept in cloud computing. And then comes
databases. So knowledge of databases, both relational and non-relational, is critical for managing data in the cloud.
So with these foundational skills in place, you are now ready to explore cloud-specific learning paths. So let's
start with certifications. So certifications can validate your knowledge and make you stand out in the
job market. For AWS, starting with the AWS Cloud Practitioner certification is advisable. This certification provides a
broad overview of cloud concepts and AWS services. It covers key areas such as compute services, storage options,
security measures, networking capabilities, and billing and pricing structures. Now coming back, while
certifications are valuable, they need to be complemented with practical hands-on experience to truly demonstrate
your skills. Here comes building projects or hands-on practice. So building projects is the most effective
way to apply what you have learned and to demonstrate your abilities to potential employers. So here are a few
beginner-friendly projects to consider. Number one is setting up virtual machines. So start by launching an EC2
instance on AWS. Learn about the different instance types, configurations and the basics of server management.
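As a rough, hedged starting point for that first project, the sketch below launches a single EC2 instance with boto3; the AMI ID, key pair, and region are placeholder assumptions you would replace with values from your own account.

```python
import boto3

# Sketch only: assumes credentials are configured and that the AMI ID,
# key pair, and region below are replaced with real values.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Amazon Linux AMI
    InstanceType="t2.micro",          # free-tier-eligible size
    KeyName="my-keypair",             # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)
```

Terminating the instance afterwards (for example with ec2.terminate_instances) helps keep the experiment inside the free tier.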
Then comes the next project that is cloud storage systems. So experiment with services like S3 for object storage
and RDS for relational databases. Document the use cases and differences between these services. Then deploy a
web application. Host a static website using S3 and CloudFront, which will teach you about web hosting, content delivery, and the basics of DNS management with Route 53. Initially, you can use the AWS console for these tasks, but as you progress, try implementing these projects using infrastructure as code tools like Terraform. This approach not only deepens your understanding but also aligns with industry best practices. In addition to practical projects, having
some coding knowledge can greatly enhance your capabilities as a cloud engineer. So now we'll see how you can
learn to code. While not always mandatory, coding skills can significantly enhance your effectiveness
as a cloud engineer. Languages like Python and Bash are particularly useful for scripting and automation. Even a
basic understanding can help with tasks such as writing scripts or server automation, managing cloud services or
resources programmatically, and implementing infrastructure as code. For those new to coding, check out Simplilearn's videos on YouTube, which offer excellent starting points. Coding skills not only make you more versatile, but
also open up opportunities to specialize in areas like DevOps or cloud-native development. And once you have built
your skills and some projects, it's time to start with the job hunting process. That is building your profile. Creating
a strong online presence is crucial when job hunting. Your LinkedIn profile should clearly reflect your new skills,
certifications, and projects. So here are some tips. Number one is to optimize your LinkedIn profile. That is, include a professional photo, an engaging summary, and detailed descriptions of your projects. Then comes networking actively.
Connect with professionals in the field. Join cloud computing groups and participate in discussions. And then
comes apply strategically. Tailor your resume for each job application, highlighting the skills and projects
that align with the job description. Applying for jobs can be a numbers game, so be persistent. It's also helpful to
reach out to recruiters or hiring managers directly to express your interest in the role. As you start to
gain experience in your first cloud role, consider specializing in a niche area to advance your career. And then
comes specializing and continuous learning. So specializing in a particular area of cloud computing can
make you more valuable and increase your earning potential. Possible specializations include DevOps, which focuses on automation, continuous integration, and continuous deployment practices. Then comes serverless computing: work with Functions as a Service (FaaS) and other serverless architectures. And then comes security:
specialize in cloud security to protect data and infrastructure. The cloud industry is dynamic with new tools and
technologies emerging regularly. So continuous learning is key. So stay updated through online courses, webinars
and industry news. Finally, remember that the journey into cloud engineering is continuous and ever evolving. So we
talk about resources. So embarking on a career in cloud engineering is challenging but highly rewarding.
Utilize free resources like YouTube tutorials, community forums and documentation to guide your learning.
Before cloud computing existed, if we needed any IT server or application, let's say a basic web server, it did not come easy. Now, here is an owner of a business, and I know you would have guessed it already that he's running a successful business by looking at the hot and freshly brewed coffee on his desk and lots and lots of paperwork to review and approve. Now he had a smart, not only smart-looking but a really smart, worker in his office called Mark, and on one
fine day he called Mark and said that he would like to do business online. In other words, he would like to take his
business online and for that he needed his own website as the first thing. And Mark puts all his knowledge together and
comes up with this requirement that his boss would need lots of servers, databases, and software to get his business online, which means a lot of investment. And Mark also adds that his boss will need to invest in acquiring technical expertise to manage the hardware and software that they will be purchasing, and also to monitor the infrastructure. And after hearing all this, his boss was close to dropping his plan to go online. But before he made a decision, he chose to check if there are any alternatives where he doesn't have to spend a lot of money and doesn't have to spend on acquiring technical expertise. Now that's when Mark opened this discussion with his boss and explained to him about cloud computing, the same thing that I'm going to explain to you in some time now about
what is cloud computing. What is cloud computing? Cloud computing is the use of a network of remote servers hosted on
the internet to store, manage and process data rather than having all that locally and using local server for that.
Cloud computing is also storing our data on the internet and accessing our data from anywhere over the internet. And the companies that offer those services are called cloud providers. Cloud computing
is also being able to deploy and manage our applications, services and network throughout the globe and manage them
through the web management or configuration portal. In other words, cloud computing service providers give
us the ability to manage our applications and services through a global network or internet. Example of
such providers are Amazon Web Services and Microsoft Azure. Now that we know what cloud computing is, let's
talk about the benefits of cloud computing. Now I need to tell you, these benefits are what is driving cloud adoption like anything in recent days. If I want an IT resource or a service now, with cloud it's available
for me almost instantaneously and it's ready for production almost the same time. Now this reduces the go live date
and the product and the service hit the market almost instantaneously compared to the legacy environment and because of
this the companies have started to generate revenue almost the next day if not the same day. Planning and buying
the right size hardware has always been a challenge in legacy environment. And if you're not careful when doing this,
we might need to live with hardware that's undersized for the rest of our lives. With cloud, we do not buy any
hardware. But we use the hardware and pay for the time we use it. If that hardware does not fit our requirement,
release it and start using a better configuration and pay only for the time you use that new and better
configuration. In legacy environments, forecasting demand is an full-time job. But with cloud, you can let the
monitoring and automation tool to work for you and to rapidly scale up and down the resources based on the need of that
R. Not only that, the resources, services, data can be accessed from anywhere as long as we are connected to
the internet. And even there are tools and techniques now available which will let you to work offline and will sync
whenever the internet is available. Making sure the data is stored in durable storage and in a secure fashion
is the talk of the business and cloud answers that million-doll question. With cloud the data can be stored in an
highly durable storage and replicated to multiple regions if you want and uh the data that we store is encrypted and
secured in a fashion that's beyond what we can imagine in local data centers. Now let's bleed into the discussion
about the types of cloud computing. Lately, there are multiple ways to categorize cloud computing because it's ever growing, and now we have more categories. Out of all these, six sort of stand out: categorizing cloud based on deployments and categorizing cloud based on services. Under deployments, we categorize them based on how they have been implemented, you know, is it private, is it public, or is it hybrid; and under services, we categorize them based on the service they provide. Is it infrastructure as a service, or is it platform as a service, or is it software
as a service? Let's look at them one by one. Let's talk about the different types of cloud based on the deployment
models. First, in public cloud, everything is stored and accessed in and through the internet, and any internet users with proper permissions can be given access to some of the applications and resources. And in public cloud, we literally own nothing; be it the hardware or software, everything is managed by the provider. AWS, Azure, and Google are some examples of public cloud. Private cloud, on the other hand: with private cloud, the infrastructure is exclusively for a single organization. The organization can choose to run its own cloud locally or choose to outsource it to a public cloud provider as a managed service, and when this is done, the infrastructure will be maintained on a private network. Some examples are VMware Cloud, and some of the AWS products are very good examples of private cloud. Hybrid cloud has taken things to a whole new level.
With hybrid cloud, we get the benefit of both public and private cloud. Organizations will choose to keep some
of their applications locally and some of the applications will be present in the cloud. One good example is NASA. It
uses hybrid cloud. It uses private cloud to store sensitive data and uses public cloud to store and share data which are
not sensitive or confidential. Let's now discuss cloud based on the service model. The first and the broader category is infrastructure as a service. Here we would rent the servers, network, and storage, and we'll pay for them on an hourly basis, but we will have access to the resources we provision, and for some we will have root-level access as well. EC2 in AWS is a very good example. It's a VM for which we have root-level access to the OS and admin access to the hardware. The next type of service model would be platform as a service. Now in this model, the providers will give me a pre-built platform where we can deploy our code and our applications and they will be up and running. We only need to manage the code and not the infrastructure. Here in software as a service, the cloud providers sell the end product, which is a software or an application, and we directly buy the software on a subscription basis. It's not the infra or the platform but the end product, the software or a functioning application, and we pay for the hours we use the software. In here, the client simply uses the software and does not maintain any equipment. Amazon and Azure also sell products that are Software as a Service. This chart sort
of explains the difference between the four models starting from on premises to infrastructure as a service to platform
as a service to software as a service. It is self-explanatory that the resources managed by us are huge in on-premises, towards your left as you watch, a little less in infrastructure as a service as we move further towards the right, further reduced in platform as a service, and there's really nothing to manage when it comes to software as a service, because we buy the software, not any infrastructure component attached to it.
Now let's talk about the life cycle of the cloud computing solution. The very first thing in the life cycle of a
solution or a cloud solution is to get a proper understanding of the requirement. I didn't say get the requirement but
said get a proper understanding of the requirement. It is very vital because only then we will be able to properly
pick the right service offered by the provider. After getting a sound understanding, the next thing would be to define the hardware, meaning choose the compute service that will provide the right support, where you can resize the compute capacity in the cloud to run application programs. Getting a sound understanding of the requirement helps in picking the right hardware. One size does not fit all. There are different services and hardware options for different needs you might have, like EC2 if you're looking for IaaS, Lambda if you're looking for serverless computing, and ECS, which provides containerized servers. So there is a lot of hardware available; pick the right hardware that suits your requirement. The third thing is to define the storage. Choose the appropriate storage service where you can back up your data, and a separate storage service where you can archive your data, locally within the cloud or from the internet, and choose the appropriate storage. There is one separately for backup, called S3, and there is one separately for archival, that's Glacier. So knowing the difference between them really helps in picking the right service for the right kind of need. Then, define the network. Define the network that securely delivers data, video, and applications. Define and identify the network services properly. For example, VPC for networking, Route 53 for DNS, and Direct Connect for a private point-to-point line from your office to the AWS data center.
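As a hedged sketch of this "define the network" step, carving out a VPC and a subnet with boto3 might look like the following; the CIDR ranges are arbitrary illustrative choices.

```python
import boto3

ec2 = boto3.client("ec2")

# Sketch only: create a small private address space for the solution.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")          # illustrative CIDR
vpc_id = vpc["Vpc"]["VpcId"]

# One subnet inside the VPC for the application servers.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("VPC:", vpc_id, "Subnet:", subnet["Subnet"]["SubnetId"])
```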
Set up the right security services: IAM for authentication and authorization, and KMS for data encryption at rest. So there are a variety of security products available; we have got to pick the right one that suits our need. Then, define the management process and tools. There are a variety of deployment, automation, and monitoring tools that you can pick from. For example, CloudWatch is for monitoring, Auto Scaling is for being elastic, and CloudFormation is for deployment. You can have complete control of your cloud environment if you define the management tools which monitor your AWS resources and/or the custom applications running on the AWS platform. So knowing them will help you in defining the life cycle of the cloud computing solution properly.
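As one hedged example of such a monitoring tool, the boto3 sketch below creates a CloudWatch alarm that fires when an instance's average CPU stays above 80% for ten minutes; the instance ID, threshold, and alarm name are illustrative assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Sketch only: watch average CPU of one instance over 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",               # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                              # 5-minute evaluation window
    EvaluationPeriods=2,                     # two periods = 10 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```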
And similarly, there are a lot of tools for testing a process, like CodeStar, CodeBuild, and CodePipeline. These are tools with which you can build, test, and deploy your code quickly. And finally, once everything is said and done, pick the analytics service for analyzing and visualizing the data, using the analytics services where we can start querying the data instantly and get a result. Now if you want to visually view the happenings in your environment, you can pick Athena and other tools for analytics, or EMR, which is Elastic MapReduce, and CloudSearch. Thanks, guys. Now we have Samuel
and Rahul to take us through the full course, in which they will explain the basic framework of Amazon Web Services and explore all of its important services like EC2, Lambda, S3, IAM, and CloudFormation. We'll also talk about Azure
and some of its popular services. Hello everyone. Let me introduce myself as Sam, a multiplatform cloud architect and
trainer. And I'm so glad and I'm equally excited to talk and walk you through this session about what AWS is and talk
to you about some services and offerings and about how companies get benefited by migrating their applications and infra
into AWS. So what's AWS? Let's talk about that. Now before that let's talk about how life was without any cloud
provider and in this case how life was without AWS. So let's walk back and picture how things were back in 2000
which is not so long ago but lot of changes lot of changes for better had happened since that time. Now back in
2000, a request for a new server was not a happy thing at all, because a lot of money, a lot of validations, and a lot of planning were involved in getting a server online or up and running. And even after we finally got the server, it's not all said and done. There's a lot of optimization that needs to be done on that server to make it worth it and get a good return on investment from it. And even after we have optimized for a good return on investment, the work is still not done. There will often be a frequent increase and decrease in the capacity, and you know, even news about our website getting popular and getting more hits is still a bittersweet experience, because now I need to add more servers to the environment, which means that it's going to cost me even more. But thanks to the present-day
cloud technology, if the same situation were to happen today, my new server, it's almost ready and it's ready
instantaneously. And with the swift tools and technologies that Amazon is providing in provisioning my server instantaneously, adding any type of workload on top of it, making my storage and server secure, and creating durable storage where data that I store in the cloud never gets lost, with all those features Amazon has got our back. So let's talk about what AWS is. There are a lot of definitions for it, but I'm going to put together a definition that's as simple and precise as possible. Now let me iron that out. Cloud still runs on hardware. All right? And there are certain features in that cloud infrastructure that make the cloud the cloud, or that make AWS a cloud provider. Now we get all the services, all the technologies, all the features, and all the benefits that we get in our local data center, like security, compute capacity, and databases. And in fact, we get even more cool features, like content caching in various global locations around the planet. But again, out of all the features, the best part is that we get everything on a pay-as-we-go model. The less I use, the less I pay. And the more I use, the less I pay per unit. Very attractive, isn't it? Right. And that's not all. The applications that we provision in AWS are very reliable because they run on a reliable infrastructure, they're very scalable because they run on an on-demand infrastructure, and they're very flexible
because of the designs and because of the design options available for me in the cloud. Let's talk about how all this
happened. AWS was launched in 2002, after the Amazon we know as the online retail store wanted to sell their remaining or unused infrastructure as a service, or as an offering for customers to buy and use from them. You know, sell infrastructure as a service. The idea sort of clicked, and AWS launched their first product in 2006, that's like four years after the idea launch. In 2012, they held a big customer event to gather inputs and concerns from customers, and they were very dedicated in making those requests happen. And that habit is still being followed; it's still being followed as re:Invent by AWS. In 2015, Amazon announced its revenue to be 4.6 billion. And in 2015 through 2016, AWS launched products and services that help migrate customer services into AWS. Well, there were products even before, but this is when a lot of focus was given to developing migration services, and in the same year, that's in 2016, Amazon's revenue was 10 billion. And last but not the least, as we speak, Amazon has more than 100 products and services available for customers to use and get benefited from. All right, let's talk about the services
that are available in Amazon. Let's start with this product called S3. Now S3 is a great tool for internet backup, and it's the cheapest storage option in the object storage category. And not only that, the data that we put in S3 is retrievable from the internet. S3 is really cool. And we have other products like migration, data collection, and data transfer products. And here we can not only collect data seamlessly but also monitor and analyze the data that's being received in real time; there are cool products like AWS data transfer services available that help achieve that. And then we have products like EC2, Elastic Compute Cloud, that's a resizable computer where we can at any time alter the size of the computer based on the need or based on the forecast. Then we have Simple Notification Service, systems and tools available in Amazon to update us with notifications through email or through SMS. Now anything can be sent through email or through SMS if you use that service. It could be alarms, or it could be service notifications if you want, stuff like that.
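As a hedged illustration, publishing such a notification from Python with boto3 might look like the sketch below; the topic ARN is a placeholder, and a real SNS topic with an email or SMS subscription would have to exist first.

```python
import boto3

sns = boto3.client("sns")

# Sketch only: the topic ARN below is a placeholder, not a real resource.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:ops-alerts",
    Subject="Disk space warning",
    Message="Volume /dev/xvda1 is 90% full on the web server.",
)
```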
And then we have some security tools like KMS, Key Management Service, which uses AES 256-bit encryption to encrypt our data at rest.
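For illustration only, encrypting and decrypting a small secret with KMS through boto3 could look like this sketch; the key alias is an assumed placeholder, and envelope encryption with data keys is the usual pattern for larger payloads.

```python
import boto3

kms = boto3.client("kms")

# Sketch only: assumes a customer-managed key with this alias exists.
key_id = "alias/demo-key"  # hypothetical key alias

encrypted = kms.encrypt(KeyId=key_id, Plaintext=b"database password")
ciphertext = encrypted["CiphertextBlob"]

# Decrypt later; KMS identifies the key from the ciphertext itself.
decrypted = kms.decrypt(CiphertextBlob=ciphertext)
print(decrypted["Plaintext"])  # b'database password'
```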
Then we have Lambda, a service for which we pay only for the time, in seconds, it takes to execute our code. And we're not paying for the infrastructure here; it's just the seconds the program is going to take to execute the code. If it's a short program, we'll be paying for milliseconds. If it's a bit bigger program, we'll probably be paying for 60 seconds or 120 seconds. But that's a lot cheaper, a lot simpler, and a lot more cost-effective as against paying for a server on an hourly basis, which a lot of other services are. Well, that's cheap, but using Lambda is a lot cheaper than that. And then we have services like Route 53, a DNS service in the cloud, and now I do not have to maintain a DNS account somewhere else and my cloud environment with AWS separately. I can get both in the same
place. All right, let me talk to you about how AWS makes life easier, or how companies got benefited by using AWS as their IT provider for their applications or for the infrastructure. Now, Unilever is a company, and they had a problem, and they picked AWS as the solution to their problem. This company was spread across 190 countries, and they were relying on a lot of digital marketing for promoting their products, and their existing environment, their legacy local environment, proved not to support their changing IT demands, and they could not standardize their old environment. So they chose to move part of their applications to AWS because they were not getting what they wanted in their local environment. And since then, rollouts were easy, provisioning applications became easy, and even provisioning infrastructure became easy, and they were able to do all that with push-button scaling, not to mention backups that are safe and can be securely accessed from the cloud as needed. Now that company is growing along with AWS because of their swift speed in rolling out deployments, being able to access secure backups from various places, and generating reports, in fact useful reports, out of it that help their business. Now on the same lines,
let me also talk to you about Kellogg's and how they got benefited by using Amazon. Now Kellogg's had a different problem; it's one of its kind. Their business model was very dependent on an infra that would help analyze data really fast, because they were running promotions based on the analyzed data that they get. So being able to respond to the analyzed data as soon as possible was critical or vital in their environment, and luckily SAP running on a HANA environment is what they needed, and they picked that service in the cloud, and that sort of solved the problem. Now the company does not have to deal with maintaining their legacy infra, maintaining their heavy compute capacity, and maintaining their database locally. All that is now moved to the cloud, or rather, they are using cloud as their IT service provider, and now they have a greater and more powerful IT environment that very much
complements their business. Hi there, I'm Samuel, a multiplatform cloud architect and I'm very excited and
honored to walk you through this learning series about AWS. Let me start the session with this scenario. Let's
imagine how life would have been without Spotify. For those who are hearing about Spotify for the first time, Spotify is an online music service offering instant access to over 16 million licensed songs. Spotify now uses AWS
cloud to store the data and share it with their customers. But prior to AWS, they had some issues. Imagine using
Spotify before AWS. Let's talk about that. Back then, users were often getting errors because Spotify could not
keep up with the increased demand for storage every new day. And that led to users getting upset and users cancelling
the subscription. The problem Spotify was facing at that time was their users were present globally and were accessing
it from everywhere, and they had different latencies in their applications, and Spotify had a demanding situation where they needed to frequently catalog the songs released yesterday, today, and in the future. And this was changing every new day, and songs were coming in at a rate of about 20,000 a day, and back then they could not keep up with this requirement. Needless to say, they were badly looking for a way to solve this problem, and that's when they got introduced to AWS, and it was a perfect fit and match for their problem. AWS offered dynamically increasing storage,
and that's what they needed. AWS also offered tools and techniques like storage life cycle management and
Trusted Advisor to properly utilize the resources, so we always get the best out of the resources used. AWS addressed
their concerns about easily being able to scale. Yes, you can scale the AWS environment very easily. How easily, one
might ask? It's just a few button clicks. And AWS solved Spotify's problem. Let's talk about how it can
help you with your organization's problem. Let's talk about what AWS is first, and then let's move into how AWS became so successful, the different types of services that AWS provides, and what's the future of cloud and AWS in specific. Let's talk about that, and finally we'll talk about a use case where you will see how easy it is to create a web application with AWS. All right, let's talk about what AWS is. AWS, or Amazon Web Services, is a secure cloud service platform. It also has a pay-as-you-go type billing model where there is no upfront or capital cost. We'll talk
about how soon the service will be available. Well, the service will be available in a matter of seconds. With
AWS, you can also do identity and access management that is authenticating and authorizing a user or a program on the
fly. And almost all the services are available on demand and most of them are available instantaneously. And as we
speak, Amazon offers 100 plus services and this list is growing every new week. Now that would make you wonder how AWS
became so successful. Of course, it's their customers. Let's talk about the list of well-known companies that have their IT environment in AWS. Adobe: Adobe uses AWS to provide multi-terabyte operating environments for its
customers. By integrating its system with AWS cloud, Adobe can focus on deploying and operating its own software
instead of trying to, you know, deploy and manage the infrastructure. Airbnb is another company. It's a community marketplace that allows property owners and travelers to connect with each other for the purpose of renting unique vacation spaces around the world. And the Airbnb community's user activities are conducted on the website and through iPhone and Android applications. Airbnb has a huge infrastructure in AWS, and they're using almost all the services in
AWS and are getting benefited from it. Another example would be Autodesk. Autodesk develops software for
engineering, designing and entertainment industries using services like Amazon RDS or relational database service and
Amazon S3 or Amazon simple storage service. Autodesk can focus on deploying or developing its machine learning tools
instead of spending that time on managing the infrastructure. AOL, or America Online, uses AWS, and using AWS they have been able to close data centers and decommission about 14,000 in-house and colocated servers, move mission-critical workloads to the cloud, extend their global reach, and save millions of dollars on energy resources. Bitdefender is an internet security software firm, and their portfolio of software includes antivirus and anti-spyware products. Bitdefender uses EC2, and they're currently running a few hundred instances that handle about 5 terabytes of data, and they also use Elastic Load Balancer to load balance the connections coming in to those instances across availability zones, and because of that they provide seamless global delivery of service. The BMW Group,
it uses AWS for its new connected car application that collects sensor data from BMW 7 series cars to give drivers
dynamically updated map information. Canon's office imaging products division benefits from faster deployment times, lower costs, and global reach by using AWS to deliver cloud-based services such as mobile print. The office imaging products division uses AWS services such as Amazon S3, Amazon Route 53, Amazon CloudFront, and AWS IAM for their testing, development, and production services. Comcast, it's the world's largest cable company and the leading
provider of internet service in the United States. Comcast uses AWS in a hybrid environment. Out of all the other
cloud providers, Comcast chose AWS for its flexibility and scalable hybrid infrastructure. Docker is a company
that's helping redefine the way developers build, ship, and run applications. This company focuses on
making use of containers for this purpose. And in AWS, the service called Amazon EC2 container service is helping
them achieve it. The ESA or European Space Agency. Although much of ESA's work is done by satellites, some of the
programs, data, storage, and computing infrastructure is built on Amazon Web Services. ESA chose AWS because of its
economical pay as you go system as well as its quick startup time. The Guardian newspaper uses AWS and it uses a wide
range of AWS services including Amazon Kinesis, Amazon Redshift that power an analytic dashboard which editors use to
see how stories are trending in real time. Financial Times, FT, is one of the world's largest leading business news organizations, and they used Amazon Redshift to perform their analysis. A funny thing happened: Amazon Redshift performed so quickly that some analysts thought it was malfunctioning. They were used to running queries overnight, and they found that the results were indeed correct, just much faster. By using Amazon Redshift, FT is supporting the same business functions with costs that are 80 percent lower than before. General Electric, GE, is at the moment, as we speak, migrating more than 9,000 workloads, including 300 disparate ERP systems, to AWS while reducing its data center footprint from 34 to 4 over the next 3 years. Similarly, Harvard Medical School, HTC, IMDb, McDonald's, NASA,
Kellogg's, and a lot more are using the services Amazon provides and are getting benefited from them. And this huge success and customer portfolio is just the tip of the iceberg. And if we think about why so many adopt AWS, and if we let AWS answer
that question, this is what AWS would say. People are adopting AWS because of the security and durability of the data
and end-to-end privacy and encryption of the data and storage experience. We can also rely on AWS way of doing things by
using the AWS tools and techniques and suggested best practices built upon the years of experience it has gained.
Flexibility: there is greater flexibility in AWS that allows us to select the OS, language, and database. Ease of use and swiftness in deploying: we can host our applications quickly in AWS, be it a new application or migrating an existing application into AWS. Scalability: the application can be easily scaled up or scaled down depending on the user requirement. Cost saving: we only pay for the compute power, storage, and other resources we use, and that too without any long-term commitments. Now let's talk about the different types of services that AWS
provides. The services that we talk about fall into any of the following categories, you see, like compute, storage, database, security, customer engagement, desktop and streaming, machine learning, developer tools, stuff like that. And if you do not see the service that you're looking for, it's probably because AWS is creating it as we speak. Now let's look at some of them that are very commonly used. Within compute services, we have Amazon EC2, AWS Elastic Beanstalk, Amazon Lightsail, and AWS Lambda. Amazon EC2 provides compute capacity in the cloud. Now this
capacity is secure and it is resizable based on the user's requirement. Now look at this. The requirement for the
web traffic keeps changing and behind the scenes in the cloud EC2 can expand its environment to three instances and
during no load it can shrink its environment to just one resource. Elastic Beanstalk helps us to scale and deploy web applications built with a number of programming languages. Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services, be it in Java, .NET, PHP, Node.js, Python, Ruby, Docker, and a lot of other familiar servers such as Apache, Passenger, and IIS. We can simply upload our code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning to load balancing to auto scaling to application health monitoring. And Amazon Lightsail is a virtual private server which is easy to launch and easy to manage. Amazon Lightsail is the easiest way to get started with AWS for developers who just need a virtual private server. Lightsail includes everything you need to launch your project quickly on a virtual machine, like SSD-based storage, a virtual machine, tools for data transfer, DNS management, and a static IP, and that too for a very low and predictable price. AWS Lambda has taken cloud computing services to a whole new level. It allows
us to pay only for the compute time. No need for provisioning and managing servers. And AWS Lambda is a compute
service that lets us run code without provisioning or managing servers. Lambda executes your code only when needed and
scales automatically from a few requests per day to thousands per second. You pay only for the compute time you consume.
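To give a feel for how little code that takes, here is a hedged sketch of a minimal Python handler; the event field and response shape are purely illustrative.

```python
import json

# Sketch only: a minimal AWS Lambda handler in Python.
# Lambda invokes this function with an event and a context object.
def lambda_handler(event, context):
    name = event.get("name", "world")  # illustrative event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```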
There is no charge when your code is not running. Let's look at some storage services that Amazon provides like
Amazon S3, Amazon Glacier, Amazon EBS, and Amazon Elastic File System. Amazon S3 is an object storage that can store
and retrieve data from anywhere. Websites, mobile apps, IoT sensors, and so on can easily use Amazon S3 to store and retrieve data. It's an object storage built to store and retrieve any amount of data from anywhere. With features like flexibility in managing data, and the durability and security it provides, Amazon Simple Storage Service, or S3, is a storage for the internet. And Glacier: Glacier is a cloud storage service that's used for archiving data and long-term backups. This Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backups. Amazon EBS, Amazon Elastic Block Store, provides block storage volumes for EC2 instances. This Elastic Block Store is a highly available and reliable storage volume that can be attached to any running instance that is in the same availability zone. EBS volumes that are attached to EC2 instances are exposed as storage volumes that persist independently from the lifetime of the instance. And Amazon Elastic File System, or EFS, provides elastic file storage which can be used with AWS cloud services and resources that are on premises. Amazon Elastic File System is simple, scalable, elastic file storage for use with Amazon cloud services and for on-premises resources. It's easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. Amazon Elastic File System is built to elastically scale on demand without disturbing the application, growing and shrinking automatically as you add and remove files. Your applications have the storage they need, when they need it. Now let's talk about databases. The two
major database flavors are Amazon RDS and Amazon Redshift. Amazon RDS really eases the process involved in setting up, operating, and scaling a relational database in the cloud. Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administrative tasks such as hardware provisioning, database setup, patching, and backups. It sort of frees us from managing the hardware and helps us focus on the application. It's also cost-effective and resizable, and it's optimized for memory, performance, and input and output operations. Not only that, it also automates most of the administrative services, like taking backups, monitoring, stuff like that.
that enables users to analyze the data using SQL and other business intelligent tools. Amazon red shift is an fast and
fully managed data warehouse that makes it simple and cost-effective analyze all your data using standard SQL and your
existing business intelligent tools. It also allows you to run complex analytic queries against pabyte of structured
data using sophisticated query optimizations and most of the results they generally come back in seconds. All
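To show what "automating the setup" looks like in practice, here is a hedged boto3 sketch that asks RDS to provision a small database instance. The identifier, instance class, engine, and credentials are illustrative placeholders; in real use the password would come from a secrets store rather than being hard-coded.

```python
import boto3

rds = boto3.client("rds")

# Ask RDS to provision a small MySQL instance (all values are placeholders)
rds.create_db_instance(
    DBInstanceIdentifier="demo-database",
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="replace-me-securely",  # use a secrets manager in real setups
    AllocatedStorage=20,  # GiB
)
# From here on, RDS handles the hardware, patching, and backups.
```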
All right, let's quickly talk about some more services that AWS offers. There are many more than we can cover, but here are a few that are widely used. AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers. Planning a data center migration can involve thousands of workloads that are often deeply interdependent, and server utilization data and dependency mapping are important early steps in the migration process; Application Discovery Service collects and presents configuration, usage, and behavior data from your servers to help you better understand your workloads. Route 53 is a networking and content delivery service: a highly available and scalable cloud Domain Name System (DNS) service, and it is fully compliant with IPv6 as well. Elastic Load Balancing is another networking and content delivery service. It automatically distributes incoming application traffic across multiple targets such as EC2 instances, containers, and IP addresses, and it can handle the varying load of your application traffic within a single Availability Zone and across Availability Zones. AWS Auto Scaling monitors your application and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes, and it can be applied to web tiers as well as database tiers. AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups and use permissions to allow or deny their access to AWS resources. And moreover, it's a free service.
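Here is a hedged boto3 sketch of the kind of user-and-policy management IAM does. The user name, policy name, and bucket ARN are made-up examples; the policy grants read-only access to a single hypothetical bucket.

```python
import json
import boto3

iam = boto3.client("iam")

# Create a user and attach a minimal, read-only S3 policy (names are placeholders)
iam.create_user(UserName="demo-analyst")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-demo-bucket",
                     "arn:aws:s3:::example-demo-bucket/*"],
    }],
}
policy = iam.create_policy(PolicyName="DemoReadOnlyS3",
                           PolicyDocument=json.dumps(policy_doc))
iam.attach_user_policy(UserName="demo-analyst",
                       PolicyArn=policy["Policy"]["Arn"])
```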
Now, let's talk about the future of AWS. Well, let me tell you something: cloud is here to stay. Here's what's in store for AWS. As the years pass, we're going to see a variety of cloud applications born, like IoT, artificial intelligence, business intelligence, serverless computing, and so on. Cloud will also expand into other markets like healthcare, banking, space, and automated cars. As I mentioned earlier, a greater focus will be given to artificial intelligence, and because of the flexibility and advantages the cloud provides, we're going to see a lot more companies moving into the cloud. All right, let's now talk about how easy it is to deploy a web application in the cloud.
So the scenario here is that our users like a product, and we need a mechanism to receive input from them about their likes and dislikes and give them the appropriate product for their needs. Though the setup and the environment may look complicated, we don't have to worry, because AWS has tools and technologies that help us achieve it. We're going to use services like Route 53, CloudWatch, EC2, S3, and more, and all of these put together will give us a fully functional application that receives that information and meets our need. So back to our original requirement: all I want is to deploy a web application for a product that keeps our users updated about what's happening and what's new in the market.
And to fulfill this requirement, here are the services we would need. EC2 is used for provisioning the computational power needed for this application, and EC2 has a vast variety of instance families and types that we can pick from depending on the type and intent of the workload. We're also going to use S3 for storage; S3 covers any additional storage requirements for the resources or for the web application. We're going to use CloudWatch for monitoring: CloudWatch watches the application and the environment and provides triggers for scaling the infrastructure in and out. And we're going to use Route 53 for DNS; Route 53 helps us register the domain name for our web application. With all of these tools and technologies put together, we're going to build an application that caters to our needs. All right, I'm going to use Elastic Beanstalk for this project.
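As an aside, the same setup can be scripted instead of clicked through. The hedged boto3 sketch below mirrors roughly what the console wizard does; the application name, environment name, and solution stack string are placeholders (check the stacks actually available in your region), and a real deployment would also register an application version bundle from S3.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Register the application (names are placeholders)
eb.create_application(ApplicationName="gsg-signup")

# Launch an environment; Beanstalk provisions the EC2 instances,
# load balancer, auto scaling group, and monitoring for us.
eb.create_environment(
    ApplicationName="gsg-signup",
    EnvironmentName="gsg-signup-env-1",
    SolutionStackName="64bit Amazon Linux 2 v5.8.0 running Node.js 18",  # placeholder stack name
)
```

The console wizard does the equivalent of these calls behind the scenes; that's what we'll click through now.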
The name of the application is going to be, as you see, GSG-signup, and the environment name is gsg-signup-environment-1. Let me also pick a domain name and see if it's available. Yes, it's available, so let me pick that. The application I have is going to run on Node.js, so let me pick that platform and launch. Now, as you see, Elastic Beanstalk is going to launch an instance, set up the monitoring environment, create a load balancer, and take care of all the security features needed for this application. All right, look at that: I was able to go to the URL we chose, and it's showing a default page, meaning all the dependencies for the software are installed and it's just waiting for me to upload the code, or specifically the page required. So let's do that. Let me upload the code; I already have it saved here. That's my code, and the upload takes some time. All right, it has done its thing, and now if I go to the same URL, look at that: I'm shown an advertisement page. If I sign up with my name, email, and so on, it's going to receive that information and send an email to the owner saying that somebody has subscribed to the service; that's the default feature of this app. Look at that: an email to the owner saying somebody has subscribed to your app, along with their email address and so on. Not only that, it also creates an entry in the database, and DynamoDB is the service this application uses to store data. There's my DynamoDB, and if I go to tables, and then to items, I can see that a user named Samuel with such-and-such email address has shown interest in the preview of my site or product. So this is how I collect that information.
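The same sign-up records can be read back programmatically. Here's a hedged boto3 sketch; the table and attribute names are guesses for illustration, since the lab doesn't spell them out.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("gsg-signup-table")  # table name is an assumed placeholder

# Scan the sign-up table and print each subscriber (attribute names assumed)
response = table.scan()
for item in response.get("Items", []):
    print(item.get("name"), item.get("email"))
```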
Some more things about the infrastructure itself: it's running behind a load balancer. Look at that, it created a load balancer, and it also created an Auto Scaling group; that's part of the Elastic Beanstalk setup we chose. Now let's look at this URL. You see, it's not a fancy URL; it's an Amazon-generated, dynamic URL. So let's put this URL behind our DNS. Go to Services, go to Route 53, go to Hosted Zones, and there we can find the DNS name. That's our DNS name. All right, let's create an entry, map that URL to our load balancer, and create. Now, technically, if I go to this URL, it should take me to the application. All right, look at that: I went to my custom URL and it now points to my application. Previously my application had a random URL, and now it has a custom URL.
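For reference, the DNS mapping we just clicked through can also be done with one API call. Below is a hedged boto3 sketch where the hosted zone ID, domain name, and load balancer DNS name are all placeholder values.

```python
import boto3

route53 = boto3.client("route53")

# Point our custom domain at the environment's load balancer via a CNAME
# (zone ID, domain, and target are made-up placeholders)
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "signup.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "gsg-signup-env-1.us-east-1.elasticbeanstalk.com"}
                ],
            },
        }]
    },
)
```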
So what did we learn? We started the session with what AWS is. We looked at the features, tools, technologies, and products that AWS provides, and at how AWS became so successful. We looked into the benefits and features of AWS in depth and at some of the services AWS offers, and then we picked particular services and talked about them, like EC2, Elastic Beanstalk, Lightsail, Lambda, storage, and so on. We also looked at the future of AWS and what it holds in store for us. And finally, we looked at a lab in which we created an application using Elastic Beanstalk: all we had to do was a couple of clicks and, boom, an application was available that was connected to the database, to the Simple Notification Service, to CloudWatch, and to storage. Now, what is Azure, and what is this big cloud service provider all about? Azure is a cloud computing platform provided by Microsoft.
It's basically an online portal through which you can access and manage resources and services; for example, you can store your data and transform it using the services that Microsoft provides. All you need is the internet and the ability to connect to the Azure portal, and then you get access to all of the resources and services. In case you want to know more about how it differs from its rival AWS, I suggest you watch the AWS versus Azure video so you can clearly tell how these two cloud service providers differ from each other. Now, here are some things you need to know about Azure. It was launched on February 1st, 2010, which is significantly later than AWS. It's free to start and has a pay-per-use model, which means, as I said before, you pay only for the services you use through Azure. One of its most important selling points is that 80% of Fortune 500 companies use Azure services, which means most of the bigger companies in the world rely on Azure. Azure also supports a wide variety of programming languages, like C#, Node.js, Java, and many more. Another very important selling point is the number of data centers Azure has across the world. It's important for a cloud service provider to have many data centers around the world, because that means it can provide its services to a wider audience. Azure has 42, which is more than any other cloud service provider at the moment, and it expects to add 12 more over time, which brings the total number of regions it covers to 54. Now, let's talk about Azure services.
Azure has 18 service categories and more than 200 services, so we clearly can't go through all of them. It has services that cover compute, machine learning, integration, management tools, identity, DevOps, web, and much more. You're going to have a hard time finding a domain that Azure doesn't cover, and if it doesn't cover it now, you can be certain they're working on it as we speak. So, first, let's start with the compute services. First, Virtual Machines: with this service you can create a virtual machine with a Linux or Windows operating system. It's easily configurable; you can add RAM, decrease RAM, add storage, or remove it, all in a matter of seconds. Now, the second service: Cloud Services. With this you can create an application within the cloud, and all of the work after you deploy it is taken care of by Azure, including provisioning the application, load balancing, and ensuring that the application is in good health. Next up, Service Fabric. With Service Fabric, the process of developing a microservice is greatly simplified. You might be wondering what exactly a microservice is: a microservice is basically an application that consists of smaller applications coupled together. Next up, Functions. With Functions, you can create applications in any supported programming language you want, and another very important part is that you don't have to worry about any hardware: you don't have to think about how much RAM or storage you require, because all of that is taken care of by Azure. All you need to do is provide the code to Azure, and it will execute it; you don't have to worry about anything else.
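To illustrate the "just provide the code" idea, here is a minimal HTTP-triggered Azure Function in Python, written against the classic (v1) programming model as a sketch; the function's behavior and names are illustrative assumptions, and the HTTP trigger binding itself lives in a separate function.json file not shown here.

```python
import json

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """Hypothetical HTTP-triggered function: Azure provisions and scales
    whatever infrastructure is needed to run it; we only supply the code."""
    name = req.params.get("name", "world")
    return func.HttpResponse(
        json.dumps({"message": f"Hello, {name}!"}),
        mimetype="application/json",
    )
```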
Now, let's talk about some networking services. First up we have Azure CDN, or the Content Delivery Network. The Azure CDN service is for delivering web content to users; this content can be high-bandwidth and can be delivered to anyone across the world. A CDN is actually a network of servers placed in strategic positions around the world so that customers can obtain the data as fast as possible. Next up we have ExpressRoute. With ExpressRoute you can connect your on-premises network to the Microsoft cloud, or to any of the services you want, through a private connection, so the only communication that happens is between your on-premises network and the service you want. Then you have Virtual Network. With Virtual Network, you can have Azure services communicate with each other in a secure, private manner. Next we have Azure DNS. Azure DNS is a hosting service that allows you to host your DNS (Domain Name System) domains in Azure, so you can host your application's domain using Azure DNS. Now for the storage services. First up we have Disk Storage: with this you're given a cost-effective option of choosing HDDs or solid-state drives to go along with your virtual machines, based on your requirements. Then you have Blob Storage, which is optimized to store massive amounts of unstructured data, which can include text or even binary data. Next you have File Storage, which is a managed file storage service accessible via SMB, the Server Message Block protocol. And finally you have Queue Storage: with Queue Storage you get durable message queuing for extremely large workloads, and the most important part is that it can be accessed from anywhere in the world.
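Here is a hedged sketch of writing and reading a blob with the azure-storage-blob Python SDK; the connection string, container name, and blob name are placeholders you would substitute with your own.

```python
from azure.storage.blob import BlobServiceClient

# Connection string, container, and blob names are made-up placeholders
service = BlobServiceClient.from_connection_string("<your-storage-connection-string>")
blob = service.get_blob_client(container="demo-container", blob="notes/hello.txt")

# Upload some unstructured data, then read it back
blob.upload_blob(b"hello from blob storage", overwrite=True)
print(blob.download_blob().readall().decode())
```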
Now, let's talk about how Azure can be used. First, for application development; it could be any application, but mostly web applications. Then you can test the application to see how well it works, and you can host the application on the internet. You can create virtual machines; like I mentioned before, with that service you can create virtual machines of any size and amount of RAM you want. You can integrate and sync features. You can collect and store metrics, for example, how the data flows, what the current state is, and how you can improve upon it; all of that is possible with these services. And you have virtual hard drives, an extension of the virtual machines, where these services provide you a large amount of storage for your data. We'll talk about Azure at great length and breadth, and if you're looking for a video that walks you through all the services in Azure, this could be one of the best you'll find on the internet. Without any further delay, let's get started. Everybody likes stories, so let's get started with a story.
In a city not so far away, a CEO had plans to expand his company globally and called one of his IT personnel for an IT opinion. This person had been in the company for a long time and was very seasoned with the company's infrastructure, and he answered based on what he foresaw: "I have good news and bad news about us going global." He starts with the good news: "Sir, we're well on our way to becoming one of the world's largest shipping companies." The bad news, however: "Our data centers have almost run out of space, and setting up new ones around the world would be too expensive and very time-consuming." Now, this IT person, let's call him Mike, explains the situation as he sees it. But the CEO had done some homework, and he answered, "Don't worry about that, Mike. I've come up with a solution for our problem, and it's called Microsoft Azure." Well, Mike is a hardworking and honest IT professional, but he hadn't spent time learning the latest technologies, so he asked very honestly, "Oh, how does it solve our problem?" And the CEO begins to explain Azure to Mike: he starts with what cloud computing is, then goes on to talk about Azure, the services offered by Azure, why Azure is better than other cloud providers, which great companies use Azure and how they benefited from it, and then he winds it all up with the use cases of Azure. So he begins his explanation by saying that Microsoft Azure is a cloud service provider and works on the basis of cloud computing. Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud computing platform. It provides a range of cloud services, including compute, analytics, storage, and networking, and we can pick and choose from these services to develop and scale new applications, or to run existing applications, in the public cloud. Microsoft Azure is both a platform as a service and an infrastructure as a service. Let's now fade their conversation out and talk about what cloud computing is, the services offered by Azure, how Azure leads compared to other cloud service providers, and which companies are using Azure. Let's talk about that.
In simple terms, cloud computing is being able to access compute services such as servers, storage, databases, networking, software, analytics, intelligence, and a lot more over the internet, which is the cloud, with flexibility in the resources we use: anytime I want a resource, I can use one and it becomes available immediately, and anytime I want to retire a resource, I can simply retire it and stop paying for it. We also typically pay only for the services we use, which helps greatly with operating costs, lets us run our infrastructure more efficiently, and lets us scale the environment up or down depending on business needs and changes. All the servers, storage, databases, and networking are accessed through a network of remote systems hosted on the internet, typically in the provider's data center, which is Azure in this case. Now, we don't use any physical or on-premises server here. Well, we still use physical servers and VMs hosted on hardware, but they're all in the provider's environment; none of them sit on premises or in our data center. We only access them remotely. It looks and feels the same, except for the fact that they're in a remote location. We access them remotely, do all the work remotely, and when we're done, we can shut them down and stop paying for them.
Some of the use cases of cloud computing are creating applications and services. Another use case is storage alone. If there's one thing that keeps growing in an organization, it's storage: every new day brings a new storage requirement, it's very dynamic, and it's very hard to predict. If we go out and buy a big storage capacity up front, then until we use that capacity fully, we're wasting money on the empty storage. Instead, I can go for storage that scales dynamically in the cloud: put the data in the cloud and pay only for what you're storing, and if next month you've deleted or flushed out some files, you pay less. It's a very dynamic storage model, and a lot of companies benefit from storing data in the cloud because of that dynamic nature and the low cost that comes along with it. Providers like Azure also give data replication for free and promise an SLA along with the data we store, and they provide data recovery as well: if something goes wrong with the physical disk where our data is stored, Azure automatically makes it available from the redundant copies it keeps elsewhere in order to honor that SLA. Another use case for Azure is hosting websites and running blogs using the compute service. Or take storing music and letting your users stream it: Azure is a good place to store and stream music, with the benefit of a CDN, a content delivery network, which allows us to stream video or audio files with great speed. With Azure, our audio or video application works seamlessly because content is delivered to the client with very low latency, and that improves the customer experience for our application. Azure compute is also a good place for delivering software on demand. There is a lot of software we can buy through Azure, everything on a pay-as-you-go model, so anytime we need a piece of software, we can go and buy it for the next hour or two, say, use it, and then return it; we're not bound to any yearly licensing cost. Azure also has analytics services available, with which we can analyze and get a good visualization of what's going on in a network, be it logs, performance, or metrics. Instead of manually searching through heaps and heaps of saved logs, the Azure analytics services help us get a good visual of what's going on: where have we dropped, where have we increased, what's the major driver, what are the top 10 errors we get in the server or the application, and so on. All of that can be easily gathered from the Azure analytics services. Now, "cloud" is really just a cool term for the internet. A good analogy: anytime we draw a diagram and don't know exactly how things are transferred, we simply draw a cloud. For example, when a mail gets sent from a person in one country to a person in another, a lot happens between the time you hit the send button and the time the other person hits the read button, and the simplest way to put it in a picture is to draw a cloud, with one person sending the email on one end and the other person reading it on the other end. So the cloud is really a cool term for the internet. That's some basics about cloud computing.
Now that we've understood cloud computing in general, let's talk about Microsoft Azure as a cloud service. Microsoft Azure is a set of cloud services for building, managing, and deploying applications on a global network using Microsoft's frameworks. It's a computing service created by Microsoft for building, testing, deploying, and managing applications and services through a global network of Microsoft-managed data centers. Microsoft Azure provides SaaS (software as a service), PaaS (platform as a service), and IaaS (infrastructure as a service), and it supports many different programming languages, tools, and frameworks, including both Microsoft-specific and third-party software. Now, let me pick and talk about a specific service, for example, management. Azure Automation provides a way for us to automate the manual, long-running, and frequently repeated tasks that are commonly performed in both cloud and enterprise environments. It saves us a lot of time, increases reliability, gives good administrative control, and can even schedule tasks to run automatically on a regular basis.
To give you a quick history of Microsoft Azure: it was launched on February 1st, 2010, and it has been named an industry leader for infrastructure as a service and platform as a service by Gartner, the world's leading research and advisory company. Microsoft Azure supports a number of programming languages, like C#, Java, and Python. We get to use all these cool services and pay only for how much we use: for example, if we use something for an hour, even the costliest system available, we pay only for that particular hour, and then we're done; there's no more billing on the resource we used. Microsoft Azure has spread itself across more than 50 regions around the world, so it's quite easy for us to pick a region and start provisioning and running our applications from day one, because the infrastructure, tools, and technologies needed to run our application are already available. All we have to do is commit the code in that particular region, or build an application and launch it there, and it goes live starting day one. And because we have 50-plus regions around the world, we can carefully design our environment to provide low-latency services to our customers. In a traditional data center setup, customers' requests might have to travel all the way around the globe to reach a data center on the other side of the planet, which adds latency, and it's really not feasible to build a data center near each customer location because of the cost involved. But with Azure it's possible: Azure already has data centers around the world, and all we have to do is pick one and build our environment there; it's available starting day one. Also, the cost is considerably lower because we're using a public cloud instead of physical infrastructure to serve those customers from a nearby location. And the services Azure offers keep increasing: as we speak, there are 200-plus services on offer, spanning different domains, platforms, and technologies available within the Azure portal. We're going to talk about that later in this section, so hold your breath; for now, just know that Azure offers 200-plus services. Let's now talk about the different services in Azure, starting with artificial intelligence and machine learning, where we have a lot of tools and technologies.
The wide variety of services available in Azure includes AI and machine learning, plus analytics services that give us a good visual of how the data or the application is performing, the categories of data stored, and what we can read from the logs. There's a variety of compute services: different VMs with different sizes and operating systems, and different containers. There are different types of databases, a lot of developer tools, and identity services to manage our users in the Azure cloud; those users can be integrated or federated with, say, Google, Facebook, or LinkedIn, so external federation services can be tied into our identity system. There are IoT services, tools, and technologies, and management tools to manage users; creating identities is one thing, and managing them on top of that is a totally different thing, and we have tools and technologies for it. There are cool services for data migration, which is now made simple; tools for mobile application development; networking services with which I can plan my own network in the cloud; security services, both Azure-provided and third-party, that I can implement on the Azure cloud; and a lot of storage options available in the cloud. These are just a glimpse of the big list of services available in the Azure cloud.
So that was a glimpse of what's available in the cloud. Now let's talk about specific services; let's take compute, for example. Whenever we're building a new application or deploying an existing one, the Azure compute service provides the infrastructure we need to run and maintain it. We can easily tap into the capacity the Azure cloud has and scale our compute requirements on demand. We can also containerize our applications. We have the option of choosing Windows or Linux virtual machines and taking advantage of the flexible options Azure provides for migrating our VMs to Azure, and a lot more. These compute services also include a full-fledged identity solution, meaning integration with Active Directory in the cloud or on premises. Let's look at some of the services the compute domain provides. The first is Virtual Machines. Azure Virtual Machines gives us the ability to deploy and manage a virtualized environment inside Azure's cloud, within a virtual private network. We'll talk about virtual networks at a later point, but for now just know that there are a lot of services in Azure compute we can benefit from. We can always choose from a very wide range of compute options: for example, we can choose the operating system, choose whether the system should be on premises, in the cloud, or in both, and choose whether we want to bring our own operating system with some software attached to it or buy the operating system from the Azure Marketplace. These are just a few of the options available when we want to buy a compute environment. And these compute environments are easily scalable, meaning we can scale from one VM instance to thousands of virtual machines in a matter of minutes, or simply put, in a couple of button clicks. All these services are available on a pay-for-what-we-use model, meaning there's no upfront cost: we use the service and then pay for what we've used. There's no long-term commitment when it comes to using virtual machines in the cloud, and most of these services are billed on a per-minute basis, so at no point will we be overpaying for any of the services. That's attractive, isn't it?
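As a small taste of driving that compute domain from code, here is a hedged sketch using the azure-mgmt-compute Python SDK to list the VMs in a subscription; the subscription ID is a placeholder, and the exact behavior can vary slightly between SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Subscription ID is a placeholder; DefaultAzureCredential picks up your login
credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "<subscription-id>")

# List every VM in the subscription with its region and size
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```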
Now, let's talk about the Batch service. The Batch service is platform-independent: regardless of whether you choose Windows or Linux, it's going to run fairly well, and with Batch we can take advantage of each environment's unique features. In short, the Batch service helps us manage the whole batch environment and also helps schedule the jobs. Azure Batch runs large-scale parallel and high-performance computing workloads, which is why batch jobs are highly efficient in Azure. When we run batch workloads, Azure Batch creates a pool of compute nodes, installs the applications we want to run, and then schedules jobs onto the individual nodes in those pools. As a customer, there's no need for us to install a cluster, install software that schedules the jobs, or manage and scale that infrastructure, because everything is managed by Azure. The Batch service is a platform as a service, and there's no additional charge for using it; the only charges we pay are for the virtual machines the service uses, the storage, and of course the networking services we use along with it. To summarize the Batch service: we have a choice of operating system, and it scales by itself. The alternative to Batch would be queues, but with queues we'd have to pre-provision and pay for the infrastructure even when we're not using it, whereas with Batch we only pay for what we use, and the Batch service helps us manage the application and the scheduling as a whole, as if they were just one thing.
As the next thing in the compute domain, let's talk about Service Fabric. Service Fabric is a distributed systems platform that helps us package, deploy, and manage scalable and reliable microservices and containers. How does it help? It helps developers and administrators avoid complex infrastructure problems so they can focus on implementing workloads and taking care of their applications instead of spending time on infrastructure. So what is Service Fabric? It provides runtime capabilities and lifecycle management to applications that are composed of microservices, with no infrastructure management at all, and with Service Fabric we can easily scale an application to tens, hundreds, or even thousands of machines, where "machines" here means containers.
As the next thing in the compute domain, let's talk about virtual machine scale sets. A virtual machine scale set lets us create and manage a group of identical, load-balanced VMs. The number of VM instances in a scale set can increase or decrease in response to demand, or in response to a schedule that we define: the resources needed on a Monday morning aren't the same as those needed on a Saturday or Sunday morning, and even within a day, the resources needed at the start of business hours aren't the ones needed at noon, or after 8 or 9 in the evening. Demand varies, and the scale set helps us handle that varying demand, or the different infrastructure requirements at different times of the day, the week, the month, or even the year. Scale sets also allow us to provide high availability for our applications, and they help us centrally manage, configure, and update a large number of VMs as if they were just one thing. Now you might ask: virtual machines are enough, so why would we need a virtual machine scale set? As I said, a scale set gives us greater redundancy and improved performance for our applications, and those applications can be accessed through a load balancer that distributes requests to the application instances. So in a nutshell, a virtual machine scale set helps us create a large number of identical virtual machines, lets us increase or decrease that number, lets us centrally manage, configure, and update a big group of VMs, and is a great fit for big data or container workloads.
As the next thing in the compute domain, let's talk about Cloud Services. Azure Cloud Services is a platform as a service, and it's very developer-friendly; it's designed for applications that need to be scalable and reliable and, on top of that, inexpensive to operate, and Azure Cloud Services provides all of that. So where does a cloud service run? Well, it runs on a VM, but it's a platform as a service: VMs are infrastructure as a service, and when we run applications on VMs through Cloud Services, it becomes platform as a service. Here's how to think about it. With infrastructure as a service, like plain VMs, we first create and configure the environment and then run applications on top of it, and the responsibility is ours end to end: deploying new patches, picking the operating system versions, and making sure everything stays intact is all managed by us. On the contrary, with platform as a service, the environment is essentially ready: all we have to do is deploy our application onto it and manage the application, not the platform administration, because the administration, like rolling out new versions of the operating system, is handled by Azure. So we deploy the application and we manage the application; that's it. Infrastructure management is handled by Azure. So what does Cloud Services provide? It provides a platform where we can write application code without worrying about hardware at all: simply hand over the code and the cloud service takes care of it. Responsibilities like patching, handling crashes, updating the infrastructure, and managing maintenance or downtime in the underlying infrastructure are all handled by Azure. It also provides a good testing environment: we can run and test the code before it's actually released to production. I want to expand a bit on that. Azure Cloud Services gives us a staging environment for testing a new release without affecting the existing release, which reduces customer downtime. We can run the application, test it, and anytime it's ready for production, all we need to do is swap the staging environment into production; the old production environment then becomes the new staging environment, where we can add more and swap it back later. So it gives us a swappable environment for testing our applications. Not only that, it gives us health monitoring and alerts: it helps us monitor the health and availability of our application, there's a dashboard showing the key statistics all in one place, and we can set up real-time alerts to warn us when service availability or a certain metric we care about degrades.
As the next thing in the compute domain, let's talk about Functions. Functions are serverless computing: many times, when you hear about Azure being "serverless", the person talking is referring to serverless computing, or Azure Functions, which is a serverless computing service hosted on Microsoft Azure. The main motive of Functions is to accelerate and simplify application development. Functions help us run code on demand without needing to pre-provision or manage any Azure infrastructure. Azure Functions are scripts or pieces of code that run in response to an event you want to handle, so in short, we can just write the code we need for the problem at hand without worrying about the whole application or the infrastructure that will run that code. And best of all, when we use Functions, we only pay for the time our code runs. So what do Azure Functions provide? They allow users to build applications using simple, serverless functions in a programming language of our choice; the currently supported languages include C#, F#, Node.js, Java, and PHP. We really don't have to worry about provisioning or maintaining servers, and if the code requires more resources, Azure Functions provides the additional resources it needs. And the best part is that we only pay for the amount of time the functions are running, not for the resources, but for the time the function runs.
Next, moving to a new domain, let's talk about containers in Azure. The container services allow us to quickly deploy a production-ready Kubernetes or Docker Swarm cluster. So what's a container? A container is a standard unit of software that packages code and all its dependencies so the application runs quickly and reliably from one computing environment to another: from testing to development to staging to production, from one production environment to another, from on premises to cloud, or from one cloud to another. Now imagine we had the option not to worry about the VM and to just focus on the application; that's exactly what containers help us achieve. Container instances let us focus on our applications rather than on managing VMs, learning new tools to manage them, or handling deployment. The applications we create run in containers, and running in a container is what lets us avoid having to manage the virtual machines. These containers can be deployed to the cloud using a single command if you're using the command-line interface, or a couple of button clicks if you're using the Azure portal, and containers are kept lightweight, yet they are just as secure as virtual machines.
Next, let's talk about the container service, sometimes called Azure Kubernetes Service: it helps us manage containers. A container is one thing, and a service used to manage containers is another; this Kubernetes service, AKS (formerly Azure Container Service, or ACS), helps us manage the containers. Let's expand on that a bit. The container service provides a way to simplify the creation, configuration, and management of a cluster of virtual machines that are preconfigured to run containerized applications on top of them. Deploying the virtual machines that run the containers might take 15 to 20 minutes, and once they're provisioned, we can manage them using a simple SSH tunnel. When this service runs applications, it runs them from Docker images, and Docker images make sure the applications the containers run are fully portable. The service also helps us orchestrate the container environment, and it ensures the applications we run in containers can be scaled to thousands or even tens of thousands of containers. So in a nutshell, moving an existing application into a container and running it using AKS is really easy; that's what it's all about: making application management and migration easy. Managing a container-based architecture, which as we said could mean tens of thousands of containers, is made simple using this container service, and even training a model on a large data set in a complex, resource-intensive environment is something AKS helps simplify.
All right, as the next thing in the container domain, let's talk about Container Registry. We touched on registries a little when we spoke about Docker images. A container registry is a single place where we can store our images, which are Docker images; when we use containers, it's Docker images that we use. Azure Container Registry is a central registry that eases container development by simplifying the storage and management of container images. We can store all kinds of images there, whether they're used in Docker Swarm or in Kubernetes; everything can be stored in Container Registry in Azure. Anytime we store a container image, it gives us an option for geo-replication, which means we can efficiently manage a single registry replicated across multiple regions. This geo-replication enables us to manage global deployments, assuming we have an environment that requires them, as one entity: we update or edit one image, that image gets replicated to the replication regions we've set up, and that one edit effectively updates the global images, which in turn provision the global application. So it's one edit, then replication, then provisioning of the application worldwide. This replication also helps with network latency, because anytime an application needs to deploy, it doesn't have to rely on a single source reachable only over a high-latency network; since we have replicas around the world, the application pulls from a location very near to it. Geo-replication means we manage the registry as a single entity that's replicated across multiple regions around the globe.
As the next thing in our learning, let's talk about Azure databases. Azure databases come in many flavors: relational, NoSQL, and cache types, and we're going to look at them one at a time. Azure SQL Database is a relational database; in fact, it's a relational database as a service, managed by Azure, so we don't have to do much management ourselves. It's based on the Microsoft SQL Server database engine, and it's a high-performance, reliable, and secure database; for that high reliability, high performance, and high security we really don't have to do anything, because it comes along with the service and is managed by Azure. There are two things I definitely need to mention about Azure SQL Database: it's an intelligent service, and it's fully managed by Azure. It also has built-in intelligence that learns app patterns and adapts to maximize performance, reliability, and data protection for the application, which is something not found in many of the other cloud providers I'm aware of, so I thought I'd mention it. It uses that built-in intelligence to learn the database usage patterns and help improve performance and protection, and migrating or importing data is very easy with Azure SQL Database, so it can be readily used for analytics, reporting, and intelligent applications in Azure.
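For a feel of how an application talks to Azure SQL Database, here is a hedged sketch using pyodbc; the server, database, and credential values are placeholders, and the exact ODBC driver name depends on what's installed on your machine.

```python
import pyodbc

# Server, database, user, and password are placeholders; the driver name may differ
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:demo-server.database.windows.net,1433;"
    "Database=demo-db;Uid=demo-user;Pwd=<password>;Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name FROM sys.tables")  # simple sanity-check query
for row in cursor.fetchall():
    print(row.name)
conn.close()
```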
As the next thing, let's talk about Azure Cosmos DB. Azure Cosmos DB is a database service for NoSQL workloads, created to provide low latency for applications that scale dynamically and rapidly. Azure Cosmos DB is a globally distributed, multi-model database, and it can be provisioned with the click of a button; that's all we have to do to provision Cosmos DB in Azure. It helps with scaling the database: we can elastically and independently scale throughput and storage across the database in any of the Azure geographic regions. It provides good throughput, good latency, and good availability, and Azure promises a comprehensive SLA that no other database offers; that's the best part about Cosmos DB. Cosmos DB was built with global distribution and horizontal scale in mind, and all of this we use by paying only for what we've used. And remember the difference between Azure Cosmos DB and SQL Database: Cosmos DB supports NoSQL, whereas SQL Database doesn't. A few other things about Azure Cosmos DB: it allows users to use key-value, graph, column-family, and document data models, and it gives users a number of API options like SQL, JavaScript, MongoDB, and a few others that you may want to check in the documentation at the time of reading. And the best part is that we pay only for the amount of storage and throughput we need, and both storage and throughput can be elastically scaled based on the requirements of the hour.
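Here's a hedged sketch using the azure-cosmos Python SDK to upsert and query a document; the endpoint, account key, database, container, and partition-key field are all made-up placeholders for illustration.

```python
from azure.cosmos import CosmosClient

# Endpoint, key, and names below are placeholders for illustration only
client = CosmosClient("https://demo-account.documents.azure.com:443/",
                      credential="<account-key>")
container = client.get_database_client("demo-db").get_container_client("users")

# Upsert a JSON document (assumes the container's partition key is /userId),
# then query it back with Cosmos DB's SQL-like syntax
container.upsert_item({"id": "1", "userId": "samuel", "email": "samuel@example.com"})
for item in container.query_items(
        query="SELECT * FROM c WHERE c.userId = 'samuel'",
        enable_cross_partition_query=True):
    print(item["email"])
```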
All right, let's talk about Redis Cache. A discussion about Azure databases wouldn't be complete without talking about Redis Cache. Azure Redis Cache is a secure data cache, sometimes also called a messaging broker, that provides high-throughput, low-latency access to data for applications, and it's based on the popular open-source caching product Redis. Now, what's the use case? It's typically used as a cache to improve the performance and scalability of systems that rely heavily on back-end data stores. Performance is improved by temporarily copying frequently accessed data to fast storage located very close to the application, and with Redis Cache that fast storage is in memory, instead of the data being loaded from the actual disk in the database itself. Redis Cache can also be used as an in-memory data structure store, a distributed non-relational database, and a message broker, so there's a variety of use cases. By using Redis Cache, application performance improves by taking advantage of the low latency and high throughput the Redis engine provides. To summarize: when we use Redis Cache, data is stored in memory instead of on disk to ensure high throughput and low latency when the application needs to read it, and it provides various levels of scaling without any downtime or interference. Redis Cache is backed by Redis server and supports strings, hashes, lists, and various other data structures.
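Connecting to an Azure Redis cache from Python looks like connecting to plain Redis. Here is a hedged sketch with redis-py, where the host name and access key are placeholders; Azure's TLS endpoint is typically port 6380.

```python
import redis

# Host and key are placeholders; Azure Cache for Redis normally uses TLS on port 6380
cache = redis.StrictRedis(
    host="demo-cache.redis.cache.windows.net",
    port=6380,
    password="<access-key>",
    ssl=True,
)

# Simple cache usage: store a value close to the app, with a short expiry
cache.set("user:samuel:email", "samuel@example.com", ex=300)  # expire in 5 minutes
print(cache.get("user:samuel:email"))
```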
Now let's talk about security and identity services. Identity management, specifically, is the process of first authenticating and then authorizing security principals, and it also involves controlling information about those principals. You might ask, what's a principal? Principals, or principal identities, are services, applications, users, groups, and more. The specialty of this identity management is that it not only helps authenticate and authorize principals in the cloud, it also helps authenticate and authorize principals and resources on premises, especially when you run a hybrid cloud environment. These identity management services and features give us additional levels of validation: identity management can provide multi-factor authentication, it can apply conditional access policies that permit or deny based on conditions, it can monitor suspicious activity and report it, and it can generate alerts for potential security issues so we can get involved and prevent a security incident from happening. So let's talk more about identity management. One of the services under security and identity management is Azure Security Center, which provides security management and threat protection across workloads in both cloud and hybrid environments. It helps control user access and application control to stop any malicious activity, it helps us find and fix vulnerabilities before they can be exploited, it integrates well with analytics that give us the intelligence to detect attacks and prevent them before they happen, and it works seamlessly with hybrid environments, so you don't need one policy for on premises and another for the cloud: it's a unified service for both.
The next service in security and identity is Key Vault. Key Vault is a service that helps safeguard the cryptographic keys and any other secrets used by cloud applications and services. In other words, Azure Key Vault is a tool for securely storing and accessing the secrets of the environment, where a secret is anything you want to keep under very tight access control, like certificates or passwords. If I tell you what Key Vault solves, that explains what Key Vault is. Key Vault is used for secrets management: it helps securely store tokens, passwords, and certificates. It helps with key management: it helps create and control the encryption keys we use to encrypt data. And it helps with certificate management: it lets us easily provision, manage, and deploy public and private SSL/TLS certificates in Azure, and more. So in a nutshell, Key Vault gives users the ability to provision new vaults and keys in a matter of minutes, with a single command or a couple of button clicks, and it helps users centrally manage their keys, secrets, and policies.
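Here is a hedged sketch of writing and reading a secret with the azure-keyvault-secrets Python SDK; the vault URL and secret name are placeholders, and the code assumes you're already signed in so DefaultAzureCredential can pick up your identity.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Vault URL and secret name are placeholders for illustration
client = SecretClient(vault_url="https://demo-vault.vault.azure.net",
                      credential=DefaultAzureCredential())

client.set_secret("db-password", "replace-me-securely")   # store a secret
secret = client.get_secret("db-password")                 # retrieve it later
print(secret.name, "retrieved (value deliberately not printed)")
```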
Next in the list, let's talk about Azure Active Directory. Azure Active Directory helps us create intelligence-driven access policies to limit resource usage and manage user identities. What does that mean? Azure Active Directory is a cloud-based directory and identity management service; it's a combination of core directory services, application access management, and identity protection. And one good thing about it, especially if you're running a hybrid environment and wondering how it will behave, is that Azure Active Directory is built to work across on-premises and cloud environments, and it works seamlessly with mobile applications as well. So in a nutshell, Azure Active Directory acts as a central point of identity and access management for our cloud environment, and it provides security solutions that protect against unauthorized access to our apps and data.
Now that we've discussed security and identity, let's talk about the management tools that Azure has to offer. Azure provides built-in management and account governance tools that help administrators and developers keep their resources secure and compliant, and again, they help both on premises and in the cloud. These management tools help us monitor the infrastructure and the applications; they also help in provisioning and configuring resources, updating apps, analyzing threats, backing up resources, and building disaster recovery, and they help apply policies and conditions to automate our environment and control costs. So Azure management plays a wide role across the Azure services. First among the management tools comes Azure Advisor. Azure Advisor acts as a guide that educates us about Azure best practices: it surfaces recommendations that we can select by category of service, and it shows the impact they would have on our environment if we follow them. Some recommendations are templatized, and it also provides customized recommendations based on our configuration and usage patterns. And these recommendations aren't hard to act on; it's not as if it recommends something and then leaves us hanging. They are easy to follow, easy to implement, and easy to see results from. You can think of Azure Advisor as a very personalized cloud consultant that helps you follow best practices to optimize your deployments: it analyzes our resources, configurations, and usage, and then recommends solutions that help improve cost-effectiveness, performance, high availability, and security in our Azure environment. So with Azure Advisor we get proactive, actionable, and personalized best-practice recommendations; you don't have to be an expert, just follow the advisor and your environment is going to be in good shape. It helps improve the performance, security, and high availability of the environment, and it also helps bring down the overall Azure spend. And the best part is that it's a free service that analyzes our Azure usage and recommends how to optimize our resources to reduce cost while boosting performance, strengthening security, and improving the overall reliability of our environment.
overall network performance and the health of the environment. Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs, meaning generate and collect logs, for resources in an Azure virtual network. With Network Watcher we can monitor and diagnose networking issues without even logging into the virtual machines; using just the real-time logs, we can work out what could be wrong with a certain resource, a VM, or a database. It is also used for analytics, to gain intelligence about what's happening in our network: we can get a lot of insight into current traffic patterns using the network security group flow logs that Network Watcher offers. It also helps investigate VPN connectivity issues using detailed logs. You may or may not know that VPN troubleshooting normally involves two parties, the network administrator on one side and the network administrator on the other, each checking logs on their own end. Network Watcher takes this to the next level: from the logs alone we can identify which side has the issue and suggest an appropriate fix. Next on the list is the Microsoft Azure portal. The Azure portal provides a single unified console for a wide range of activities, not only building but also managing and monitoring the web applications we build. The portal can be organized, in its appearance and layout, to match our work style, and from the portal users can control who gets to manage or access which resources. The portal also gives very good visibility into the spend on each resource, and if we customize it we can break down spend by team, by day, by department, and so on. It gives us a clear picture of where the money is going and where the bill is being consumed within the
Azure environment. Next in the list would be Azure resource manager. Now Azure resource manager enables us to
manage the use of our application's resources. We use Resource Manager to deploy, monitor, and manage solution resources as a group, as if they were a single entity. The infrastructure of an application is typically made up of many components: virtual machines, storage, virtual networks, web apps, database servers, and perhaps third-party services we use in our environment. By nature these are separate services, but with Azure Resource Manager we don't treat them as unrelated pieces; we see them as a group of related resources that together support an application. Resource Manager identifies the relationships between them and lets us see and manage them as a single entity. It also ensures that resources are deployed in a consistent, repeatable state alongside the rest of the application, and it helps users visualize their resources and how they are connected, which makes managing them much easier. Resource groups are also used to control who can access which resources within the organization, giving you fine-grained control over
who gets to access and who does not get access. And the last one in the management tools would be automation.
Azure Automation gives us the ability to automate configuration and install updates across hybrid environments. It provides a cloud-based automation and configuration service, and it can be applied to non-Azure environments as well, including on premises. Some of the things we can automate are process automation, update management, and configuration management. Azure Automation provides complete control during deployment, during operations, and during the decommissioning of workloads and resources. With automation we can take any time-consuming, mundane, or error-prone task and automate it, so no matter how many times it runs, it runs the same way. That really helps reduce overall time and overhead cost: because much of the work is automated, it is free of human error, so the application is less likely to break and keeps running longer. With automation we can also easily build an inventory of operating system resources and configuration items in one place, which helps in tracking changes and investigating issues. Say something breaks: because automation is logging the configuration changes, it's easy to track down what changed recently, then go back and fix it or roll it back. That solves the problem, and that summarizes the Azure management tools, or management services. Now let's talk about the networking tools, or networking services, available in Azure.
Azure offers a variety of networking services, and I'm sure this is going to be an interesting part. Let's begin our discussion with the content delivery network. The content delivery network, CDN for short, lets us deliver content securely and reliably. It also accelerates delivery, in other words it reduces delivery time, also called load time, and it saves bandwidth and increases the responsiveness of the application. Let's expand on this. A content delivery network is a distributed network of servers that can efficiently deliver web content to users. CDNs store cached content on global edge servers, also called POPs or points of presence, located very close to the end users, so latency is minimized. It's like taking multiple copies of the data and storing them in different parts of the world; whoever requests it gets the data delivered from a server that is local to them. So a CDN gives developers a global solution for rapidly delivering high-bandwidth content by caching it in strategically placed locations near the users. One advantage of a CDN is that we can handle spikes and heavy loads very efficiently, and we can also run analytics against the logs the CDN generates, which gives good insight into usage and future business needs for the application. And, like a lot of other services, it is pay as you go: you use the resource first and pay only for what you have used. The next service in networking is ExpressRoute. ExpressRoute is
a circuit, a link, that provides a direct private connection to Azure, and because it's direct it gives a low-latency link with good speed and reliability for data transfer between, say, on premises and Azure. Let's expand on this a bit. ExpressRoute is a service that provides a private connection between Microsoft data centers and infrastructure on our premises or in a colocation facility we might use. These ExpressRoute connections do not go over the public internet, and because of that they offer higher security, reliability, and speed, and lower latency, than typical internet connections. Because it is fast, reliable, and low latency, it can be used as an extension of our existing data center: users will not feel the difference between accessing services on premises or in the cloud, because latency is minimized as much as possible. And because it's a private line rather than a public internet link, it can be used to build hybrid applications without compromising privacy or performance. ExpressRoute can also be used for backups. Imagine pushing a backup over the internet; that would be a nightmare. Over ExpressRoute it is fast. Likewise, imagine recovering data from the cloud to on premises over the internet in the middle of a disaster; that would be even worse. So ExpressRoute can be used not only to back up data but also to recover it, and because it provides good speed and low latency, recovery is much faster. The next service we're going to discuss in networking is Azure DNS.
Azure DNS allows us to host domain names in Azure with exceptional performance and availability. Azure DNS is used to set up and manage DNS zones and records for our domains in the cloud. As the name says, it is a DNS hosting service that provides name resolution using Azure's own infrastructure, and it lets us manage DNS ourselves through the Azure portal with the same credentials we already use. Imagine having a DNS provider outside our IT environment, with a separate portal just to manage DNS; those days are gone, and we can now manage DNS in the very same Azure portal as the rest of our services. Azure DNS also integrates well with other DNS service providers. It uses a global network of name servers to answer DNS queries quickly, and these domains have higher availability than many other domain service providers promise, because the name servers are maintained by Microsoft: if one server fails, the others keep resolving and the failed one resyncs with the rest. Microsoft's global network of name servers ensures that our domain names are resolved correctly and are available nearly all of the time.
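As a rough illustration of managing DNS records in the same place as the rest of our services, here is a sketch using the azure-mgmt-dns Python package; the subscription ID, resource group, zone, and IP address are hypothetical, and the exact parameter shape may vary slightly between SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

# Hypothetical subscription, resource group, and zone names
dns = DnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create (or update) an A record set named "www" in an existing zone
dns.record_sets.create_or_update(
    resource_group_name="rg-dns-demo",
    zone_name="example.com",
    relative_record_set_name="www",
    record_type="A",
    parameters={"ttl": 3600, "arecords": [{"ipv4_address": "203.0.113.10"}]},
)
print("www.example.com now resolves via Azure DNS name servers")
```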
Next on the list of networking services is the virtual network, and I'm sure you're going to like this one. Virtual networking in Azure lets us set up our own private network inside the public cloud, giving us an isolated and highly secure environment for our applications. Let's expand on this. An Azure virtual network is where we provision Azure virtual machines, and it lets them communicate securely with on-premises networks and the internet. It also lets us control the traffic that flows in and out of the virtual network, to other virtual networks and to the internet. An Azure virtual network, often called a VNet, is a representation of our own network in the cloud: a logical isolation of the Azure cloud dedicated to our subscription. Our environments are provisioned in a VNet that is separate from other customers' VNets, which gives us that logical separation. A virtual network can also be used to set up VPNs in the cloud so we can connect the cloud and on-premises infrastructure; in a hybrid environment we will certainly be using a virtual network, because a VPN is required for secure data transfer in and out of the cloud and the on-premises environment. The VNet gives us a boundary for our resources: traffic between Azure resources stays logically within the virtual network. And the network design is handed over to us. We pick the IP address ranges, the routing, and the subnets; it isn't something pre-cooked that we merely consume. We can build the network from scratch, choose the address space we like, and decide which subnet is allowed to talk to which. And, as I said, if you run a hybrid environment you will definitely need a virtual network, because it connects on premises and the cloud securely using a VPN.
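Here is a hedged sketch of carving out such a network with the azure-mgmt-network Python package, picking our own address space and subnet as described above; the subscription, resource group, region, and address ranges are placeholder values.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical subscription and resource group
network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create a VNet with one subnet; we pick the address space and subnet layout ourselves
poller = network.virtual_networks.begin_create_or_update(
    "rg-network-demo",
    "vnet-demo",
    {
        "location": "centralus",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [{"name": "app-subnet", "address_prefix": "10.0.1.0/24"}],
    },
)
vnet = poller.result()  # block until provisioning completes
print(vnet.name, "provisioned with subnets:", [s.name for s in vnet.subnets])
```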
The last product we're going to discuss in networking is the load balancer. The load balancer gives applications good availability and good network performance. How does it work? It works by balancing traffic to and from virtual machines and cloud resources, and it can also balance traffic between cloud and cross-premises virtual networks. With Azure Load Balancer we can scale our application and create high availability for our services, which means the application will be available most of the time. If a server dies, it simply stops receiving traffic. If a dead server kept getting traffic, users would experience downtime; because it doesn't, the connection is shifted to a healthy instance and the user experiences uptime all the time. The load balancer supports inbound and outbound scenarios, provides low latency and high throughput, and lets us scale TCP and UDP flows from hundreds to thousands to even millions of connections, because the load balancer now sits between the user and the application. How does it operate? The load balancer receives traffic and distributes it across the backend pool of instances connected to it, according to the rules and health probes that we set. That's how it maintains high availability. So what does the load balancer help with? It helps create highly available, scalable applications in the cloud in minutes, and it can automatically scale the environment as application traffic grows. One key feature is health checking: it probes the health of the application instances, stops sending requests to unhealthy instances, and shifts those connections to healthy ones, so a user never gets stuck with an instance that isn't healthy. That's all you need to know about the networking services. Now
let's talk about the storage services, or the storage domain, in Azure. Azure Storage in general is a Microsoft-managed service providing cloud storage that is highly available, secure, durable, scalable, and redundant, because it's all managed by Azure; we don't have to manage much of it ourselves. Azure Storage is really a group of storage services catering to different needs, and the products include Azure Blobs, which is object storage, Azure Data Lake, Azure Files, Azure Queues, Azure Tables, and more. But let's start our discussion with Azure StorSimple. StorSimple is a hybrid cloud storage solution that can lower storage costs by up to 60 percent compared to what you would spend without it. StorSimple is an integrated storage solution that manages storage tasks between on-premises devices and cloud storage. What I really like about Azure is that it is built with hybrid environments in mind. With many other cloud providers, running a hybrid environment is a real challenge; you run into compatibility gaps and can't always find an on-premises-plus-cloud solution for your need. With Azure, especially in storage, a lot of what we're going to look at is clearly designed with hybrid environments in mind. So, coming back to StorSimple: it is a very efficient, cost-effective, and easily manageable SAN (storage area network) solution in the cloud. As a bit of background, the name comes from the StorSimple 8000 series appliances that work together with Azure storage. StorSimple comes with storage tiering to manage stored data across different storage media: the most current data is kept on premises on solid-state drives, data used less frequently is stored on hard disk drives, and very old, rarely used data that is a candidate for archiving is pushed to the cloud. So you can see how tiering happens automatically in StorSimple. Another nice feature of StorSimple is that it lets us create on-demand and scheduled backups of data and store them locally or in the cloud. These backups are taken as incremental snapshots, which means they can be created and restored quickly; it is not a full backup every time. These cloud snapshots can be critically important in a disaster recovery scenario, because the snapshots can be pulled in, placed on storage systems, and then become the live data. Recovery is faster if you have properly scheduled, frequent backups, and StorSimple really eases our backup mechanism, which in turn eases our disaster recovery procedures as well. StorSimple can be used to automate data management, data migration, data movement, and data tiering across the enterprise, both in the cloud and on premises. It improves compliance and accelerates disaster recovery for our environment. And if there's one thing that grows every single day in our environment, it's storage; StorSimple addresses that need. We no longer have to pre-plan storage capacity in great detail, because the cloud tier is available on a pay-as-you-go basis. Yes, some planning is still needed, but not nearly as much as without the cloud or without StorSimple.
The next service under storage that we'd like to discuss is Data Lake Store. Data Lake Store is a cost-effective solution specifically for big data analytics. Let's expand on this. Data Lake Store is an enterprise-wide repository for big data analytics workloads, which are the main consumers of this service. It lets us capture data of any size, any type, and any ingestion speed, and collect it in one single place for operational efficiency and for analytics. Hadoop in Azure depends heavily on Data Lake Store, and the store is designed with analytics performance in mind; so whenever you're running analytics or Hadoop in the Azure cloud, the natural storage choice is Data Lake Store. It is also designed with security in mind, so whenever we use Azure storage we can rest assured that it runs in data centers built with security in mind. Behind the scenes, Data Lake Store also builds on Azure Blob Storage for global scale, durability, and performance.
Let's talk about Blob Storage. Blob Storage provides large amounts of storage and scalability; it is the object storage solution for the Azure cloud. Let's expand a bit. Azure Blob Storage is Microsoft's offering for object storage, optimized for storing massive amounts of unstructured data, which could be text or binary, and designed for rapid reads. Describing the scenarios where we'd use Blob Storage is a good way to understand what it is. Today it is used in many IT environments to serve images or documents directly to the browser, to store files for distributed access where many clients can fetch data from it, to stream video and audio, to write log files, to store data for backup and restore in disaster recovery scenarios, and as archive storage in a lot of cloud IT environments. It is also widely used for analytics data, not only storing it but running analytic queries against the data stored in it. So Blob Storage covers a wide set of use cases. In addition to all of that, it supports versioning: anytime somebody updates an object, a new version gets created, which means we can roll back whenever needed. It provides a lot of flexibility for optimizing storage needs, and it supports tiering of data, so when you explore it you'll find plenty of options that suit your particular storage needs. And, as mentioned, it stores unstructured data and exposes that data to customers through a REST-based object storage interface.
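As a small illustration, here is a sketch of writing and reading a blob over that REST-based interface with the azure-storage-blob Python package; the connection string, container name, and blob name are placeholders.

```python
from azure.storage.blob import BlobServiceClient

# Hypothetical connection string; in practice this would come from Key Vault or app settings
service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")

# Create a container and upload an object into it
container = service.create_container("app-logs")
blob = container.get_blob_client("2024/app.log")
blob.upload_blob(b"first log line\n", overwrite=True)

# Read it back over the same REST-based interface
print(blob.download_blob().readall())
```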
The next product among the storage services is Queue Storage. Queue Storage provides durable queues for large-volume cloud services; it is a simple, cost-effective, durable messaging queue for large workloads. Let's expand on Queue Storage for a moment. Queue Storage is a service for storing large numbers of messages that can be accessed from anywhere in the world through HTTP or HTTPS calls. A single queue message can be up to 64 KB in size, and a single queue can contain millions of such messages, up to the total capacity limit of the storage account itself, which makes it easy to estimate how much a queue can hold. Azure Queue Storage provides a messaging solution between applications and components in the cloud. How does it help? It helps in designing applications for scale and in decoupling applications: one application is no longer tightly dependent on another, because the queue sits in between and connects, yet decouples, the two. With a queue in between, both sides can scale up or down independently.
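To show that decoupling idea in code, here is a minimal sketch with the azure-storage-queue Python package; the connection string and queue name are placeholders, and the producer and consumer would normally live in separate applications.

```python
from azure.storage.queue import QueueClient

# Hypothetical connection string and queue name
queue = QueueClient.from_connection_string("<storage-account-connection-string>", "orders")
queue.create_queue()

# Producer side: enqueue work without knowing anything about the consumer
queue.send_message("order-1001")

# Consumer side: pull messages at its own pace, then delete them once processed
for msg in queue.receive_messages():
    print("processing", msg.content)
    queue.delete_message(msg)
```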
Next among the storage services is File Storage. Azure Files provides secure, simple, and managed cloud file shares. With a file share in the cloud, we can extend the performance and capacity of on-premises file servers, and many of the familiar tools for file share management can be used with it. Let's expand a bit. Azure Files offers fully managed file shares in the cloud that can be accessed via the SMB (Server Message Block) protocol. Azure file shares can be mounted concurrently by cloud and on-premises deployments, and most operating systems are compatible: Windows, Linux, and macOS. In addition to being accessible from on premises and from the cloud, it can also cache data locally so it is immediately available when needed. That is an advanced feature compared to other file shares available on the market.
table storage. Now table storage is a nosql key value pair storage for quick deployments with large semi-structured
data sets. The difference between one important thing to note with table storage is that it has a flexible data
schema and also it's highly available. Let's expand a bit on table storage. So anytime you want to pick a schemalless a
nosql type table storage is the one we'll end up picking. It provides an key pair attribute storage with a
schemalless design. This table storage is very fast and very cost effective for many of the applications and for the
same amount of uh data. It's a lot cheaper when you compare it with the traditional SQL data or data storage. So
some of the things that we can store in the table storage are of course they're going to be flexible data sheets uh such
as uh user data for web application address books device information and other types of metadata for our service
requirements and it can have any number of tables up to the capacity limit of the storage account. Now this is not
possible with SQL. This is only possible with NoSQL especially with table storage in Azure. explanation of storage really
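Here is a short sketch of that schemaless, key-attribute model using the azure-data-tables Python package; the connection string, table name, and entity values are hypothetical.

```python
from azure.data.tables import TableServiceClient

# Hypothetical connection string and table name
service = TableServiceClient.from_connection_string("<storage-account-connection-string>")
table = service.create_table_if_not_exists("DeviceInfo")

# Entities are schemaless: only PartitionKey and RowKey are required,
# every other property can vary from row to row.
table.create_entity({
    "PartitionKey": "building-1",
    "RowKey": "device-042",
    "model": "sensor-x",
    "firmware": "1.4.2",
})

entity = table.get_entity(partition_key="building-1", row_key="device-042")
print(entity["model"])
```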
That explanation of storage concluded the length and breadth of what the CEO was explaining to his IT person. But the IT person is not done yet; even after this lengthy discussion he still has a question, and the question was: there are a lot of other cloud providers available, so what made you specifically choose Azure? From the kind of question he asked, we can tell he is very curious, and it is definitely a thoughtful question. So the CEO went on to explain the other capabilities of Azure, and how it outruns the rest of the cloud providers; he picked up the discussion again, but from a different angle. He started with the platform-as-a-service capabilities. With platform as a service, infrastructure management is completely taken care of by Microsoft, allowing users to focus entirely on innovation. No more infrastructure management responsibilities; go and focus on innovation. That's the fancy way of saying it, but that is what we get when we buy platform as a service: we can spend our time on innovation rather than just maintaining infrastructure. Azure is also especially .NET friendly. Azure supports the .NET framework and is optimized to work with both old and new applications built on it, so if your application is .NET, most of the time a fair comparison will lead you to pick Azure as your cloud service provider. The security offerings are designed around the Security Development Lifecycle, an industry-leading assurance process, so when we buy services from Azure we know the environment was designed with that lifecycle in mind. And as I have mentioned many times before, and would like to mention again, Azure is well thought out for hybrid environments, where a lot of other cloud providers have fallen short. It is very easy to set up a hybrid environment, whether you migrate the data or keep running hybrid, because Azure provides seamless connectivity between on-premises data centers and the public cloud. It also has a very gentle learning curve. The documentation is neat and clear, and it encourages you to learn more, to think, to experiment, and to easily grasp how the services work. Azure also lets you use technologies that many businesses have relied on for years, so there is a long history behind it, and the step-by-step certification levels and documentation make for a gentle learning curve that is often missing with other cloud service providers. And here is something that really impresses CTOs and people working in finance and budgeting: if an organization is already using Microsoft software, it can be bold and ask for a discount that reduces the overall Azure spend, in other words the overall pricing of Azure. Those are the points that helped the CEO pick Azure as his cloud service provider. The CEO then goes on to talk about the different companies currently using Azure, and they are using it for a reason: Pixar, Boeing, Samsung, EasyJet, Xerox, BMW, 3M; major multinational, multi-billion-dollar companies rely on Azure to run and operate their IT. The CEO suspects his IT person still won't be fully convinced until he shows him a visual of how easy things are in Azure, so he walks through a practical application of Azure, which is exactly what I'm going to show you as well. All right: a quick project of building a .NET application in an Azure Web App and connecting it to a SQL database will solidify all the
knowledge that we have gained so far. So this is what we're going to do. I have an Azure account open, as you can see I'm logged in, and everything here is fresh: if I go to resource groups there's nothing in there yet. We're going to build an application like this, a to-do list application, which runs from the web app, takes input from us, and saves it in the database connected to it. You can already see it is a two-tier application: web and DB. So let me go back to my Azure account. The first thing is to create a resource group; let's give it a meaningful name and call it Azure Simply Learn. The subscription is the free trial, and for location, pick the one nearest to you or wherever you want to launch your application. For this use case I'm going to pick Central US and click create. It takes a moment, and there you go, it's created; it's called Azure Simply Learn.
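If you prefer scripting that same step rather than clicking through the portal, here is a sketch with the azure-mgmt-resource Python package; the subscription ID is a placeholder, and the resource group name mirrors the demo's (with spaces removed, since resource group names don't allow them).

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Hypothetical subscription ID; the resource group name mirrors the one used in the demo
resource_client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

rg = resource_client.resource_groups.create_or_update(
    "AzureSimplyLearn",
    {"location": "centralus"},  # same region chosen in the portal walkthrough
)
print("resource group ready:", rg.name, rg.location)
```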
Now, what do we need? We need a web app and a separate SQL database. Let's first get the web app running: go to App Services and click Add. It's not the Web App + SQL option we want; we want Web App alone for this example. So let's create a web app and give it a quick name, Azure Simply Learn. The subscription is the free trial, and I'm going to use the existing resource group we created a moment ago. It's going to run on Windows, and we're going to publish code. All set, so we can create it. While that is provisioning, let me create the database: SQL Database, Create a database, and give it a name; let's call it Azure SimplyLearn DB and put it in the existing resource group we created. It's going to be a blank database, and it requires a few settings: the name of the server, the admin login and password, and the location where it will be created. The server name is going to be Azure SimplyLearn DB, the admin login will be simplylearn, and I'll pick a password. Click create. So what have we done so far? We have created a web app and a database
in the resource group that we created. If I go to resource groups, it takes a little while before things show up; I only have one resource group so far, Azure Simply Learn, and inside it a bunch of resources are still being created. In the meantime, I have my application right here in Visual Studio. Once the infrastructure is set up and ready in the Azure console, we're going to go back to Visual Studio and feed those details in, so the code knows what the database is and which credentials to use to log in to it. By feeding that information into Visual Studio we're really feeding it into the application, and then we'll run it from there. Deploying the application takes quite a while, so we have to be patient. All right, now we have all the resources we need for the application to run: here is my database and here is my App Service. There's one more thing we need to do, which is to create a firewall exception rule. The application is going to run from my local desktop and connect to the database, so let's add an exception rule by simply adding the client IP; it picks up the IP of the laptop I'm using right now and creates an exception so it can access the database. That's done. Now we can go back to Visual Studio. I already have a couple of configurations pushed from Visual Studio, so I'm going to clean those up; if you're doing this for the first time you may not need to. Let's start from scratch; this is very similar to what you would do in your own environment. So we're going to select
an existing Azure App Service. Before that, notice that I have logged in with my credentials, so Visual Studio pulls a few things automatically from my Azure account. In this case I'm going to use an existing Azure app, so select existing and then click publish. If you recall, these are the very same resources we created a while back. We've clicked save and it is validating the code, and it will come up with a URL. Initially the URL isn't going to work, because we haven't mapped the application to the database yet; that is the next thing. The app has been published and is running from my web app, and as expected it throws an error, because we haven't connected the app and the DB. So let's do that. Let's go to Server Explorer, which is where we'll see the databases we've created. Let's quickly verify that: go back to the appropriate resource group, and here I have my database, Azure SimplyLearn database. It has some issues connecting to my database; give me a quick moment and let's fix it.
So we'll have to map the database into this application. Let's go to Solution Explorer, click publish, and a page like this is shown; from here we can go to configure. Here is our web app with all its credentials. Let's validate the connection first, and then click next. This is my DB connection string, which the app will use to connect to my DB. If you recall, our DB was Azure SimplyLearn DB, and that's not what is shown here, so let's fix that. Click configure, change the data source to SQL Server, and then put in the DB server's URL: go back to Azure, copy the server's name, and paste it here. Then the username to connect to the server, and the password; put those in. It connects to the Azure infrastructure, and here is my database; if you recall, it is Azure SimplyLearn DB. Let's test the connection: the connection is good, so click OK. Now it shows up correctly as Azure SimplyLearn DB, the database we created, and it is configured. Next, let's modify the data connections and map them to the appropriate database again: the database name is Azure SimplyLearn DB, the data source is SQL Server, the username is simplylearn, and the password is what we set in the beginning. Validate the connection, it's good, click OK. Now we're all set to publish the application again. The application knows how to connect to the database, because we have given it the correct connection string: the DNS name, the username, and the password it needs to connect. Visual Studio builds the project, and once it is up and running we'll be given a URL; anything we submit through that URL is going to
receive the input and save it in the database. So here is my to-do list app, and I can start creating to-do items for myself. I already have some items listed; I can create an entry and it gets stored in the database. I can create another entry, "take the dog for a walk", and that gets stored too, and another, "book tickets for the science exhibition", and that is received and put into the database as well. And that concludes this part of the session: we saw how to use Azure services to create a web app and connect it to a DB instance, and how those two services, which are separate and decoupled by default, can be wired together with connection strings to build a working app.
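The demo app is .NET, but the connection-string idea is the same in any language. Here is a hedged Python sketch using pyodbc, where the server, database, login, and password are placeholder values mirroring the demo's naming; it assumes the ODBC Driver 18 for SQL Server is installed and the firewall rule we added is in place.

```python
import pyodbc

# Placeholder values mirroring the demo (server, database, login); not real credentials
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=azuresimplylearndb.database.windows.net,1433;"
    "DATABASE=AzureSimplyLearnDB;"
    "UID=simplylearn;"
    "PWD=<password-from-key-vault>;"
    "Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    cur = conn.cursor()
    cur.execute("SELECT 1")  # simple connectivity check
    print("connected:", cur.fetchone()[0] == 1)
```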
Now, let us first understand what exactly cloud security is. Cloud security is a set of tools and practices designed to protect businesses from threats both inside and outside the organization. As companies move to use more online tools and services, ensuring their cloud security is essential. Let's start with a simple case study. Imagine an IT consulting
firm. The owner Alex decides to move the company's operation to the cloud to streamline project management, client
interactions, and data storage. Alex chooses a cloud service provider to host all the data and applications. But here's
the thing, the internet can be a dangerous place. Just like you would want to protect your home from burglars,
Alex needs to protect his business from cyber criminals who might try to steal sensitive client data or disrupt
services. This is where cloud security comes in. Cloud security involves using various tools and practices to keep your
data safe. Ensure only authorized people can access it and protect it from the potential threats. For Alex, this means
making sure the client's information is encrypted so even if someone intercepts it, they can't read it. Setting up
strong passwords and multiffactor authentication to make sure only the right people have access and regularly
updating software to fix any security weaknesses. So in simple terms, cloud security is like having a high-tech
security system for your online data and applications. It's about making sure your valuable information is safe and
sound just like you would protect your most prized possessions. But why is cloud security so important? Lately,
terms like digital transformation and cloud migration are often heard in businesses. These terms mean different things to different companies, but they both highlight the need to change. As businesses adopt these new technologies to improve their operations, they face new challenges in keeping everything secure while maintaining productivity. Modern technology allows businesses to work beyond traditional office setups, but switching to cloud-based systems can be risky if not done correctly. To get the best results, businesses need to understand how to use these technologies safely and effectively. This means finding the right balance between using advanced cloud tools and ensuring strong security practices are in place. Before we jump in, let's talk briefly about why the cloud is such a game changer. Cloud computing offers on-demand self-service, broad network access, rapid elasticity,
and scalable resources, making it incredibly convenient for businesses. You can access resources from anywhere,
scale up or down based on your needs, and enjoy a flexible and efficient computing environment. However, these
benefits can introduce unique security challenges that need to be addressed. Data breaches: encryption is your best friend. Make sure your data is encrypted both at rest and in transit. Think of it like this: if someone steals a lock box but doesn't have the key, they can't access what's inside. Access controls: only authorized users should be able to access sensitive data. Use role-based access control (RBAC) and multi-factor authentication (MFA) to make this happen. It's like having a bouncer at a VIP party; only the right people get in. Then we have insecure APIs. Secure coding practices: always use secure coding practices to protect your APIs, validate inputs, and use proper authentication. It's like making sure your doors and windows are locked tight. Regular audits: regularly audit your APIs for vulnerabilities. Think of it as a routine checkup to keep everything in top shape. Then we have insider threats. To overcome these, we have monitoring: keep an eye on what's happening within your organization and use monitoring tools to detect unusual activity. It's like having security cameras inside your premises. Employee training: educate your employees about security best practices. Sometimes a simple mistake can lead to big problems, and training is the first line of defense. The next threat is denial-of-service attacks. Traffic management: use traffic management tools to filter out malicious traffic. It's like having a floodgate to control the flow of water and keep out the bad stuff. Redundancy and load balancing: implement redundancy and load balancing to ensure your system can handle sudden spikes in traffic. It's like having multiple lanes on a highway to prevent traffic jams. Then we have advanced persistent threats, APTs. To overcome these, ensure continuous monitoring: always watch your systems for any signs of intrusion and use advanced security measures to detect and respond to threats quickly. Think of it as having a night watchman on duty 24/7. Then we have incident response plans: have a plan in place to respond to security incidents. It's like having an emergency drill; you'll know exactly what to do if something goes wrong. Here are some best practices to enhance cloud security. First, implement strong access controls: use multi-factor authentication and limit access based on roles so that users only have the permissions they need. Regularly update and patch systems: keep your systems up to date with the latest patches to close vulnerabilities that could be exploited by attackers. Then we have employee training: educate your employees on security best practices, such as recognizing phishing attempts and creating strong passwords. Then we have network segmentation: divide your network into smaller, more secure zones to limit the impact of potential breaches. Monitoring and logging: continuously monitor your environment and maintain detailed logs to detect and respond to threats promptly. Cloud security is an ongoing effort that requires vigilance, the right tools, and a commitment to best practices. By staying informed, using the right security measures, and following best practices, you can protect your cloud environment from threats. Let's now take a look at
some individual features of S3, starting with lifecycle management. Lifecycle management is very interesting because it allows us to define rules that automate the transition of objects from one storage class to another, without us having to copy things over manually. You can imagine how time-consuming that would be if we had to do it by hand. We're going to see this very soon in a lab, but let me first discuss how it works. It's basically a graphical user interface, very simple to use, and when you come up with a lifecycle management rule you define two things: the transition action and the expiration action. The transition action says something like: I want to transition objects, maybe all objects, or maybe only objects with a specific prefix, for example in a particular folder, from one storage class to another. Say from Standard to Standard-Infrequent Access after 45 days, respecting the 30-day minimum we spoke of before, and then maybe after 90 days transition those objects from Standard-IA straight to Glacier, or to Glacier Deep Archive after 180 days. You come up with whatever combination you see fit; it doesn't have to be sequential from S3 Standard to Standard-IA to One Zone-IA and so on, because, as we discussed before, it depends what kind of objects you're willing to put in One Zone-IA: objects you don't really mind losing if that one availability zone goes down. So you decide those rules. It turns out this isn't even a simple task, because you have to monitor your usage patterns to see which data is hot and which is cold, and which lifecycle management scheme gives you the lowest cost. So you have to put somebody on that job and make the best informed decisions based on your access patterns, and that is something you need to monitor continuously. What we can do instead is opt for something called S3 Intelligent-Tiering, which analyzes your workload, and after about 30 days of observing your access patterns it automatically transitions objects from S3 Standard to S3 Standard-Infrequent Access. It doesn't go past the infrequent access tier; it doesn't move objects on to Glacier and so on. It offers this at a small price overhead: there is a monitoring fee for the feature, but it's a very nominal, very low fee. And the nice thing is that if you ever take an object out of the infrequent access tier before the 30-day minimum we spoke of earlier, you will not be charged the usual early-retrieval overhead. Why? Because you're using Intelligent-Tiering and already paying the monitoring fee, so Intelligent-Tiering simply moves the object out of the infrequent access tier and back into the S3 Standard class when you need it before the 30 days, and in that case you won't be charged that overhead. That makes it a very good option if you don't want to dedicate somebody to that job: yes, you pay a little overhead for the monitoring fee, but on the other side of the spectrum you're not paying somebody to spend many hours building and running a system to monitor your data access patterns. So let's take a
look at how to do this right now. Let's implement our own life cycle management rules. So let's now create a life cycle
rule inside our bucket. First off, we're going to need to go to the management tab in the bucket that we just created.
And right on the top, you see right away life cycle rule. We're going to create life cycle rule. And we're going to name
it. So, I'm just going to use something very simple, like simplylearn-lifecycle-rule.
And we have the option of creating this rule for every single object in the bucket or we can limit the scope to a
certain type of file perhaps with a prefix like I could see one right now something like log. So anything that we
categorize as a log file will transition from one storage tier to the next as per our instructions. We're doing this
because we really want to save on costs, right? It's not so much of organizing what's your older data versus your newer
data. It's more about reducing that storage cost as your objects get less and less used. So in this case, logs are
a good fit because perhaps you're using your logs for the first 30 days. You're sifting through them. Um you're trying
to get insights on them, but then you kind of move them out of the way because they become old data and you don't need
them anymore. So, we're going to see how we can uh transition them to another pricing tier, another storage tier. Uh
we could also do this with object tags, which is a very powerful feature. And in the life cycle rules action, you have to
at least pick one of these options. Now, since we haven't enabled versioning yet, what I'm going to do is just select
transition the current version of the object between these storage classes. So as a reminder of what we already covered
in the slides, our storage classes are right over here. So the one that's missing is
obviously the default standard storage class, which all objects are placed in by default. So what we're going to say
is this. We want our objects that are in the default Standard storage class to go to the Standard-Infrequent Access storage
class after 30 days. And that'll give us a nice discount on those objects being stored. Then we want to add another
transition. And let's say we want to transition them to Glacier after 90 days. And then as a big finale, we want
to go to Glacier deep archive. You can see the rest are grayed out. Wouldn't make sense to go back. And maybe after
180 days, we want to go there. Okay. Now, there's a little bit of um a warning or call to attention here.
They're saying if you're going to store very small files um into Glacier, not a great idea.
There's an overhead in terms of metadata that's added and also there's an additional cost associated with storing
small files in Glacier. So, we're just going to acknowledge that. Of course, for the demonstration, that's fine. In
real life, you'd want to store very big tar or zip files that bundle one or more log files; that would bypass the surcharge you would otherwise get. And over here you have the timeline summary of everything we selected up above: after 30 days Standard-Infrequent Access, after 90 days Glacier,
and after 180 days glacier deep archive. So let's go and create that rule. All right. So, we see that the rule is
already enabled and at any time you could go back and disable this if ever you had um a reason to do so. We can
easily delete it as well or view the details and and edit it as well. So, if we go back to our bucket now, what I've
done is created that prefix with the /logs. Since we're not doing this from the command line, we're going to create
a logs folder over here that will fit that prefix. So, we're going to create logs, create folder, and now we're going
to upload our, let's say, Apache log files in here. So, we're going to upload one demonstration Apache log file that
I've created with just one line in there, of course, just for demonstration purposes. We're going to upload
that. We'll just close that dialog, and now we have our Apache log file in there. So what's
going to happen is that, because we have that lifecycle rule in place, after 30 days any file that has the logs prefix, which basically means it is placed inside this folder, will be transitioned as per the lifecycle rule we just created. So congratulations, you just created your first S3 lifecycle rule.
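The same rule can be expressed from the SDK instead of the console. Here is a sketch using boto3, where the bucket name is hypothetical and AWS credentials are assumed to be configured already.

```python
import boto3

s3 = boto3.client("s3")  # assumes AWS credentials are already configured

# Same idea as the console rule: objects under the logs/ prefix move to
# Standard-IA after 30 days, Glacier after 90, and Glacier Deep Archive after 180.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-simplylearn-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "simplylearn-lifecycle-rule",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```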
Let's now move over to bucket policies. Bucket policies allow or deny access to the bucket itself and to the objects within it, for specific users or for other services inside the AWS network. These policies fall under the category of IAM policies; IAM stands for Identity and Access Management, which is a whole other topic dealing with security at large. No service in AWS is allowed to access another service or its data, for example data within S3, unless you explicitly allow it through these IAM policies. One of the ways we do that is by attaching a policy written in JSON format. At the end of the day it's a text file we write; that's the artifact, and that's a good thing, because we can configuration-control it in our source control, version it, and keep it alongside our source code, so when we deploy everything it is part of our deployment package. We have several ways of creating one. We can use what's called the policy generator, a graphical user interface that lets us point, click, and populate a few text boxes, and it generates the JSON document we then attach to our S3 bucket. That document determines, as I said, which users or services have access to whichever API actions are available for that resource. So we might say we want certain users to be able only to list the contents of this bucket, and not to delete or upload objects. You can get very fine-grained permissions based on the kinds of actions you want to allow on this resource.
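Before jumping into the console lab, here is a sketch of attaching such a JSON policy from code with boto3; the bucket name is hypothetical, and this example grants read-only listing and object reads rather than the all-actions policy used in the demonstration.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-simplylearn-bucket"  # hypothetical bucket name

# A read-only policy: anyone may list the bucket and get objects, nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
    ],
}

# Attach the JSON document to the bucket (the block-public-access settings must permit this)
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```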
To really bring this home, let's now see how to create an S3 bucket policy in a lab. Going back to our bucket, we're going to go into Permissions. So, the whole point of
coming up with a bucket policy is that we want to control who or what, the what being other services have access to our
bucket and our objects within our bucket. So there are several ways we can go about doing this. Let's edit a bucket
policy. One, we can go and look at a whole bunch of pre-canned examples, which is a good thing to do. Two, we could
actually go in here and code the JSON document ourselves, which is much more difficult, of course. So what we're
going to do is we're going to look at a policy generator which is really a formbbased graphical user interface that
allows us to generate through the answers that we're going to give here the JSON document for us. First question
is we got to select the policy type. Of course we're dealing with S3. So it makes sense for us to create an S3
bucket policy. The two options available to us are allowing or denying access
to our S3 bucket. Now, in this case here, we could get really um fine grained and specify certain kinds of
services or certain kinds of users, but for the demonstration, we're just going to select star, which means anything or
anybody can access this S3 bucket. All right. Now depending on uh also the actions that we're going to allow. So in
this case here we can get very fine grained and we have all these check boxes that we can check off to give
access to certain kind of API action. So we can say we want to give access to you know just deleting the bucket which
obviously is something very powerful. Uh but you can get more fine grain as you can see you have more of the getters
over here. Um, and you have more of the the the listing and the putting new objects in there as well. So you can get
very fine grain. Now for demonstration purposes, we're going to say all action. So this is a very broad and wide ranging
permission. Something that you really should think twice about before doing where it's basically saying we want to
allow everybody and anything any service all API actions on this S3 bucket. So that's no uh small thing. we need to
specify the Amazon Resource Name, the ARN, of that specific bucket. So we go back to our bucket, copy the bucket ARN shown there, paste it into the policy generator, and click Add Statement. You can see a summary of what we just did, and then we say Generate Policy. This is where it creates the JSON document for us (let me make it a little bigger). We take that, copy it, and paste it into the bucket policy editor. Now, of course, we could flip this and change the Allow to a Deny, which would basically say nobody, and no other service, should have access to this S3 bucket. We could even append /* to the resource to also cover all the objects within the bucket. If I saved that right now, we would have a very ironclad S3 bucket policy that denies all access to the bucket and the objects within it. Of course, that's the other end of the spectrum: very, very secure. But we
might want to, for example, host a static website from our S3 bucket, and in that case allowing access makes more sense. But if I save changes now, we get an error saying we don't have permission to do this, and the reason is that AWS recognizes this policy is extremely permissive. In order to grant access to every object in the bucket, as in the static-website case, we first have to enable that option. So I'll duplicate the tab, and back on the Permissions tab, one of the first things that shows up is the Block Public Access setting. Right now it's fully blocked, and that's what's stopping us from saving our policy. We have to go in, unblock it, and save, and there's a confirmation step on top of that, a sort of double clutch, so you don't do it by accident. What you've effectively done at that point is open the floodgates to public access on this bucket, but it can't happen accidentally: you have to perform both of those deliberate actions before public access can be granted. Historically, making public access too easy was something AWS was criticized for, which is why we now have this double confirmation. With Block Public Access turned off, we can now save our changes successfully, and you can see that the bucket is now
publicly accessible, which is flagged prominently as a warning that this may not be something you actually want. If you're hosting a public website and you want everybody to have read access to every object in the bucket, then yes, this is fine; just make sure you pay very close attention whenever this type of access is flagged on the console. So congratulations, you just got introduced to your first bucket policy, a permissive one, but at least you now know how to work through the policy generator's graphical interface, create a policy, and paste the result into your S3 bucket policy pane.
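For reference, here is a rough sketch of the kind of JSON the policy generator produces and how you might apply it from code. The bucket name is a placeholder, and this is the wide-open allow-everything variant from the demo, so treat it purely as an illustration rather than something to run against a real bucket (and remember that saving an allow-all policy like this requires Block Public Access to be turned off, as shown above).

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "simplylearn-s3-demo"  # placeholder name

# Roughly what the policy generator produced in the demo:
# every principal, every S3 action, on the bucket and all of its objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",   # flip to "Deny" for the locked-down variant
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",  # the /* covers the objects too
            ],
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```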
Let's continue on with data encryption. Any data that you place in an S3 bucket can be encrypted at rest very easily using AES-256. With server-side encryption, AWS handles all the encryption for us, and decryption is also handled by AWS when we request our objects later on. But we can also supply our own key: we, the client uploading the object, pass along a key we generated ourselves, which AWS then uses to encrypt the object on the bucket side. Once that happens, the client key is discarded, so you have to be very mindful that, since you've chosen to manage your own keys, if you ever lose them, that data will not be recoverable from the bucket on the AWS side. So be very careful on that point.
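As a rough illustration of the two options, the sketch below uploads one object with server-side encryption using S3-managed AES-256 keys, and one with a customer-provided key, which is how the "bring your own key" flow described above is commonly exposed. Bucket and object names are placeholders.

```python
import os
import boto3

s3 = boto3.client("s3")
bucket = "simplylearn-s3-demo"  # placeholder

# Server-side encryption with S3-managed keys (AES-256):
s3.put_object(
    Bucket=bucket,
    Key="reports/report.csv",
    Body=b"some,data\n",
    ServerSideEncryption="AES256",
)

# Customer-provided key: you supply the key on upload *and* on every later
# GET; if you lose it, the object is not recoverable.
customer_key = os.urandom(32)  # 256-bit key that you must store safely yourself
s3.put_object(
    Bucket=bucket,
    Key="reports/sensitive.csv",
    Body=b"secret,data\n",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```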
We can also take advantage of a very useful feature called versioning, which gives you a history of all the changes to an object over time. Versioning works exactly how it sounds: every time you modify a file and upload the new version to S3, it gets a brand-new version ID. Over time you build up a stack, a history of all the changes to that file, and in the diagram you can see two different version IDs at the bottom. If you ever wanted to revert to a previous version, you could do so by accessing one of those earlier versions. Versioning is not enabled by default; you have to go and enable it yourself, which is extremely simple to do. There may also be a situation where you already have objects in your bucket and only then enable versioning. In that case, versioning only applies to objects uploaded from that point on; the objects that were already there don't get a specific version number. Instead they carry a sort of null version marker, and it's only after you modify those objects and upload a new version that they get their own version numbers. So right now we're going to do a lab on versioning. Let's go ahead and do
that right now. In this lab, we're going to see how to enable versioning in our buckets.
Enabling versioning is very easy: we simply click on our bucket, go into Properties, find the bucket versioning section, then click Edit and enable it. Once that's done, any new object uploaded to the bucket is tracked with a version ID, so if you upload objects with the same file name afterwards, each will get a different version ID and you'll have version tracking, a history of the changes for that object. Let's actually go there and upload a new file. I'll upload one called
index.html. We're going to simulate a situation where we've decided to use an S3 bucket as the source for a static website. If you take a look at what's in this index.html file, you can see it says welcome to my website, and we're at version two. If I click on the file and go to the Versions tab, I can see that it has a specific version ID and a sort of history of what's gone on. Now, I had purposely enabled versioning earlier and then tried to disable it, but here's the thing with versioning: you cannot fully disable it once it's enabled, you can only suspend it. Suspending means that whatever version IDs your objects already had remain in place, which is why you can see an older version here with its older version ID. At the point where I suspended versioning, the history wasn't wiped out; existing versions keep their IDs, and new uploads simply get a null version ID. (Related to this, if you delete an object in a versioned bucket, S3 places what's called a delete marker rather than removing the versions, so you can always roll back.) When we started the demonstration together, I enabled versioning again, so this is the brand-new version ID created as we did it together, but you don't lose the history of previous versions just because versioning was suspended for a while. That's something to keep in mind, and it will come up in the exam: can you actually disable versioning once it's enabled? The answer is no, you can only suspend it, and your history is still maintained. So now we have that version there. Let's say I come back to
this file and upgrade it to, say, version three. If I click on the current version and open it, we should see version two, which is expected. If we then go back to our bucket and upload the modified file, the one that says version three, we should see a brand-new version appear for index.html under the Versions tab. And there you go: a new entry with a timestamp just two minutes after the first, carrying a brand-new version ID, and if I open this one, you can see version three. So now you have a way to enable versioning very easily in your buckets, and you've also seen what happens to the history of previous versions when you suspend it. Just to go back to the Properties tab where we enabled versioning in the first place: if I try to disable it here, like I said, I can't; I can only suspend it, and all the previous versions retain their version IDs. Don't forget that, because it will definitely come up if you're interested in taking the certification exam. So congratulations, you just learned how to enable versioning.
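If you would rather do the same thing from code, here is a minimal sketch: enable versioning (suspending later just means setting the status to Suspended), upload the same key twice, and list the version history. The bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")
bucket = "simplylearn-s3-demo"  # placeholder

# Enable versioning; to suspend later, set Status to "Suspended" --
# there is no way to fully disable it once enabled.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload the same key twice, then inspect the history.
s3.put_object(Bucket=bucket, Key="index.html", Body=b"version two")
s3.put_object(Bucket=bucket, Key="index.html", Body=b"version three")

versions = s3.list_object_versions(Bucket=bucket, Prefix="index.html")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["IsLatest"], v["LastModified"])
```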
Let's move on to cross-region replication, or CRR as it's known. There will be many times when you have objects in one bucket and want to share them with another bucket. That other bucket could be in the same account, in another account in the same region, or in a separate account in a different region, so there are varying degrees here, and the good news is that all of those combinations are available. CRR, cross-region replication, is about replicating objects across regions. It's not enabled by default, because it incurs a replication charge: you're syncing objects across regions, spanning a very wide area network, so there's a surcharge for it. Setting it up is quite simple, but one thing we have to be mindful of is granting permission for the source bucket, which holds the originals, to copy objects into the destination bucket. If we were doing this across accounts or regions, we would need IAM policies and an exchange of credentials, account IDs and the like. We're going to do the demonstration in the same account and the same region, but the steps are largely the same if you go cross-region. This is something you might do if you want to share data with other parts of your company; maybe you're a multinational and you want all your log files copied to a bucket in another region for another team to analyze and extract business insights from, or you want to aggregate data into a separate data lake in an S3 bucket in another region, or, as I said, even in the same region or the same account. It's all about organizing and moving data across these boundaries, so let's go through a demonstration and see how to do CRR. Let's now see how we can perform
cross region replication. We're going to take all the new objects that are going to be uploaded in the SimplyLearn S3
demo bucket and we're going to replicate them into a destination bucket. So, what we're first going to do is create a new
bucket. Okay? And we'll just tack on the number two here. And this will be our destination bucket where all those
objects will be replicated to. We're going to demonstrate this within the same account, but it's the exact same
steps when doing this across regions. One of the requirements when performing cross region replication is to enable
versioning. So if you don't do this, you can do it at a later time, but it is necessary to enable it at some point in
time before creating a cross-region replication rule. All right, so let me create that bucket. Now that the bucket exists, I go to the source bucket and configure a replication rule under the Management tab. I'm going to create a replication rule, call it simplylearn-rep-rule, and enable it right off the bat. The source bucket, of course, is the SimplyLearn S3 demo bucket. We could apply the rule to all objects in the bucket or add a filter; once again, let's keep it simple and apply it to all objects. One caveat: this will only apply to new objects uploaded to the source bucket from now on, not the ones that already exist there.
Now, for the destination bucket, we want to select the one we just created. We can choose a bucket in this account or, if we really want to go cross-region or cross-account, specify another account by entering its account ID and the bucket name there. We're going to stay in the same account, browse, and select the newly created bucket. We also need permissions for the source bucket to copy those objects into the destination bucket, so we can either create the IAM role ahead of time or ask this user interface to create a new role for us. We'll opt for the latter, and we'll skip the additional features here that we're not going to cover in this demonstration. We just save this, and that creates our replication rule, which is automatically enabled right away. Let's take a look at the overview: you can see it's enabled, and, just to double-check, the destination bucket is the demo 2 bucket and we're in the same region. We could also have opted for additional parameters, like a different storage class in the destination
bucket where the replicated objects will be stored, and so on. For now, we've just created a simple rule. If we go back to the source bucket, which is where we are right now, and upload a new file, a transactions file in CSV format, then once the upload completes, that replication rule will kick in and eventually (it's not immediate) copy the file into the demo 2 bucket. I know it isn't there yet, so I'm going to pause the video, come back in two minutes, and then the file should be there when I click through. Okay, let's double-check that the object has been replicated, and there it is, replicated as per our rule. So congratulations, you just set up your first same-account S3 replication rule.
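The same rule can be created programmatically. The sketch below assumes versioning is already enabled on both buckets and that a replication role with the necessary S3 permissions already exists; the bucket names and the role ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: source/destination buckets and the IAM role that S3 assumes
# to copy objects on your behalf.
source_bucket = "simplylearn-s3-demo"
dest_bucket_arn = "arn:aws:s3:::simplylearn-s3-demo-2"
replication_role_arn = "arn:aws:iam::123456789012:role/s3-replication-role"

s3.put_bucket_replication(
    Bucket=source_bucket,
    ReplicationConfiguration={
        "Role": replication_role_arn,
        "Rules": [
            {
                "ID": "simplylearn-rep-rule",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = all new objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": dest_bucket_arn},
            }
        ],
    },
)
```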
Let's now take a look at transfer acceleration. Transfer acceleration is all about giving your end users the best possible experience when they're accessing information in your bucket: you want to give them the lowest latency possible. Imagine you were serving a website and wanted visitors to experience the lowest latency possible; of course that's very desirable.
In terms of traversing long distances, if your bucket is in, for example, the us-east-1 region in Virginia and you have users in London who want to access those objects, they have to cover a much longer distance than users based in the United States. To bring those objects closer to them in terms of latency, we can take advantage of Amazon CloudFront, AWS's content delivery network, which extends the AWS backbone with what are called edge locations. Edge locations are data centers placed in major city centers, the densely populated areas where most end users are, and your objects are cached at those locations. Going back to the example of end users in London: they would be accessing a cached copy of the objects stored in the original bucket in, say, us-east-1. You will most likely see a dramatic performance improvement by enabling transfer acceleration, and it's very simple to turn on; just bear in mind that when you do, you incur an additional charge for the feature. The best thing is to show you how to do it, so let's do that right
now. Let's now take a look at how to enable transfer acceleration on our SimplyLearn S3 demo bucket. By simply
going to the properties tab, we can scroll down and look for a heading called transfer acceleration over here
and very simply just enable it. So what does this do? This allows us to take advantage of
what's called the content delivery network, the CDN, which extends the AWS network backbone.
The CDN network is strategically placed into more densely populated areas, for example, major city centers. And so if
your end users are situated in these more densely populated areas, they will reap the benefits of having transfer
acceleration enabled because the latency that they will experience will be severely decreased. So their performance
is going to be enhanced. If we take a look at the speed comparison page for transfer acceleration, we can see that once it finishes loading, it runs a comparison: it performs a multipart upload to each region and measures how fast that upload is with and without transfer acceleration enabled. The results are relative to where the test is being run from; right now I'm running it from Europe, so you can see I would get very good gains if I enabled transfer acceleration and my users were uploading to Virginia. The percentage improvement varies as the target region gets closer to or farther from where my browser is running: for the United States regions I'm getting pretty good percentages; as the regions get closer to Europe the gain drops, with Frankfurt probably about the worst case since I'm located in Europe; and as I look toward the Asian regions, the improvement scales back up. So this is an optional feature: once you enable it, as I just showed, it is something you pay extra for, so bear that in mind and take a look at the pricing page to figure out how much it will cost you. That's it. Congratulations, you just learned how to enable transfer acceleration to lower latency from the end user's point of view.
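Enabling the feature and then actually sending requests through the accelerated endpoint looks roughly like this from code; the bucket and file names are placeholders, and the surcharge applies once traffic goes through the accelerate endpoint.

```python
import boto3
from botocore.config import Config

bucket = "simplylearn-s3-demo"  # placeholder

# Turn the feature on for the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then opt in to the accelerate endpoint
# (bucket-name.s3-accelerate.amazonaws.com) via client config.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# "big-video.mp4" is a placeholder local file used for illustration.
s3_accel.upload_file("big-video.mp4", bucket, "uploads/big-video.mp4")
```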
We're now ready to wrap things up and go over, at a very high level, what we just covered. We talked about what S3 is: a core service, one of the original services released by AWS, giving us unlimited object storage in a secure, scalable, and durable fashion. We looked at the benefits of S3, focusing mainly on the cost savings we can achieve by choosing among the different storage classes; S3 is widely recognized as one of the cheapest object storage services available with the richest feature set. We saw what goes into object storage: first creating buckets, the high-level containers we store our objects in, where objects are really an abstraction over the data itself plus the metadata associated with it. We looked at the different storage tiers, from the default Standard class all the way down to the cheapest Glacier classes, which are meant for long-term archives, for example log files you may keep for a couple of years and rarely access, at by far the lowest price. With so many pricing tiers, transitioning from one tier to the next is done either with a lifecycle policy or with the Intelligent-Tiering option, which can do much of this for you. Finally, we looked at some very useful features, from the lifecycle management policies we just mentioned through versioning, cross-region replication, and transfer acceleration. With that conclusion, you are now
ready to at least start working with S3. Hello everybody, my name is Kent, and today we're going to be covering AWS Identity and Access Management, also known as IAM, in this SimplyLearn tutorial. These are the topics we'll cover today: we'll define what AWS security is and the different types of security services in AWS, look at what exactly identity and access management is, the benefits of IAM, how IAM works, its components and features, and at the end we'll do a demo on IAM with multi-factor authentication, users, and groups. So, without any further ado, let's get started with what
is AWS security. Your organization may span many regions or many accounts and encompass hundreds, if not thousands, of resources that need to be secured. You therefore need a way to protect all that sensitive data consistently across every account in the organization while meeting whatever compliance and confidentiality standards apply; for example, if you're dealing with healthcare data, credit card information, or personally identifiable information like addresses, that has to be thought through at the organizational level. Of course there are individual security services in AWS, which we'll explore in this video, but to govern the whole process at an organizational level we have AWS Security Hub. Security Hub is what's known as a CSPM, a cloud security posture management tool. What does that really mean? It pulls all of these tools together under the hood to give you a streamlined way to organize and adhere to your standards across the organization. It identifies misconfigurations and compliance risks by continuously monitoring your cloud infrastructure for gaps in your security policy enforcement. Why does that matter? Because those misconfigurations can lead to unwanted data breaches and data leakage. So to govern this at a high level we bring in AWS Security Hub, which automates and manages the underlying services for us and can take automated remediation actions, which you can approve manually or fully automate as you see fit across your organization. So let's delve
a little deeper into what AWS security means, and then start looking at some of these services individually. Our organization has particular needs across different projects and environments: production, development, and test environments all have their own requirements. But no matter which project or business unit we're talking about, or what its storage and backup needs are, security should be implemented in a concise, standardized way across the organization to meet the compliance requirements we just discussed. We want to automate the tedious manual tasks we've been doing, like making sure all our S3 buckets and EBS volumes are encrypted, so that time is freed up for other parts of the business, perhaps application development, that deliver better business value and let the company grow and innovate in the way that suits it best. So let's take a look at the different types of AWS security. There are many security services out there, but we're going to concentrate primarily on IAM in this video tutorial. I like to think of IAM as the glue between all AWS services, because by default no service has permission to communicate with another. Take, for example, an EC2 instance that wants to retrieve an object from S3: those two services cannot interact unless we involve IAM. So it's extremely important to learn, and by the end of this video tutorial you'll understand what IAM is. We also have
Amazon GuardDuty, which is all about logs. It aggregates logs, for example from CloudTrail, which we haven't talked about yet (it's at the end of this list), but instead of you inspecting CloudTrail individually, GuardDuty looks at your CloudTrail events, your VPC Flow Logs, and your DNS logs, monitors your account, your network, and your data access with machine learning algorithms, and identifies threats, for which it can automatically run remediation workflows you've approved. For example, if a known malicious IP address is making API calls, those calls are recorded in CloudTrail, and the machine learning model can detect that and take action. So this is governed at a higher level than you having to individually inspect all your DNS logs, flow logs, and CloudTrail trails and react through some scripting you've put together; GuardDuty manages all of that. Then we have Amazon Macie, which again uses machine learning but also pattern matching. You can use the libraries of patterns already in place or define your own, and Macie uses them to discover and protect your sensitive data. For example, if healthcare data subject to HIPAA, or credit card data, is lying around, Macie will discover and protect it. As your data grows across the organization that becomes harder and harder to do on your own, so Macie really facilitates discovering and protecting it, and you can concentrate on other things. AWS
Config is something that works in tandem with the other services. It continuously monitors your resources' configurations and checks that they match your desired configuration. For example, you might state that all your S3 buckets should be encrypted by default, or that your IAM user access keys must be rotated, and if they're not, remediation actions should be taken. A lot of those automated remediation actions are actually executed via AWS Config, so you'll see Config used by other services under the hood. And then there's CloudTrail. CloudTrail is all about logging every single API call that's made: it records the call, who made it, from what source IP, the parameters that were sent, and the response. All of that data is a gold mine when you need to investigate a security incident, and these trails (there can be many, generating lots and lots of data) can be analyzed by the other services, specifically GuardDuty, which automates that process so you don't have to. So let's get to the next topic: what is identity
and access management. So what is IAM? Like I said, IAM is the glue between all AWS services: it's what allows those services to communicate with each other once you put the right permissions in place. It also lets us manage our AWS users, known as IAM users, and group them into groups so that assigning and removing permissions is easier than doing it one user at a time. If you had 200 IAM users and wanted to add a permission to all of them, you could attach it to a group containing those 200 users in one operation instead of 200. Note the distinction between IAM users and end users: end users use your applications online, whereas IAM users are the employees in your organization who interact directly with AWS resources. For example, an EC2 instance is a resource a developer might work with, and traditionally those permissions are used full-time, nine to five, five days a week, all year round. Sometimes, though, a user needs elevated permissions for a limited time, and that is better suited to a role. Roles eliminate the need to create separate accounts for actions that are only needed, say, an hour a month or a couple of hours a week, things like backing up an EC2 instance, removing files, or doing cleanups. Traditionally you wouldn't have those permissions as a developer, but you can be given temporary permission to assume a role. It's like putting a hat on and being somebody you usually aren't for a short time; once you take the hat off, you go back to being your ordinary IAM-user self, without access to backups, cleanups, and the like. Roles work with a service called the Security Token Service, which hands out three pieces of material: an access key and a secret key, much like a username and password, plus a session token that grants you the role's elevated permissions for a limited period, typically one to twelve hours. We'll talk more about roles as we go through this video tutorial, but for now remember at least this much, even if you don't know all the details: roles are about temporary, elevated
permissions. So what are the benefits of IAM? Across our organization we're going to have many accounts and, as I said, hundreds if not thousands of resources, so scalability becomes an issue unless we have high-level visibility and control over the entire security process. With a tool like Security Hub, which we saw on the first slide, we can continuously monitor and maintain a standard across the organization; that visibility is very important, and we get it when we integrate with AWS security services. We also want to reduce the human factor: we don't want to manually put out fires every single time. There will always be genuinely new situations that need hands-on attention, but once an issue has occurred and then recurs, we can put a remediation plan in place. Once we automate, we definitely reduce the time to fix recurring errors, and even new issues can be handled without manual intervention because we're using machine-learning-driven services like GuardDuty and Macie, which really reduce the risk of security intrusions and data leakage, with IAM underpinning it all. You may also have many compliance needs: applications that handle healthcare data or credit card payments, or a government agency, say in the US, that requires assurance that data stored in the cloud is subject to the same compliance controls you had on premises. There is a very long list of compliance programs that each AWS service adheres to; you can check the AWS documentation online to see, for example, whether DynamoDB is HIPAA-eligible (it is), and be assured that you can use it. AWS itself has to constantly maintain those compliance controls and pass those certifications, and that gets passed on to us, so there is much less work for us to implement these controls. We can just inherit
them by using the AWS security model and IAM. And last but not least, we can build to the highest standards of privacy and data security. Running all of this in our own on-premises data center poses security challenges, especially physical ones: you have to physically secure the data center, hire security personnel, run 24/7 camera surveillance, and so on. Just by migrating to the cloud we take advantage of AWS's global infrastructure; building and securing data centers is part of their business, and they're very good at it. There is of course a shared responsibility model, but you can rest assured that, as the lowest common denominator, physical security is handled for us. And as you adopt more managed and higher-level services in AWS, more and more of that security model becomes AWS's responsibility and less yours. We will always retain some responsibility for our own applications, but we can use these higher-level services to ensure our data stays encrypted at rest and in transit by defining rules that must be adhered to, and if those rules are broken, remediation actions kick in to maintain that cross-account, organization-wide security control. So, lots of benefits to using IAM. Let's now take a look at exactly how IAM works and delve deeper into authentication and
authorization. There are certain terms we need to know. One of them is the principal. A principal is nothing more than an entity that is trying to interact with an AWS service, for example an EC2 instance or an S3 bucket. It could be an IAM user, a service, or a role, and we'll see how principals are specified inside IAM policies. Next is authentication, which is exactly what it sounds like: who are you? It could be as simple as a username and password, or it could involve an email address, for example if you're the root user of the account logging in for the very first time; other, normal users won't log in with an email, only the root user does. We can also have developer-type access, which works through the command line interface or a software development kit (SDK); for those access paths we need access keys and secret keys, or public/private key pairs, say if you want to authenticate over SSH to an EC2 instance. So there are many kinds of authentication we can set up when creating IAM users, based on the kind of access they need: do they just need the console, or do they need programmatic access to services? Those are the different types of
authentication. Then we have the actual request. We can make requests to AWS in various ways; we'll be exploring the AWS console, but every button you click and every drop-down you select invokes an API call behind the scenes. Everything goes through a centralized, well-documented API: every service has one, so if you're dealing with Lambda, Lambda has an API, and if you're dealing with EC2, EC2 has an API. You can reach that API indirectly via the console, through the command line interface, or through an SDK for your favorite programming language like Python; all of your requests get funneled through an API. But that doesn't mean you're allowed to do everything just because we know who you are and you hold some keys to make API calls: you also have to be authorized to perform specific actions. Maybe you're only authorized to talk to DynamoDB, and only to perform reads, not writes. We can express very fine-grained authorization through IAM policies to control not only who has access to our AWS environment but also what they're allowed to do: read actions, write actions, both, or maybe just describing or listing some buckets. Depending on the user that's logged in, we can control exactly which actions are permitted, because every API exposes fine-grained actions that we can govern through IAM, across many different types of resources. When we write these policies, actions can also be scoped to resources: we might say this user has read-only access to S3, and specifically to these buckets, and can write to a DynamoDB table, but cannot terminate or shut down an EC2 instance. So actions can be grouped on a per-resource basis according to the role someone has in the project or in the company. So now let's take a look at the components of
IAM. We've already seen that we can create IAM users and how they differ from your normal end users. These entities represent people working for the organization who interact with AWS services, and some of the permissions we assign to such a user, through what are called identity-based policies, are permissions they need every time they're logged in. Perhaps this user, Tom, always needs access to certain services, say an RDS instance or an S3 bucket, and sometimes he needs temporary access to others, for example DynamoDB. In those cases you'd combine the normal IAM policies assigned to the user with assume-role calls made at runtime to acquire temporary, elevated credentials. There is also the root user, which is different from a typical administrator super user. There are things a root user can do that an administrator cannot: the root user logs in with the account's email address and can change that email address, something tightly coupled to the AWS account that an administrator account cannot change; the root user can also change the support plan and view billing and tax information. So the best practice is this: once you've created your AWS account, you are the root user. Enable multi-factor authentication, store those credentials away in a secure location, and make your first administrative task the creation of an administrator user. From that point on, log in only as that administrative user, use it to create other administrative users, and let those create the other IAM users. Never log in as the root user unless you need that specific functionality that, as I said, goes beyond a super user administrator
account. So here we have a set of IAM users, and we could attach IAM policies to them directly to give them access to certain services. However, the best practice is to create groups, which are just collections of IAM users. Suppose we have a developers group and a group of security administrators: we can assign different policies to each group, and every user assigned to the developers group inherits that group's IAM permissions. This makes managing your IAM users much easier, and a user can belong to more than one group at the same time (that isn't shown here, but it is possible). What you cannot have is the notion of a subgroup: I can't carve a smaller group of developers out of this bigger group, so there are no hierarchical groups, but users can participate in multiple groups at once. If user A were part of developers and also part of security, they would inherit both sets of policy statements.
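Here is a quick sketch of that pattern in code: create a group, attach a managed policy to it once, and every user added to the group inherits it. The group and user names are hypothetical.

```python
import boto3

iam = boto3.client("iam")

# Hypothetical group and users.
iam.create_group(GroupName="developers")

# Attach one AWS-managed policy to the group; every member inherits it.
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

for name in ["user-a", "user-b"]:
    iam.create_user(UserName=name)
    iam.add_user_to_group(GroupName="developers", UserName=name)
```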
Now, there is another type of IAM policy, called a resource-based policy. Resource-based policies are not attached to users; they're attached to resources, for example an S3 bucket, and in that case a user group cannot be designated as the principal. That's one of the restrictions, although there are other ways to work around it. When you think of an IAM role, I want you to think of temporary credentials that are acquired at runtime and are only good for a specific amount of time. A role is really made up of two parts. First there's the trust policy: who do you trust to assume this role, who is even allowed to make the API call to assume it? Then there's the permissions policy: once you've been trusted to assume the role and have the temporary tokens, what are you allowed to do? Maybe you get read/write access to DynamoDB. There are no long-term credentials tied to a role, and you don't assign a role to a user; the user or service assumes the role via an API call to the Security Token Service, STS, and with those temporary credentials can then access, for example, an EC2 instance, if the IAM policy attached to the role says so. So your users, your applications running on an EC2 instance, and AWS services themselves don't normally have access to AWS resources, but by assuming a role they can acquire those permissions dynamically at runtime. It's a very flexible model that lets you avoid embedding security credentials all over the place and then having to maintain them, rotate them, and make sure they're never picked up by someone probing for a security hole in your system. From a maintenance point of view, roles are an excellent tool to have in your security toolbox.
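To make the two halves of a role concrete, here is a hedged sketch: a trust policy that says who may assume the role, a permissions policy attached to it, and the assume-role call that returns the temporary access key, secret key, and session token. The account ID, role name, and attached policy are placeholders, not values from the video.

```python
import json
import boto3

iam = boto3.client("iam")
sts = boto3.client("sts")

# Half one -- the trust policy: who is allowed to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/user-a"},  # placeholder
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="dynamodb-read-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Half two -- the permissions policy: what the role can do once assumed.
iam.attach_role_policy(
    RoleName="dynamodb-read-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess",
)

# At runtime, the trusted user calls STS and receives temporary credentials
# (access key, secret key, session token) that are valid for a limited time.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/dynamodb-read-role",
    RoleSessionName="temporary-elevated-access",
    DurationSeconds=3600,
)["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])
```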
So let's look more closely at these components, starting with the IAM policy itself. An IAM policy is really just a JSON document describing what a user can do, whether it's attached directly to the user or attached to a group, in which case every user in that group inherits the same policy. These are longer-term permissions, attached to your login for the duration of your session, whereas roles, as we've already said, are temporary: once a trust relationship is established, we can attach a policy to the role, for example one that grants only read-only access to CloudTrail, and it's then up to the user to assume that role, whether through the console or programmatically. Either way, the user makes a direct or indirect call to STS for the assume-role operation. These permissions are evaluated by AWS every time a request is submitted, dynamically at runtime, so when you change a policy it takes effect almost immediately: on the user's very next request, the new policy, whether you've added or removed a permission, is enforced. What we're going to do now is quickly go to the
AWS console and show you what an IAM policy looks like. I've just logged into my AWS Management Console, and if I search for Identity and Access Management, it should be the first result; I'll click on that. To see how these IAM policies look, go to the left-hand side, where there's an entry for Policies. There are different types of policies we can filter by, so let's filter by type: we have customer managed, AWS managed, and also AWS managed policies aligned with job functions. We're going to look at AWS managed, which simply means that AWS has created, and will keep updating and maintaining, these pre-built and already vetted identity policies. I can filter further, say for all the S3 policies available, and we can see some that allow only read-only access, some that grant full access, and other variations we won't explore here. Let's just take a look at the full-access one. You can see it's just a JSON document, and of course you cannot modify it, since it's a managed policy; but you can copy and modify it, in which case it becomes your own version, a customer managed policy, and when you filter you can then filter on your own customer managed policies. Over here, the policy is simply
allowing certain actions. We talked before about actions being based on the API: here we have actions that start with s3:, so this is the S3 API, and instead of listing specific API calls, the policy says all of them; the star represents every API action. There's also another category, S3 Object Lambda, and we're allowing all operations on that API as well. If this were our own customer managed policy, we could scope down which resource it applies to, which bucket, or which folder within a bucket; but since this is full access, we're basically being granted permission to execute every single S3 API action on every bucket. In the lab at the end of this tutorial, I'm going to show you how to scale that down.
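As a preview of what "scaling it down" can look like, a customer managed variant might restrict access to a couple of read actions on a single bucket and prefix. This is a hypothetical example, not the exact policy used later in the demo.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical scoped-down policy: read-only access to a single prefix
# in a single bucket, instead of s3:* on every bucket.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::simplylearn-s3-demo",
            "arn:aws:s3:::simplylearn-s3-demo/folder1/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="SimplyLearnS3Folder1ReadOnly",
    PolicyDocument=json.dumps(scoped_policy),
)
```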
So that's what an IAM policy looks like, and if you want to assign one to a user, stay tuned for the demonstration at the end of the video tutorial. What that demonstration doesn't cover is how to create a role, so I'll show you that here. There's a whole set of roles already set up that we can browse, but we can also create our own. I might create a role, for example, that gives EC2 access to S3. So I choose EC2 as the service, click Next: Permissions, and assign the S3 permission right here, just an IAM policy like the one we looked at. Then I add a tag, review it, and call it my-ec2-role-to-s3. I create the role, and because I selected EC2 as the service, EC2 is automatically included in the trust policy. Now, when I create an EC2 instance, I can attach this role to it, and that instance (say it's running the Amazon Linux 2 AMI) will automatically have full access to any S3 bucket in the AWS account. So let's go take a look at where we would actually attach that role on the EC2 side.
So when you're creating an EC2 instance, say we go here and click Launch Instance, we'll pick this AMI because it already includes the packages needed to perform the assume-role call to STS for us, so we don't have to code that ourselves and we don't have to embed any credential keys to get this running, which is really the whole point: by assuming a role, there's nothing for me to manage. I'll pick the free-tier t2.micro, and once you've chosen whatever VPC and other settings you need, you come to the IAM role drop-down and select the role to be assumed dynamically. I have a whole bunch of them here, but this is the one I just created, my-ec2-role-to-s3. If I launch this instance and then run an application that needs to contact S3, or even just use the command line on the instance, I'll be able to talk to the S3 service, because AWS uses this role to perform an assume-role call and fetch temporary STS tokens for me, which unlock the IAM policy granting full access to S3, and off I go. As I said, this is a great way to avoid administering embedded keys on the instance, and I can take the role away at any time. So there you have it: a short demonstration of how to create a role, how to attach it to an EC2 instance, and what managed and customer managed policies look like in IAM.
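One detail the console hides is that an EC2 instance actually receives the role through an instance profile (the IAM console normally creates a matching one for you behind the scenes). Here is a rough sketch of the equivalent setup in code, with hypothetical names.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing the EC2 service to assume the role.
ec2_trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="my-ec2-role-to-s3",
                AssumeRolePolicyDocument=json.dumps(ec2_trust))
iam.attach_role_policy(RoleName="my-ec2-role-to-s3",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess")

# The instance profile is the wrapper you actually attach to the instance.
iam.create_instance_profile(InstanceProfileName="my-ec2-role-to-s3")
iam.add_role_to_instance_profile(InstanceProfileName="my-ec2-role-to-s3",
                                 RoleName="my-ec2-role-to-s3")

# On the instance itself, no keys are needed -- the SDK picks up the role's
# temporary credentials automatically, e.g.:
#   boto3.client("s3").list_buckets()
```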
Of course, we also have multiple accounts to think about, so we can write a policy that allows a user in another account access to ours. Cross-account access is something that's very commonly needed and is a feature of IAM; when you need to share resources from one account to another in your organization, an IAM policy (and often a role) is what makes it possible. By assigning granular permissions we can implement the best practice of least-privilege access, and we get secure access to AWS resources out of the box, because by default the principle of least privilege applies and no service can talk to another until we enable it through IAM. If you want an additional layer of authentication security, there's multi-factor authentication, which lets you prove, through a hardware token device or an app on your phone that generates a code, that you really are the person typing the username and password, and not somebody who has stolen them. So at the end of this video
show you how to enable that. And of course, identity federation is a big thing where your users may have been
defined outside of AWS. You may have thousands of users that you've already defined, let's say, in an active
directory or in an LDAP environment on prem and you don't want to recreate all those users again in AWS as I am users.
So we can tie in we could link the users uh your user database that's on prem or even users that have been uh your user
account that's been uh defined let's say uh on a social site like um Facebook or Twitter or Google and tie that into your
AWS account and they'll there is an an exchange of information there is a setup to do but uh at the end of the day it is
either a role in combination with an I um a policy that will allow to map what you're allowed to do once you've been
authenticated by an external system. So the this has to do with uh perhaps uh like I said LDAP or active directory and
you can tie that in with AWS SSO. Uh there are many different services here that can be leveraged to facilitate what
we call identity federation. And of course, IM itself is free. So it's not a service per API
request. AWS is very keen to put security first, so IAM is free; you pay only for the services you use. For example, if you're securing an EC2 instance, that instance is billed under the usual pricing plan. Compliance, as I mentioned, is extremely important for keeping business as usual in the cloud: if you follow the Payment Card Industry Data Security Standard to store, process, and transmit credit card information appropriately, you can be assured that the services you use in AWS are PCI DSS compliant, and AWS adheres to many other compliance programs as well. Password policies also matter: there is a default password policy in place, but in the IAM console you're free to update it and make it more restrictive to match whatever policy your company has. So, many features, and I think we're ready now for a full demonstration in which I'll show you how to bring together IAM users, groups, and multi-factor authentication. So get ready. Let's do
it. I'm going to demonstrate now how to attach an S3 bucket policy that works together with IAM. I've already created a bucket called SimplyLearn S3 IAM demo, and we're going to attach a bucket policy to it, which of course we'll have to edit. The basis of the demonstration is to allow only a user with MFA, multi-factor authentication, set up to access the contents of a folder within this bucket.
So of course I first have to create a folder; I'll simply call it folder one and create it. Now I'll go into that folder and upload a file so we can actually see this in action. I'm selecting a demo file I prepared in advance called demo-file-user2-MFA and uploading it. Eventually, when we create our two IAM users, one of them will have access to view the contents of this file, which is something very simple: if I open it in the console (it's pretty small, so let me make it a little bigger), it says this is a demo file which user2-MFA should be the only one able to access. What's left to do now is create the two users, one called user1 and one called user2-MFA, and that's what we're going to do right now.
Let's head over to IAM, Identity and Access Management, and on the left-hand side click on Users. You can see I don't have much going on here, just an administrator user, so I want to create two new users. We'll start with the first one, called user1. By default, the principle of least privilege applies, which means this user isn't allowed to do anything until we grant access: either access to the console, which I'll enable right now, or programmatic access for a user who needs access keys to work with the APIs via the command line interface or an SDK, for example a Java developer. That's not what I want here, so I'm only assigning console access, and I'll set a custom password for this user but not require them to reset it at first login, just for demonstration purposes. At this point we also have the chance to add the user straight to a group, but I'll only do that later, at the end of the demonstration. For now, I want to show you how to attach an IAM policy to this user directly, which grants access to specific services. If I type in S3 full access, this is administrative access to S3. If you look at the JSON document representing that IAM policy, you can see it allows all API actions under s3:* and under S3 Object Lambda, across all buckets; the star in the resource represents all buckets. So it's a very broad permission; be careful when you assign it and make sure the user really should have it. I'm not adding any tags, so I'll just review what I've done: I've attached the S3 full access policy to this one user, and I'm going to create that user. Now is the time for me
to download the CSV file, which will contain those credentials. If I don't do it now, I will not have a second chance
to do this. And also to send an email to this user in order for them to have a link to the console and also some
instructions. I'm not going to bother with that. I'm just going to say close. And now I have my user one. You can see
here that it says none for multiffactor authentication. So I have not set anything up right now for them. And
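A minimal boto3 sketch of the same step, assuming an account with IAM admin rights; the user name, demo password, and the choice not to force a reset mirror the demo, and AmazonS3FullAccess is the AWS managed policy shown in the console.

```python
import boto3

iam = boto3.client("iam")

# Create the first user with console access and a demo password.
iam.create_user(UserName="user-1")
iam.create_login_profile(
    UserName="user-1",
    Password="Demo-Password-123!",   # demo value only
    PasswordResetRequired=False,     # reset skipped for demonstration purposes
)

# Attach the broad AWS managed policy (s3:* and s3-object-lambda:* on all buckets).
iam.attach_user_policy(
    UserName="user-1",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)
```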
That's what I want to do for the second user, called user-2-MFA. I'll do the same thing as before: assign a custom password and go through the same steps, attaching the existing managed policy that AWS has already vetted for us, then review and create. Right now the two users are exactly the same, so I have to go back to user-2 and set up multi-factor authentication. To do that, go to the Security credentials tab, where you'll see Assigned MFA device; we haven't assigned one yet, so we'll manage that now. We have the option of ordering an actual physical device made for performing multi-factor authentication, but we're going to opt for a virtual MFA device, which is software we install on our cell phone. I can open this link to show you the many options available; I'm going to go with Google Authenticator, which you can download from the app store on your Android device (there are other options for iPhone and other software for each platform). Once you've installed that app on your phone, come back here and show the QR code. Now go to your phone, as I'm doing right now: open the MFA app, in this case Google Authenticator, tap the little plus sign, choose "scan QR code", point the phone at the QR code on screen, and it picks it up. Tap "add account" and it gives you a code, which I'll enter now. Once I've entered that code I have to wait for a second one, because each code on my phone expires after roughly 20 to 30 seconds; when a token expires, the display refreshes with another token. I now have another token, only good for a few more seconds, and I'll click Assign MFA. This completes the link between my personal cell phone and this user. Now that it's set up, we can see there is an ARN for this device, which we might need depending on how we write our policies; in our case I'll show you a way to write the policy without needing that ARN. (Below is a hedged sketch of the same MFA setup done programmatically.)
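For completeness, the virtual-MFA association can also be done with boto3, roughly as sketched below; the device name is an assumption, the two consecutive codes come from the authenticator app, and in the console flow the QR code stands in for the Base32 seed shown here.

```python
import boto3

iam = boto3.client("iam")

# Create a virtual MFA device; the response contains the Base32 seed
# (the console renders the same seed as a QR code for the authenticator app).
device = iam.create_virtual_mfa_device(VirtualMFADeviceName="user-2-mfa-device")
serial = device["VirtualMFADevice"]["SerialNumber"]    # this is the device ARN
seed = device["VirtualMFADevice"]["Base32StringSeed"]  # enter/scan this in the app

# Enable the device for the user with two consecutive codes from the app.
iam.enable_mfa_device(
    UserName="user-2-mfa",
    SerialNumber=serial,
    AuthenticationCode1="123456",  # first code shown by the app
    AuthenticationCode2="654321",  # the next code, after the first expires
)
```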
So now we've got our two users, one with an MFA device and one without. The demonstration is this: how do we allow user-2-MFA access to the file in folder-one while denying user-1? The distinguishing factor is that user-2 has MFA set up and user-1 doesn't, so that single condition will determine whether they can view the file. Let's get back to S3 and create this bucket policy to see how it's done. Back in our bucket, we go to Permissions to set up the bucket policy, click Edit, and click Policy generator, which helps us build the JSON through a user interface. We're dealing with an S3 bucket policy. What we're going to do is deny all principals, so a star; S3 is grayed out here because we selected it above, and we want to deny all S3 API actions rather than selectively going down the list. We're still missing the Amazon Resource Name, so we go back, copy the ARN of our bucket, paste it in, and then specify the folder as well: /folder-one and then /* for all the objects in that folder. Now for the MFA condition. Since we cannot express it in the fields above, we click Add conditions, an optional but very powerful feature, and look for the condition operator BoolIfExists, meaning there has to be a key-value pair that exists. There are many keys, but we want the one known as aws:MultiFactorAuthPresent, and we want to match when its value equals false. I add that condition, add the statement, and generate the policy so you can see the JSON document the user interface produced for us. I take it in its entirety, copy it, and go back to our S3 bucket, which is waiting for us to paste the policy in. Let's see what's actually going on here. We've created a policy with an Id; that Id was just generated by the user interface, and you could name it whatever you want, ideally something that makes sense. Then we have one or more statements; here there is only one, and its Sid was also auto-generated. It says that all actions on the Simple Storage Service (all, because of the star) are denied on this resource, which is our bucket plus the folder named folder-one and all the objects within it, under a certain condition: if the key aws:MultiFactorAuthPresent exists and its boolean value is false, you don't have access to the objects in this folder. Which means that if you do have MFA set up, you will be allowed access to the contents of this folder. (The sketch below shows roughly what the generated policy looks like and how it could be attached with the SDK.)
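A hedged reconstruction of the policy the generator produces in this demo, applied with boto3; the bucket name, Id, and Sid are illustrative, while s3:*, the folder-one/* resource, and the BoolIfExists / aws:MultiFactorAuthPresent condition follow the walkthrough above.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "simplylearn-s3-iam-demo"  # assumed bucket name from the demo

policy = {
    "Version": "2012-10-17",
    "Id": "DemoMfaBucketPolicy",           # any sensible identifier
    "Statement": [
        {
            "Sid": "DenyFolderOneWithoutMFA",
            "Effect": "Deny",
            "Principal": "*",               # applies to all principals
            "Action": "s3:*",               # all S3 API actions
            "Resource": f"arn:aws:s3:::{bucket}/folder-one/*",
            "Condition": {
                # Deny when the MFA flag is false (or, per IfExists, absent).
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```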
If we assign this and apply it to all principals, which encompasses user-1 and user-2-MFA, then user-1 will not be able to see the contents of our object inside folder-one, and user-2-MFA will. Let me apply it, and then let's log out and log back in as each individual user to prove this is actually the case. I'm now logging back in as user-1, the user we created moments ago. If I go to a service like EC2, we don't have access to it; we've only been given access to S3, so that makes sense. Going to S3, which we do have access to, we can list and describe all the buckets, since we were given essentially administrative access to S3. But if we go to our bucket, into folder-one, and try to open the document, we can see that's not happening, because of the condition checking whether we have a value of true for the MFA key we specified. This is expected and exactly what we wanted. Now let's log out again and do the same thing with user-2-MFA. I log back in and enter the password we created, and now, of course, there's an MFA code prompt, because I set that up for user-2. Back to my cell phone: I open the Google Authenticator application and have a new Amazon Web Services code in front of me that's only good for maybe 20 seconds; it's a little nerve-wracking if you're not good under pressure. I've entered the code and have to wait for it to take effect. It looks like I actually forgot to uncheck the option requiring a password reset, so unfortunately I missed that checkmark and have to create a brand-new password. A little painful for a demonstration, but it's good for you to see what that looks like. Because I set up my MFA device properly, I'm able to log in after entering the MFA code. Once again, if I go to EC2, I haven't been given permissions for EC2, only for S3, so nothing changes there. I go to S3, into folder-one of our bucket, and check whether I now have access to open and see the contents, and here they are: "This is a demo file which user-2-MFA should only be able to have access to." So that worked. Now you know how to create a bucket policy, also known as a resource policy because it's assigned to the resource, in this case the bucket. Let me show it one last time: here it is, and it only allows access to the contents of folder-one if multi-factor authentication is set up on the account. So that's really good.
The last thing I want to show you is how to create IAM users in a better way, so you don't have to assign the same permissions over and over like you saw me do for user-1 and user-2. Of course, I can't do anything in IAM right now, because I'm logged in as user-2 and that user has no IAM access. So once again I'm logging out, and we're going to see the best practice: assign IAM policies to a group, then add users to that group so they inherit those permission policies. Let's get to it and sign out first. Back in IAM, we're going to create our first group. Click the Create group button and call the group "testers". We have the ability to add existing users to this group, so we'll add only user-1 to testers, and we can also add permissions to the group, for example the EC2 full access policy, and then create the group. If we now take a look at testers, we can see the one user we added and the permissions associated with the group. This means any users we place in the group will automatically inherit whatever permissions the group has, which is really good for maintainability. For example, say we were to create many new testers. Let's create user-3: I'll go through the same steps as before, remembering this time to uncheck the password-reset box, and instead of attaching policies directly, I'll say this user is going to be part of the testers group, inheriting the policy permissions assigned to the group. I'll skip through the remaining repetitive steps I've already described. User-3 now already has permission to access EC2, gained through the testers group. If we go back to user-1, that user also has the permissions from the group, but additionally has a unique permission attached directly, so you do have the option of adding a permission to a user over and above what the group assigns. Adding permissions to groups is really valuable when you have lots of users in your organization: instead of going to each user and adding permission after permission, you manage everything centrally in one place. For example, if I go back to Groups, into testers, and let's say I forgot to add a permission for all the users in this group; imagine there were 20 users attached (I won't bore you with creating 20, the two we have are enough for the demonstration) and I now need to add another permission, for example full access to DynamoDB. I add that permission to the group, and if I go back to my users, user-3 has automatically inherited it, and so has user-1. This makes it easy to manage a lot of users: you simply go to one place, the group, and add and remove policies as you see fit, and the whole team inherits the change. (A hedged sketch of the same group workflow with the SDK follows.)
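A minimal boto3 sketch of the group-based approach described above; the group and user names follow the demo (and assume the users already exist), and AmazonEC2FullAccess / AmazonDynamoDBFullAccess are the AWS managed policies the walkthrough refers to.

```python
import boto3

iam = boto3.client("iam")

# Create the group and give it the EC2 permissions.
iam.create_group(GroupName="testers")
iam.attach_group_policy(
    GroupName="testers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2FullAccess",
)

# Membership is all it takes for a user to inherit the group's policies.
for user in ("user-1", "user-3"):          # assumes these users already exist
    iam.add_user_to_group(GroupName="testers", UserName=user)

# Later, add DynamoDB access in one place and every member inherits it.
iam.attach_group_policy(
    GroupName="testers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
)
```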
That is the best practice. So there you have it: in this demonstration you've learned how to create users, how to create groups, how to add users to groups, and how to manage their policies in line with best practice. You've also learned how to attach to an S3 bucket a policy that permits access based on whether a user has multi-factor authentication set up, and we showed how to set that up when creating an IAM user. Again, the bucket's Permissions tab is where this all happens. I hope you enjoyed that; put it to good use, and I'll see you in the next demonstration. To wrap things up, here's a summary of what was covered throughout the tutorial. We first started with what AWS security is, and looked at how important it is to maintain best practice across our organization and to standardize that practice across accounts. We looked at the topmost services, then concentrated on IAM: what exactly IAM is in terms of IAM users, groups, long-term credentials, IAM policies, and the concept of a role. We covered the benefits of IAM and the terminology used when working with it, a principal being an entity, either a user or a service itself, that can gain access to AWS resources. We saw the different types of authentication, and how to implement authorization via IAM policies and through the use of roles. We looked at how to organize our users into groups, and what goes into acquiring a role, through a demonstration where we created a role and attached it to an EC2 instance. Finally, we looked at the high-level features of IAM, which allow us to grant access to an IAM user from another account, implement multi-factor authentication like we just did in the demonstration, or assure ourselves that we are following all the compliance standards required by whatever project we're working on, with AWS there to support us.
Now we're going to talk about Amazon ECS, a service used to manage Docker containers. Without any further ado, let's get started. In this session we'll cover some basics of AWS and then dive straight into why Amazon ECS exists and what Amazon ECS is in general. ECS builds on Docker, so we'll look at what Docker is. There are competing services available (ECS is not the only way to manage Docker containers), so we'll talk about the advantages of ECS, and then its architecture: how it functions, what components it is made of, and what each component does. We'll discuss how it all connects and works together, look at companies that use ECS, what challenges they faced and how ECS helped fix them, and finally we have a lab that shows how to deploy Docker containers on Amazon ECS. First, what is AWS? Amazon Web Services, AWS for short, is a web service in the cloud that provides a variety of services such as compute power, database storage, content delivery, and many other resources, so you can scale and grow your business and focus on it, rather than on your IT demands, and let Amazon take care of your IT. What can you do with AWS? You can create and deploy any application in the cloud, not just deploy it: AWS has all the tools and services required, so the same tools you would have installed on your laptop or your on-premises desktop for your development environment can be installed and used from the cloud. So you can use the cloud for building, and the same cloud for deploying and making your application available to your end users. Those end users could be internal, they could be on the internet, they could be spread all around the world; it doesn't matter. AWS provides its services over the internet; that's how your users worldwide use the service you create and deploy, and that's also how you access those services yourself. It's like an extension of your data center into the internet: compute services, database services, and a lot more, all securely accessible over the internet. And the best part is the pay-as-you-go model: most services require no long-term or upfront commitment, there's no overpaying and no buying in advance, and you only pay for exactly what you use.
Now let's talk about what ECS is. ECS is a service that manages Docker containers; it isn't a standalone product or feature, it's a service that depends on Docker containers. Before Docker containers, applications ran on a VM, a host, or a physical machine, which is memory-bound and latency-bound, and the server itself can have issues. Say Alice is trying to access her application, which runs somewhere on premises, and the application isn't working. What could the reason be? Memory is full, the server is currently down, there's no spare physical server to launch the application on, and plenty of other reasons. In short, there's very little high availability, in fact a single point of failure. With ECS, the services can breathe freely and run seamlessly; how that's possible is what we'll discuss in the upcoming sections. Because of containers, and ECS managing those containers, applications can run in a highly available mode: if something goes wrong, another container gets spun up and your application runs in that container, so there's very little chance of the application going down. That's not possible with a physical host, and barely possible with a VM, or at least it takes considerable time for another VM to spin up. So why ECS? Amazon ECS maintains the availability of the application and allows you to scale containers when necessary. Availability means making sure your service is running 24/7, and containers help ensure that. Beyond that, suppose there is a sudden increase in demand: you have a thousand users, and the next week there are 2,000. How do you meet that demand? Containers make it very easy. With a VM or a physical host you would literally have to buy another machine, add more RAM, more CPU, or cluster several hosts together; a lot of work to achieve both high availability and the capacity to meet demand. ECS, on the other hand, automatically scales the number of containers needed and meets your demand for that particular hour.
So what is Amazon ECS? ECS stands for Elastic Container Service. It's essentially a container management service that can quickly launch, stop, and manage Docker containers on a cluster, a management service for the Docker containers you run in the AWS environment. In addition, it helps schedule the placement of containers across your cluster. Think of two physical hosts joined together as a cluster: ECS helps decide where each container should be placed, on host one or host two. That placement logic is defined in ECS; you can define it yourself or let ECS take control, and in most cases you will define it. For example, if two containers interact heavily, you don't want to place them on two different hosts; you want them on one host so they can talk to each other efficiently, and that logic is defined by us. You can launch containers using the AWS Management Console, and you can also launch them programmatically using the SDKs Amazon provides, from a Java program, a .NET program, or a Node.js program, as the situation demands. So there are multiple ways: through the console and through code. ECS also helps you migrate applications to the cloud without changing the code. Any time you think about migration, the first worry is what the new environment will look like: what's the IP, what storage is used, which environment parameters do I have to fold into my code? With containers, that worry is largely taken away, because you can recreate the exact environment you had on premises in the cloud. There's no need to change application parameters or code: if a container ran on my laptop, it's going to run in the cloud as well, because I'm using the very same container. In fact, you ship it: you move the container image from your laptop to Amazon ECS and run it there, so the very same image that was running on your laptop runs in the cloud or production environment.
So what is Docker? We know that ECS helps to quickly launch, stop, and manage Docker containers, but what is Docker itself? Docker is a tool that automates the deployment of an application as a lightweight container so that the application can work efficiently in different environments. This is pretty much what we discussed just before: I can build an application on my laptop or on premises inside a Docker container, and any time I want to migrate, I don't have to rewrite the code and rerun it in the new environment. I simply create a Docker image, move that image to the production or cloud environment, and launch it there; no recompiling, no relaunching the application. You pack all your code into a Docker container image, ship it to the new environment, and launch the container there. A Docker container is a lightweight package of software that contains all the dependencies: when packing, you include the code, the framework, and the libraries required to run the application. So in the new environment you can be confident it's going to run, because it's the very same code, the very same framework, and the very same libraries you shipped; nothing in the new environment changes what runs inside the container. Docker containers are also highly scalable and very efficient. Suppose you suddenly need 20 more containers to run the application: think of adding 20 more hosts or 20 more VMs and how long that would take, and compare it with the time containers need to scale to that number; it's minimal, almost negligible. Boot-up time is very short because a container doesn't load a whole operating system; containers use Linux kernel features such as cgroups and namespaces to segregate processes, so they run independently in any environment and start very quickly. The data stored by containers is also reusable: you can attach an external data volume, map it to the container, and whatever data the container writes to that volume can be remapped to another application, to the next container, or to the next version of the application you launch, without rebuilding the data from scratch; whatever data the previous container was using is available to the next container as well. And finally, containers are isolated applications: by design, Docker isolates one container from another, so even when applications run on the same host, the same laptop, or the same physical machine, say 10 containers running 10 different applications, you can be sure they are well isolated from one another.
Now let's talk about the advantages of ECS. The first is improved security, which is built into ECS. With ECS we have a container registry where all your images are stored, and those images are accessed only over HTTPS. Beyond that, the images are encrypted, and access to them is allowed or denied through Identity and Access Management (IAM) policies. For example, with two containers running on the same instance, one container can have access to S3 while the others are denied it; that kind of granular security can be achieved when we combine containers with the other security features in AWS, such as IAM, encryption, and HTTPS access. Containers are also very cost-efficient. As mentioned, they are lightweight processes, so we can schedule multiple containers on the same node, which lets us achieve high density on an EC2 instance. An underutilized EC2 instance is much less likely with containers, because you can pack the instance with more of them and make the best use of its resources: running applications directly, you might launch just one per server, but with containers you can run, say, 10 different applications on the same EC2 instance, all drawing on the same resources. ECS not only deploys the containers, it also maintains their state and makes sure the minimum required set of containers is always running, which is another way it keeps costs down; any time an application fails there's a direct impact on the company's revenue, and ECS helps ensure you're not losing revenue because your application went down. ECS is also extensible. In many organizations, a majority of unplanned work comes from environment variation: a lot of firefighting happens when code is moved or redeployed into a new environment. Docker containers address that, because the environment is not a concern: the application is sealed inside the container, and anywhere the container can run, the application runs exactly the way it did before. In addition, ECS is easily scalable, and it offers improved compatibility, both of which we've already discussed.
Now let's talk about the architecture of ECS. At its heart is the ECS cluster itself, a group of servers running the ECS service, and it integrates with Docker, so there's also a Docker registry, a repository where all the Docker container images are stored. In other words, the architecture has three components: the ECS cluster (the cluster of servers that will run the containers), the registry where the images are stored, and the container image itself. A container image is the template of instructions used to create a container: what OS to use, which version of Node should run, whether any additional software is needed; those questions are answered in the image. The registry is the service where Docker images are stored and shared: many people can store images there and many can access them, another team can pull an image someone else pushed, or one person can pick an image the team built and ship it to the customer or to the production environment; all of that happens in the container registry. Amazon's version of the container registry is ECR, and Docker itself provides one as well: Docker Hub. ECS, then, is the group of servers that runs the containers. The container image and the container registry deal with Docker purely in image form; it's in ECS that the container comes alive, becomes a compute resource, and starts handling requests, serving pages, or running batch jobs, whatever your plan for that container is. This cluster of servers integrates well with familiar services like VPC. VPC is known for securing and isolating your whole environment, or a particular piece of infrastructure, from other customers, from other clients in your account, or from other applications in your account; it's the service that gives you network isolation. ECS integrates well with VPC, and VPC enables us to launch AWS resources such as Amazon EC2 instances into a virtual private network that we specify, which is basically what we just discussed.
Now let's take a closer look: how does ECS work? ECS has a few components within it. The ECS servers can run across availability zones; in the diagram there are two availability zones. ECS also has two modes, Fargate mode and EC2 mode: in one part of the diagram we see Fargate, and where nothing is marked it's EC2 mode. Each task has its own network interface attached, because tasks need to run in an isolated fashion: any time you want network isolation you need a separate IP, and for a separate IP you need a separate network interface, so there's a separate Elastic Network Interface for each of those tasks and services, and all of this runs inside a VPC. Let's talk about the Fargate service, since tasks here are launched using Fargate. Fargate is a compute engine in ECS that allows users to launch containers without having to monitor the cluster. ECS is already the service that manages containers for you; otherwise managing containers would be a full-time job. With the basic service you still manage the cluster behind ECS, but if you want Amazon to manage that for you as well, you can go for Fargate. The tasks we mentioned have two components: the ECS container instance and the container agent. As you might have guessed, an ECS container instance is an EC2 instance capable of running containers; not every EC2 instance can, so these are specific EC2 instances, and they are called ECS container instances. The container agent is the piece that binds the cluster together and does a lot of the housekeeping: it connects instances to the cluster, makes sure the needed versions are present, and so on. Put simply, container instances are Amazon EC2 instances that run the Amazon ECS container agent, and the container agent is responsible for communication between ECS and the instance; it also reports the status of the running containers, monitoring their state, making sure they are up and running, and reporting anything wrong to the appropriate service so the container can be fixed. We don't manage the container agent; it runs by itself, and you won't be configuring anything in it. The Elastic Network Interface is a virtual network interface that can be attached to an instance in a VPC; in other words, it's how a container interacts with other containers, with the EC2 host, and with the outside world on the internet. Finally, a cluster is a set of ECS container instances, which isn't hard to understand: it's simply a group of EC2 instances running the ECS agent. The cluster handles scheduling, monitoring, and scaling requests; we know ECS can scale the containers, and how that scaling happens is monitored and managed by the ECS cluster.
Now let's talk about some of the companies that use Amazon ECS. To name a few: Okta uses ECS clusters. Okta is a product that uses identity information to grant people access to applications on multiple devices at any given time, with very strong security protection, and Okta uses Amazon ECS to run its application and serve its customers. Abema TV, an internet TV channel, had already adopted microservices and Docker containers, and when they looked for a service they could use in AWS, ECS was the one they could adapt to immediately; because their engineers were already using Docker, it was easy for them to move to ECS and take advantage of the benefits it provides. Previously they had to do a lot of that work themselves, but now ECS does it for them. Similarly, Remind, Ubisoft, and GoPro are among the well-known companies that use Amazon ECS and benefit from its scalability, its cost model, its managed nature, and the portability and migration options that ECS provides.
Now let's talk about how to deploy a Docker container on Amazon ECS. The way to do it is: first we need an AWS account, and then we set up and run our first ECS cluster; in our lab we're going to use the launch wizard to create an ECS cluster and run containers in it. Next comes the task definition. The task definition describes the size and the number of containers: for size, how much CPU and how much memory you need, and for number, how many containers you're going to launch, whether it's five, ten, or just one running all the time. That kind of information goes into the task definition. Then we can do some advanced configuration on ECS, such as load balancers, which port numbers you want to allow or block, who gets access and who shouldn't, and which IPs you want to allow or deny requests from. This is also where we name the container, to differentiate one container from another, and name the service: is it a backup job, a web application, or a data container serving your back end? And we set the desired number of tasks you want running at all times; those details go in when configuring the ECS service. Then you configure the cluster, putting the security settings in at the configure-cluster stage, and finally you end up with an instance and a bunch of containers running on it. (Before the console demo, here's a hedged sketch of the same pieces expressed with the SDK.)
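This is a minimal boto3 sketch of the cluster / task definition / service flow described above for Fargate, not the console wizard itself; the cluster and family names echo the demo, while the execution role, subnet, and security group IDs are placeholders you would substitute.

```python
import boto3

ecs = boto3.client("ecs")

# 1) The cluster that will run the containers.
ecs.create_cluster(clusterName="simplylearn-ecs-demo")

# 2) The task definition: image, size (CPU/memory), and port mapping.
ecs.register_task_definition(
    family="first-run-task-definition",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",     # 0.25 vCPU
    memory="512",  # 0.5 GB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "sample-app",
        "image": "httpd:2.4",          # public Apache image, as in the demo
        "essential": True,
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
)

# 3) The service: how many copies to keep running, and where.
ecs.create_service(
    cluster="simplylearn-ecs-demo",
    serviceName="sample-app-service",
    taskDefinition="first-run-task-definition",
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholders
        "securityGroups": ["sg-0123456789abcdef0"],         # placeholder
        "assignPublicIp": "ENABLED",
    }},
)
```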
All right, let's do a demo. I've logged in to the AWS console, and let me switch to the appropriate region: North Virginia. Now I look for ECS, and the console describes it as a service that helps you run and manage Docker containers. Well and good; click on it. I confirm I'm in North Virginia, go to Clusters, and here we can create a cluster; there's Fargate and there's the EC2 launch type for Linux and Windows environments, but I'm going to launch through the first-run walkthrough, which gives a lot of guidance. The steps involved are creating a container definition (which is what we'll do first), then a task definition, then a service, and finally the cluster; a four-step process. In the container definition we define the base image we're going to use. Here I'm going to launch httpd, a simple HTTP web server, so the httpd 2.4 image is fair enough for me. It's not a heavy application, so 0.5 GB of memory is enough, and 0.25 vCPU is enough in our case; you can always edit these based on your requirements. Because I'm using httpd, the port mapping is already port 80; that's how the container is going to receive requests. There's no health check for now; when you design critical, more complicated environments you can include one. This is the CPU we've chosen, which you can edit, and I'm going to use a few shell commands to create an HTML page. The page says "Amazon ECS sample app: your application is now running on a container in Amazon ECS"; that index.html gets created and placed in the appropriate location so it can be served from the container, and if you replace it with your own content, then it serves your own content. ECS comes with some basic logs, and these are the places where they get stored; that's not the focus for now. As I said, you can edit and customize all of this, but we're just getting familiar with ECS. The task definition is named first-run-task-definition, it runs in a VPC, and it's in Fargate mode, meaning the servers are completely handled by Amazon; the task memory is 0.5 GB and the task CPU is 0.25 vCPU. Then comes the service: is it a batch job, a front end, a back end, or a simple copy job? The name of the service goes here, and again you can edit it. Here's the security group: for now I'm allowing port 80 to the whole world, and if I wanted to restrict it to a certain IP, I could. The default option for load balancing is no load balancer, but I can also choose one and map port 80 on the load balancer to port 80 on the container. Let's do that: use a load balancer that receives HTTP on port 80. We're on the last step: what's the cluster name? The cluster name can be simplylearn-ECS-demo. Next, we're done, and we can create. It's launching a cluster, as you can see, picking up the task definition we created and using it to launch a service; there are the log groups we discussed, and it's creating a VPC (remember, ECS pairs well with VPC) with two subnets for high availability, creating the security group with port 80 open to the world, and putting everything behind a load balancer. This generally takes around 5 to 10 minutes, so we just need to be patient and let it finish. Once it's complete, we can simply access the service using the load balancer URL.
While this is being created, let me take you to some of the other services that integrate with ECS. First, the ECR repository: this is where your images are stored. Right now I'm not pulling my image from ECR; I'm pulling it directly from the internet, from Docker Hub. But all custom images are stored in this repository. You can create a repository, call it app-one, and create it; here's my repository. Any Docker image I build locally, I can push using the commands shown right here, and it gets stored in the repository, and I can then make ECS connect with ECR and pull images from there; those would be my custom images. (A hedged sketch of creating such a repository with the SDK follows.)
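A small boto3 sketch of the ECR step above; it creates the repository and fetches the temporary registry credentials that the console's push commands rely on, with the repository name taken from the demo (the actual docker build and docker push commands still run on your machine).

```python
import base64
import boto3

ecr = boto3.client("ecr")

# Create the repository that will hold the custom image.
repo = ecr.create_repository(repositoryName="app-one")
print("Push images to:", repo["repository"]["repositoryUri"])

# The console's push commands authenticate docker with a temporary token like this.
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
print("Registry endpoint:", auth["proxyEndpoint"])  # use with `docker login`
```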
As of now, because I'm using a default image, it's pulled directly from the internet. Let's go to EC2 and look for the load balancer, since we want to access the application from behind one; here's the load balancer that was created for us. Meanwhile the cluster is now created, and you can see there's one service running. Click on the cluster: here's the name of our application, and here are the tasks, the different containers we're running. If you click on a task, we have the IP of that container, it says it's running, it shows when it was created and started, and it shows the task definition this container uses, meaning the template, the details, and the version information all come from there; it belongs to the cluster called simplylearn-ECS-demo, and you can also get some container logs from here. Going back, there are no ECS instances listed, because remember this is Fargate: you're not managing any ECS instances, which is why you don't see any here. Let's go back to the task, to the page where we found the IP, pick that IP, put it in the browser, and there's the sample HTML page served from a container. Now let me go back to EC2, find the load balancer, pick its DNS name, put it in the browser, and the application is accessible through the load balancer URL as well. This URL can be mapped to other services like DNS, and it can be embedded in any application you want to connect to this container. Using the raw IP is not advisable, because containers can die, a new container gets created, and that new container gets a new IP; hardcoding dynamic IPs is a bad idea, so you would use the load balancer and put its URL into the application that needs to interact with the container. It was a pleasure walking you through this ECS topic. We learned what AWS is, why we use ECS, what ECS is in general and what Docker is specifically, the advantages of ECS, its architecture and components, how everything connects and works together, the companies that use ECS and their use cases, and finally a lab on launching ECS with Fargate through the portal.
I'm very glad to walk you through this lesson about Route 53. In this section we're going to cover the basics of AWS, then dive into why we need Amazon Route 53, and then expand into the details of Route 53: the benefits it provides over its competitors, the different types of routing policy it supports, and some of its key features. We'll also talk about how to access Route 53, the different methods you can use, and finally we'll end with a demo. So what is AWS? Amazon Web Services, AWS for short, is a cloud provider that offers a variety of IT and infrastructure services, such as compute power, databases, content delivery, and other resources that help us scale and grow our business. AWS is being adopted by a lot of customers because it's easy to use, even for a beginner. As for safety, the AWS infrastructure is designed to keep data safe regardless of its size: whether it's a small amount of data or everything you have in terabytes and petabytes, Amazon can keep it safe in its environment. And the most important reason many customers move to the cloud is the pay-as-you-go pricing: there's no long-term commitment, and it's very cost-effective. What this means is that you're not paying for resources you don't use. On premises you usually do: you buy a server, you estimate capacity for the next five years, and you only hit peak capacity after three or four years, yet you bought that capacity up front and have been paying for it from day one while utilization slowly ramps from 40% to 60%, 70%, 80%, and eventually 100%. In the cloud it's not like that. You only pay for the resources you actually use: any time you want more, you scale up and pay for the scaled-up resource, and any time you want less, you scale down and pay less for the scaled-down resource.
Now, why Amazon Route 53? Take this scenario: Rachel opens her web browser, and the URL she hits isn't working. There are a lot of possible reasons: server utilization went too high, memory usage went too high, and so on. She starts to wonder whether there's an efficient way to scale resources according to user requirements, or to mask those failures and divert traffic to an active, healthy resource that's running the application. In IT you always want to hide failures: mask the failure and direct the customer to another healthy service that is running. None of your customers want to see "server not available" or find your service not working; that's not impressive to them. Then there's Tom, an IT guy, who comes up with an idea and answers Rachel: yes, we can scale resources efficiently using Amazon Route 53. In effect he's saying that we can mask failures and keep services up and running, providing higher availability to customers with Route 53. He goes on to explain that Amazon Route 53 is a DNS service that gives developers an efficient way to connect users to internet applications without any downtime, and downtime is the key word here: Route 53 helps avoid the downtime your customers would otherwise experience. You may still have downtime on your server or in your application, but your customers won't be made aware of it. Rachel finds that interesting and wants to learn more, so Tom explains the important concepts of Amazon Route 53, which is exactly what I'm going to explain to you as well. So what is Amazon Route 53? It is a highly scalable DNS (Domain Name System) web service, and it has three main functions. First, if a website needs a name, Route 53 registers the domain name for it: if you want to buy a domain name, you can buy it through Route 53. Second, Route 53 is the service that connects the user to your server, the one running your application or serving your web page. When a user types your domain name into the browser, Route 53 helps connect their browser to the application running on an EC2 instance or whatever other server you're using to serve that content. Third, Route 53 checks the health of your resources by sending automated requests to them over the internet; that's how it identifies a failed resource, whether an infrastructure failure or an application-level failure. It keeps checking, so it notices the problem before the customer does, and then it does the magic of shifting the connection from one server to another. We call that routing, and we'll talk about it as we progress.
The benefits of using Route 53: first, it's highly scalable. Suppose the number of requests, the number of people trying to access your website through the domain name you bought, suddenly increases; Route 53 can handle even millions of requests, because it's highly scalable and managed by Amazon. It's also reliable: it can handle large query volumes without you, the customer who bought it, having to interact with it; you don't scale it up yourself when you expect more requests, it scales automatically, and it's consistent, with the ability to route users to the appropriate application through the logic you configure. It's very easy to use, as you'll see when we do the lab: you buy the domain name and simply map it to your server by putting in the IP, or map it to a load balancer by putting in the load balancer URL, or to an S3 bucket by putting in the bucket's URL; it's pretty straightforward to set up. It's cost-effective, in that you only pay for the service you use: the billing is based on the number of queries you receive, the number of hosted zones you have created, and a couple of other items, so there's no wasted money. And it's secure, because access to Route 53 is integrated with Identity and Access Management (IAM), so only authorized users gain access. The trainee who joined yesterday won't get access, and a third-party consultant working in your environment can be blocked because they're not an admin or a privileged user; only privileged users and admins gain access to Route 53 through IAM.
uh in route 53, record is nothing but an entry. So when you do that, you choose a routing policy, right? Routing policy is
nothing but it determines how Route 53 responds to your queries, how the DNS queries are being responded, right?
The first one is the simple routing policy. We use simple routing for a single resource. In other words, simple routing lets you configure DNS with no special Route 53 routing logic; it's one-to-one, for a single resource that performs a given function for your domain. For example, if you simply want to map a URL to a web server, that's straightforward simple routing: it routes traffic to a single resource, such as a web server for a website. With simple routing, multiple records with the same name cannot be created, but multiple values can be put in the same record.
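To make that concrete, here is a minimal sketch of creating a simple A record with the AWS SDK for Python (boto3), which is one of the access methods covered later in this lesson. The hosted zone ID, domain name, and IP address are placeholders for illustration, not values from this course.

```python
import boto3

route53 = boto3.client("route53")

# Simple routing: one record, one resource. UPSERT creates the record or
# overwrites it if it already exists.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",          # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Simple routing: map the domain to a single web server",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",      # placeholder domain
                "Type": "A",
                "TTL": 300,
                # Multiple values are allowed in the same record, but you
                # cannot create a second record with the same name and type.
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```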
The second type of routing policy is failover routing. We use failover routing when we want to configure active-passive failover: if something fails, you want to fail over to the resource that was previously the backup, which now becomes the active server. Failover routing routes traffic to a resource as long as that resource is healthy, and to a different resource when the first one becomes unhealthy. In other words, whenever a resource goes unhealthy, it does everything needed to shift traffic from the primary resource to the secondary resource, that is, from the unhealthy resource to the healthy one. These records can route traffic to anything from an Amazon S3 bucket to a complex tree of records; it will become clearer when we configure the records. For now, just understand that the failover routing policy can route traffic to an S3 bucket or to a website backed by a complex tree of records.
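As a hedged sketch of how active-passive failover might be wired up with the SDK: first create a health check for the primary endpoint, then create PRIMARY and SECONDARY failover records for the same name. All IDs, names, and IP addresses below are placeholder values, not configuration from this course.

```python
import boto3

route53 = boto3.client("route53")

# Health check against the primary endpoint (placeholder address and path).
check = route53.create_health_check(
    CallerReference="primary-web-check-001",     # any unique string
    HealthCheckConfig={
        "IPAddress": "203.0.113.10",
        "Port": 80,
        "Type": "HTTP",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Primary and secondary records for the same name; Route 53 answers with the
# secondary only when the primary's health check is failing.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com.", "Type": "A", "TTL": 60,
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": check["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com.", "Type": "A", "TTL": 60,
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "203.0.113.20"}],
        }},
    ]},
)
```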
Next is the geolocation routing policy. As the name says, geolocation routing makes the routing decision based on the geographic location of the user. In other words, when the user's location is your primary criterion for sending a request to the appropriate server, you would use geolocation routing. It lets you localize content and present part or all of the website in the user's language. For example, you would direct a user from the US to the English website, a user from Germany to the German website, and a user from France to content specific to customers in France, a French website. If that's your requirement, this is the routing policy to use. Geographic locations are specified by continent, by country, or by state within the United States; only in the United States can you split down to the state level, for the rest of the countries it's at the country level, and at a higher level you can also route by continent.
The next type of routing policy is geoproximity routing. We use geoproximity routing when we want to route traffic based on the location of our resources, and optionally shift traffic from resources in one location to resources in another. Geoproximity routing routes traffic based on the geographic location of both the user and the resources they want to access. It also gives you the option to route more or less traffic to a given resource by specifying a value known as a bias. It's a bit like a weight, but we also have weighted routing, which is different, so a different name, bias, is used here. You can send more traffic to a particular resource by putting a bias on that routing condition; a bias expands or shrinks the size of the geographic region from which traffic is routed to a resource.
Then we have latency-based routing. Just like the name says, we use latency-based routing when we have resources in multiple AWS regions and want to route traffic to the region that provides the best latency at any given point in time. So if one website needs to be hosted in multiple AWS regions, latency-based routing is what gets used. It improves performance for users by serving requests from the AWS region with the lowest latency. If performance is your criterion, then at any given point, irrespective of what happens in Amazon's infrastructure or on the internet, your users get routed to the best-performing region. To use latency-based routing, we create latency records for the resources in multiple AWS regions.
The next type is the multivalue answer routing policy, where we can make Route 53 respond to DNS queries with up to eight healthy records selected at random, so you're not loading one particular server. We can define up to eight records, and Route 53 will respond to queries from them on a random basis, so it's not one server that gets all the requests; eight servers get the requests in a random fashion. What we gain from this is that traffic is distributed across many servers instead of just one. Multivalue routing configures Route 53 to return multiple values in response to a DNS query, and it also checks the health of the resources and returns values only for the healthy ones. Say that out of the eight servers we have defined, one is unhealthy; Route 53 will not include that unhealthy server in its responses, so it effectively treats the list as only seven servers. The ability to return multiple health-checkable IP addresses improves availability and load balancing.
Finally, there is the weighted routing policy, which is used to route traffic to multiple resources in proportions that we specify. Weighted routing associates multiple resources with a single domain name or subdomain and controls how much traffic is routed to each resource. This is very useful when you're doing load balancing or testing new versions of software. When you have a new version of the software, you really don't want to send 100% of the traffic to it; you want customer feedback first. So you might send only 20% of the traffic to the new application, gather feedback, and if all is good, move the rest of the traffic over. Any software or application launch like that would use weighted routing.
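Here is a minimal, hedged sketch of that soft-launch scenario done with boto3: two weighted records for the same name, sending roughly 80% of resolutions to the current version and 20% to the new one. The zone ID, name, and addresses are placeholders.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com.", "Type": "A", "TTL": 60,
            "SetIdentifier": "current-version", "Weight": 80,   # ~80% of queries
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com.", "Type": "A", "TTL": 60,
            "SetIdentifier": "new-version", "Weight": 20,        # ~20% of queries
            "ResourceRecords": [{"Value": "203.0.113.20"}],
        }},
    ]},
)
```

Raising the new version's weight over time, and eventually retiring the old record, completes the rollout.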
Now let's talk about the key features of Route 53. One key feature is traffic flow: it routes end users to the endpoint that should give them the best experience, which is what we discussed with the routing policies; it uses latency-based routing, geolocation-based routing, and failover routing to improve the user experience. Another key feature is domain registration: we can buy domain names right from the Route 53 console and use them in Route 53. Previously that was not the case, but now we can buy a domain directly from Amazon through Route 53 and assign it to any resource we want, so anybody browsing that URL gets directed to the server in AWS that runs our website. Then there are health checks: Route 53 monitors the health and performance of your application. Health checks make sure unhealthy resources are retired, or taken out of rotation, so your customers are not hitting an unhealthy resource and seeing a "service down" page. And there is weighted round-robin load balancing, which is helpful for spreading traffic between several servers via a round-robin algorithm, so no single server absorbs all the traffic; you can split and shift traffic across servers based on the weights you configure, and weighted routing also helps with a soft launch of a new application or a new version of your website. There are different ways to access Amazon Route 53: through the AWS console, through the AWS SDKs, through the APIs, and through the command line interface, whether that's the Linux-flavored AWS CLI or the Windows PowerShell-flavored tooling.
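As a small illustration of the SDK access path, this hedged snippet lists the hosted zones in an account and the records inside the first one using boto3; it assumes credentials are already configured for the AWS CLI/SDK.

```python
import boto3

route53 = boto3.client("route53")

# Enumerate hosted zones, then the record sets in the first zone found.
zones = route53.list_hosted_zones()["HostedZones"]
for zone in zones:
    print(zone["Id"], zone["Name"])

if zones:
    records = route53.list_resource_record_sets(HostedZoneId=zones[0]["Id"])
    for record in records["ResourceRecordSets"]:
        print(record["Type"], record["Name"])
```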
Now let's look at some of the companies that use Route 53. Medium is an online publishing platform, more like social journalism, with a hybrid collection of professionals, publications, and exclusive blogs; it's essentially a blogging website, and it uses Route 53 for its DNS service. Reddit is a social news aggregation, web content rating, and discussion website that also uses Route 53. These are websites accessed throughout the world, and they rely on Route 53 being highly scalable: suddenly there's breaking news, their site gets accessed a lot, and they need to keep the service up and running all the time. Otherwise customers end up on a broken page and the number of people using the site goes down. For these companies, their site being highly available on the internet is critical and crucial, and they rely on Route 53 to meet that demand. Airbnb, Instacart, Kosar, and Stripe are other companies that use Route 53 as their DNS provider: so their customers get the best performance, so their websites are highly available, and so they can shift traffic between resources with weighted routing and make proper use of those resources.
Now let's quickly look at a demo. I'm in my AWS console, so let me click on Route 53. In this lab, we're going to simulate buying a domain name, then create an S3 static website and map that website to the DNS name. The procedure is the same for mapping a load balancer, for mapping CloudFront, and for mapping EC2 instances as well; we're picking S3 for simplicity, because our focus is really on Route 53. So let's go in and try to buy a domain name. First, let's check the availability of a domain name called simplylearndemoroute53. It is available for $12, so let me add it to the cart and come back. Once you continue, it asks for personal information; you provide it, check out, and it gets added to your order. Once you pay, Amazon takes around 24 to 48 hours to make that DNS name available. So the next stage is contact details, and the third stage is verify and purchase. Once we have bought the domain name, it becomes available in our DNS portal. I do have a domain name that I bought some time back, and it's available for me to use, so I can go to Hosted zones and simply start creating records. Hosted zones lists all my domain names; I click on the domain name, then click to create a record set, and here I can map an Elastic Load Balancer, an S3 website, a VPC endpoint, API Gateway, CloudFront, or an Elastic Beanstalk domain name. All of that gets mapped through this portal in about four or five button clicks. So I have a domain name bought, and now I'm going to go over to S3 and show you what I've done there.
In S3, I've created a bucket whose name matches the DNS name; let me clear out its contents. Under Permissions, I've turned off public access blocking and created a bucket policy, so this bucket is now publicly accessible. Then I went to Properties and enabled static website hosting, and I've told it the name of the file that's going to be my index document in this S3 bucket; I've entered index.html and saved it. Now we're going to create that index file. This is a sample page that says "Amazon Route 53 getting started: routing internet traffic to an S3 bucket for your website," plus a couple of other pieces of information, and I've saved it as an index.html file on my desktop. Let me upload it from my desktop into this bucket. The file name starts with a capital I, so let me go back to Properties, to static website hosting, and make sure I spell it exactly the same way, because it's case sensitive, and then save it. Now my website should be reachable through the static website endpoint URL, and it is: it's running through the static website URL. We're halfway through.
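The console steps above could also be scripted. Here is a hedged boto3 sketch that uploads the index document and enables static website hosting; it assumes the bucket already exists, has public access blocking turned off, and has a bucket policy allowing public reads, exactly as done in the console demo. The bucket name is a placeholder and, for Route 53 alias mapping, it must match the domain name.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example.com"   # placeholder: must match the registered domain name

# Upload the index page with the right content type so browsers render it.
s3.upload_file("index.html", bucket, "index.html",
               ExtraArgs={"ContentType": "text/html"})

# Turn on static website hosting; the index document name is case sensitive.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```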
Now let me go back to Route 53, back to Hosted zones, and into the domain name, and create a record set. It's going to be an alias record, and I can see my S3 static website endpoint listed there, so I click on it and create. Route 53 has now created a record pointing my domain name to the S3 endpoint I created, and my static website is served from it. Let me test it: I go to the browser and put in the domain name, and sure enough, when my browser queried for the domain name, Route 53 returned a response saying the domain is mapped to that static-website-enabled S3 bucket, along with the URL for the static website hosting, and my browser connected to the S3 bucket, downloaded the page, and displayed it. It's that simple and pretty straightforward.
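For reference, the alias record created in the console could be expressed with boto3 roughly as below. The alias target must use the S3 website endpoint for the bucket's region together with that region's S3 website hosted zone ID; the values shown are the commonly documented ones for us-east-1 and should be verified against the current AWS documentation, and the domain's own hosted zone ID is a placeholder.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",   # your domain's hosted zone (placeholder)
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",       # the registered domain / bucket name
            "Type": "A",
            "AliasTarget": {
                # Region-specific values for the S3 website endpoint; verify
                # both against the AWS documentation for your region.
                "HostedZoneId": "Z3AQBSTGFYJSTF",
                "DNSName": "s3-website-us-east-1.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```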
Today's session is on AWS Elastic Beanstalk. So what's in it for you today? We'll discuss what AWS is, why we need AWS Elastic Beanstalk, what AWS Elastic Beanstalk is, its advantages and disadvantages, the components of Elastic Beanstalk, its architecture, and the companies that primarily use it. Let's get started and first understand what AWS is. AWS stands for Amazon Web Services. It's a cloud provider that offers a variety of services such as compute power, database storage, content delivery, and many other resources. AWS is the largest cloud provider in the market, with a huge catalog of services on which you can apply your business logic and build solutions on the cloud platform.
Now, why do we need AWS Elastic Beanstalk? What happened earlier was that whenever developers created software, the modules they built had to be joined together into one big application. One developer creates a module that has to be shared with another developer, and if the developers are geographically separated, it has to be shared over a medium, probably the internet. That takes time, it's a difficult process, and in turn it makes building and shipping the application a lengthier process. Those were the challenges developers faced earlier, and to overcome them we have Elastic Beanstalk as a service in AWS. Elastic Beanstalk has made developers' lives much easier, in that applications can be shared and deployed across environments in a much shorter time. Now let's understand what AWS Elastic Beanstalk actually is.
AWS Elastic Beanstalk is a service developers use to deploy and scale web applications, and in fact not only web applications but any application they are building. The diagram is a simple representation of Elastic Beanstalk. It supports the common programming languages and runtime environments: Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker, and if you're looking for another programming language or runtime environment, you can request that AWS arrange it for you.
Now, what are the advantages of Elastic Beanstalk? The first advantage is that it's a highly scalable service. When we talk about scalability, it means that whenever we need resources on demand we can scale them up or scale them down, so we get the flexibility to change the amount and type of resources whenever we need it, and Elastic Beanstalk gives us that. That is something very difficult to achieve in an on-premises environment, because there you have to plan the infrastructure up front, and if you run short of resources you have to procure more. The second advantage is that it's fast and simple to begin with: you just focus on developing and building your application, and then you deploy it directly with Beanstalk. Beanstalk takes care of all the networking aspects, deploys your application onto servers in the back end, and you access it through a URL or an IP address. The third advantage is quick deployment, which follows from the same point: you don't have to bother with networking concepts, you just focus on application development, upload the application, deploy it, and you're good to go. Another advantage is that it supports a multi-tenant architecture. When we talk about tenants or multi-tenancy, it means you can have virtually isolated environments for separate organizations, or for divisions within an organization, and use them as separate entities on Beanstalk. And since it's scalable, it's also a flexible service. Next, it simplifies operations: once the application is deployed through Beanstalk, it becomes very easy to maintain and support it using the Beanstalk tooling itself. And the last advantage is that it's cost efficient: as with many AWS services, cost optimization can be managed much better with Elastic Beanstalk than if you were deploying the same application or solution on on-premises servers.
Now, there are some components associated with Elastic Beanstalk, and they are created in a particular sequence. Elastic Beanstalk consists of four important components that you work with while deploying an application: the application, the application version, the environment, and the environment tier, and we progress through that same sequence when deploying our software. The application is a logical collection of Elastic Beanstalk components; it acts like a folder that holds your environments, application versions, and environment configurations. The application version is a specific, labeled iteration of deployable code for your application, so each build you upload becomes a version. The third component is the environment: an environment runs only the current version of the application at a time. Remember that Elastic Beanstalk supports multiple versions, but if you want another version of the application to be running, you have to create another environment for it. Then comes the environment tier, which designates the type of application the environment runs. There are generally two types of environment tier, the web server tier and the worker tier, and we'll discuss them in more detail shortly.
Now let's understand how Elastic Beanstalk works. First, we create an application; this is the developers' task, and for it you can pick any supported runtime environment or programming language, such as Java, Docker, Ruby, Go, or Python, and develop your application on that runtime. Once the application is created, you upload the application version to AWS, and after the version is uploaded, you launch your environment. You just have to click a few buttons, nothing more. Once the environment is launched, you can view it through a web URL or through the IP address. What happens behind the scenes is that when you launch an environment, Elastic Beanstalk automatically runs an EC2 instance and, using the metadata you provided, deploys your application into that instance; you can see the instance in the EC2 dashboard as well. You don't have to set up the security groups, you don't have to handle the IP addressing, and you don't even have to log in to the instance to deploy your application; Beanstalk does it automatically. You just monitor the environment, and the statistics are available right in the Beanstalk dashboard, or you can view them in the CloudWatch logs. If you want to release an update, you simply upload a new version, deploy it, and keep monitoring your environment. These are the essentials for getting an application running on any platform, whether it's Node.js, Python, or anything else, and this is the sequence you follow when creating an environment. You could say it's a four-step deployment of your application. That's it.
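Those four steps can also be driven programmatically. Below is a hedged boto3 sketch that creates an application and launches a web server environment on a Node.js platform; the application name, CNAME prefix, and especially the solution stack string are placeholders, since the exact stack names change over time and are looked up here with list_available_solution_stacks.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Step 1: create the application (the sample app runs until a version is uploaded).
eb.create_application(ApplicationName="xyz", Description="demo app")

# Pick a current Node.js platform; assumes at least one Node.js stack exists.
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
node_stack = next(s for s in stacks if "Node.js" in s)

# Steps 2-3: launch the environment; Beanstalk provisions EC2 instances,
# security groups, and the rest behind the scenes.
eb.create_environment(
    ApplicationName="xyz",
    EnvironmentName="xyz-abc",
    CNAMEPrefix="xyz-abc",            # becomes xyz-abc.<region>.elasticbeanstalk.com
    SolutionStackName=node_stack,
    OptionSettings=[{
        "Namespace": "aws:autoscaling:launchconfiguration",
        "OptionName": "InstanceType",
        "Value": "t2.micro",
    }],
)
```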
Now, after you upload a version, the configuration is automatically deployed behind a load balancer, which means you can also access the application through the load balancer's DNS name. Apart from the load balancer, you can enable other features such as auto scaling, and if you want the EC2 instances where the application is deployed to sit inside a particular virtual private cloud or a particular subnet within that VPC, all of those options are available and selectable from Beanstalk itself; you don't have to go out to the VPC console or the EC2 dashboard and configure them separately, everything is available within the Beanstalk dashboard. That's why the presentation says that after creating an application, the deployed service can be accessed using a URL. Once the environment is created, there is a URL defined for it; you can choose the URL name you want for your application, check its availability, and then use that URL to access your application in the browser. Once that's done, you monitor the environment, which provides capacity provisioning, load balancing, auto scaling, and health monitoring; all of those features are available right there in Beanstalk.
Now let's understand the architecture of AWS Elastic Beanstalk. There are two types of environments you can select: the web server environment and the worker environment, and Beanstalk offers both so you can pick based on the requirement. The web server environment is the front-end-facing one; clients access it directly through a URL, so web applications are mostly deployed in that environment. The worker environment is for back-end applications, the small supporting services needed for the web application to run. Which one you choose depends on the requirement. The web server environment handles HTTP requests from clients, so we use it mostly for web applications or anything that works over HTTP, and HTTPS is supported as well, not just HTTP. The worker environment processes background tasks and minimizes the consumption of resources; again, it's like a microservice or application service running in the back end to support the web server environment.
Now, coming to how the AWS Elastic Beanstalk architecture is designed; you can refer to the diagram as well. In the web server environment, when the application receives client requests, Amazon Route 53 sends those requests to the Elastic Load Balancer. As we discussed, the web server environment is primarily the one that receives HTTP requests; it's the client-facing environment. Route 53 is the service used for DNS mapping; it's a global service, and it routes the traffic matching your domains to the load balancer, and from the load balancer the traffic is directed to the web server environment. The web server environment is really nothing more than EC2 instances running in the back end. In the diagram you can see there are two web servers, and they are created inside an auto scaling group, which means scaling options have been defined as well. The instances can be created in a single availability zone, or in different availability zones for redundancy. These web application servers are in turn connected to your databases, which will typically sit in different security groups and might well be an RDS database. All of these functionalities and features are available from the Elastic Beanstalk dashboard itself. So what happens is this: when the application receives client requests, Route 53 sends those requests to the load balancer, and the load balancer then shares them among the EC2 instances. How does that happen? It uses a predefined algorithm, distributing the load evenly across both EC2 instances, or however many instances are running in the availability zones. Every EC2 instance can have its own security group, or they can share a common one. The load balancer is connected to the EC2 instances that are part of the auto scaling group, which, as mentioned, is defined from Beanstalk itself along with its scaling options. It might run a minimum number of instances right now and, based on the thresholds you define, increase the number of EC2 instances, and the load balancer keeps distributing the load to however many instances get created across the availability zones. The load balancer also performs an internal health check before sending real traffic to the instances that Beanstalk has created. So what does the auto scaling group do? It automatically starts additional EC2 instances to accommodate increasing load on your application, and it monitors and scales instances based on the workload, depending on the scaling thresholds you've defined. When the load on the application decreases, the number of EC2 instances decreases as well. We usually think of auto scaling as scaling up, adding EC2 instances, but you can also have a scale-down policy that terminates the extra instances when the load drops. All of that is managed automatically, all of it is achievable with Elastic Beanstalk, and it gives you better cost optimization for your resources. Elastic Beanstalk also has a default security group, and a security group acts as a firewall for the instances. The diagram shows the security group alongside auto scaling; you might create the environment in the default VPC, or in a custom VPC where you can add extra layers of security, for example defining network ACLs in front of the security groups, which gives you an additional filtering or firewall option. These security groups also let you attach security groups to the database servers; every database has its own security group, and a connection can be created between the web server environment that Beanstalk builds and the database security groups as well.
Now let's discuss the worker environment. The web server environment is the client-facing one: the client sends a request to the web server, and in this diagram the web server passes it on to SQS, the Simple Queue Service. The queue then hands it to the worker environment, which runs whatever background processing or application it was created for, and sends the results back to SQS, and so on in both directions. Looking at the architecture of Elastic Beanstalk with a worker environment: when a worker environment is launched, Elastic Beanstalk installs a daemon on every EC2 instance, just as it installs a server in the web server environment, and that daemon passes requests between the instance and the Simple Queue Service. SQS is an asynchronous service; you don't strictly have to use SQS, other services can play this role, but it's the example we're discussing here. SQS delivers each message via a POST request to an HTTP path in the worker environment, and there are plenty of customer case studies built around this pattern that you can find on the internet. The worker environment then executes the task described by the SQS message and replies with an HTTP response once the operation is complete. A quick recap: the client requests access to the application from the web server over HTTP; the web server passes the request to the queue service; the queue shares the message with a worker, which is generally an automated worker running in the worker environment; the worker sends an HTTP response back to the queue; and the client can then see the result through the web server. This is just one example; likewise, there are many other designs that use worker environments.
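To make the worker flow more concrete, here is a minimal, hedged sketch of a task handler a worker environment might run, assuming the default setup in which the Beanstalk worker daemon reads messages from the SQS queue and POSTs each message body to an HTTP path on the instance. The path, port, and message format here are assumptions for illustration, not details from this course.

```python
# Minimal worker-environment task handler using only the standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class WorkerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The worker daemon delivers the SQS message body in the POST body.
        length = int(self.headers.get("Content-Length", 0))
        task = json.loads(self.rfile.read(length) or b"{}")
        print("processing background task:", task)
        # Returning 200 signals success, so the daemon can delete the message
        # from the queue; an error status would cause a retry later.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Port 8080 is an assumption; the daemon's target port is configurable.
    HTTPServer(("0.0.0.0", 8080), WorkerHandler).serve_forever()
```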
Now, which companies use Elastic Beanstalk? A few of the companies that primarily use it are Zillow, Jelly Button Games, and the League of Women Voters, among others; if you search on the AWS site you'll find many more organizations that use Elastic Beanstalk for deploying their applications.
Now let's move to the practical side and look at a demo of AWS Elastic Beanstalk. First, you have to log in to the AWS console; I'm sure you already have accounts created, or you can use IAM credentials, and then you also have to select a region. I'm in the North Virginia region, but you can select any of the regions listed. Click on Services and search for Elastic Beanstalk; you'll find it under the Compute section. Open the service, and it will give you the option to create an environment, where you specifically choose either a worker environment or a web server environment. Let's wait for the service to open. Now the dashboard is available; this is how Elastic Beanstalk looks, and this is its symbol. Click Get Started and it wants to create a web app, but instead of that we'll create a new application. Click Create New Application, give it an application name, say XYZ, add a description such as "demo app," and click Create. It now says you have to select an environment; the application XYZ has been created, so just click Create One Now. It asks what kind of environment tier you want to select, and as we discussed, there are two types: the web server environment and the worker environment. Let's see how AWS defines them. AWS says it has two types of environment tiers to support different types of web applications: web servers are standard applications that listen for and process HTTP requests, typically over port 80, while workers are specialized applications that have a background processing task and listen for messages on an Amazon SQS queue; worker applications then receive those messages from the queue over HTTP and respond with an HTTP response. That's exactly what we saw in the Beanstalk slides, and the use case for a worker environment can be anything. Now we'll do a demo of creating a web server environment.
So just click Select. We have the environment name, and now we can define our own domain, which will end with <region>.elasticbeanstalk.com. Say I try a domain that is just XYZ, the same as the environment name; I check whether that domain name is available, and it says it's not. So I'll try another name and check the availability of XYZ-ABC, and it says yes, that one is available. Once I deploy my application, I'll be able to access it using this complete DNS name. You can add a description, again "demo app," and then you have to choose a platform. These are the platforms supported by AWS; say I want a Node.js environment, so I select the Node.js platform. The application code itself is developed by the developers, and you can upload it right now or later, once the environment is ready. You can either create the environment with all the default settings, or, if you want to customize it further, click Configure More Options. Let's click Configure More Options; here you can set various features, such as the type of EC2 instance or server the application should run on so Beanstalk can deploy your application onto it. If you want to change something, click the Modify button, and you can adjust the instances along with their storage. Apart from that, you can make changes to monitoring, databases, security, and capacity; in the Capacity section you can choose the instance type, which defaults to t2.micro, but if your application requires a larger instance you can pick a different type. Similarly, you can specify the AMI IDs, because the application obviously needs an operating system to run on, so you can select the particular AMI ID for your operating system. Let's cancel that. Likewise, there are many other features you can define right here from this dashboard without having to go to the EC2 console. Now let's create the environment, assuming we're going with the default configuration.
This creates our environment. The environment is being created, and you can follow the environment and its logs right in the dashboard. You'll see that the Beanstalk environment is being initiated and started, and if there were any errors, or once it's deployed correctly, you'll get all the logs right here. The environments are color-coded: there are different color codes defined, and if the environment shows green, everything is good to go. Here you can see that it has created an Elastic IP, checked the health of the environment, and created the security groups, which are auto-generated by Beanstalk, and the environment creation has started. You can also see that Elastic Beanstalk uses an Amazon S3 bucket for your environment data. This is the URL through which we'll access the environment, but we can't do that yet since the environment is still being created. Let's click on the application name; you can see it's in gray, which means the build is still in progress. Once it completes successfully, it should change to green, and then we'll be able to access our environment through the URL. Now, if I go over to the EC2 dashboard, I can check whether the instance has been created by Beanstalk, and see the difference between creating an instance manually and having Beanstalk create it. Click on EC2 and switch to the old EC2 experience, which is what we're familiar with, and look at the dashboard. Here you can see one running instance; let's open it. The instance for the XYZ environment created from Beanstalk is being initiated, and that is being done by Beanstalk itself; we have not gone to the dashboard and created it manually. In the security groups you'll see the AWS Beanstalk security groups defined, and it has Elastic IPs defined as well, so everything is being created by Beanstalk. Now let's go back to Beanstalk and check whether the color coding of our environment has changed from gray to green. And here you can see the environment has been created successfully and is colored green. We'll access the environment: it says it's a web server environment, the platform is Node.js running on 64-bit Amazon Linux, and it's the sample application.
The health status is OK. Now, the other thing is that if you don't want to use the web-based management console to work with Beanstalk, it also offers the Elastic Beanstalk CLI. You can install the command line interface, and there are CLI command references you can work through to get your applications deployed using the CLI itself; the screen shows one of the sample CLI commands you can look at. Now let's look at the environment: click on the environment and you're presented with the URL. It says the health is OK, these are the logs you'd follow if there were any issues, and the platform is Node.js, which is what we selected. The next thing is simply to upload and deploy your application: click Upload and Deploy, give it a version label or name, click Select File, pick the application bundle from wherever it's stored, upload it, and deploy. Just like the environment was created automatically, your application will be deployed automatically onto the instance, and you'll be able to view the output from this URL. It really is as simple as following these four steps.
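If you'd rather script the upload-and-deploy step, a hedged boto3 sketch might look like this; it assumes the zipped application bundle has already been uploaded to an S3 bucket you own, and all names are placeholders.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Register the new build as an application version (bundle already in S3).
eb.create_application_version(
    ApplicationName="xyz",
    VersionLabel="v2",
    SourceBundle={"S3Bucket": "my-app-bundles", "S3Key": "xyz-v2.zip"},
)

# Point the running environment at the new version; Beanstalk redeploys it
# onto the existing instances automatically.
eb.update_environment(EnvironmentName="xyz-abc", VersionLabel="v2")
```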
Now let's check whether the Node.js environment is running on our instance before we deploy an application. We'll just click on the URL; since Beanstalk has already opened the security group on HTTP port 80 to all traffic, we can view the output directly from the URL. And there it is, Node.js is running. After that, you just upload and deploy your application and get its output from the same URL. You can also map this URL with the Route 53 service: using Route 53 DNS, your domain names can be pointed to the Elastic Beanstalk URL, and from there to the applications running on the EC2 instances. You can point the domain directly at the Beanstalk URL, or, as we saw in the slides, use Route 53 to point to the load balancer and from there to the instances created by Beanstalk. So that was the demo of Beanstalk and how to run the environments. Apart from that, operational tasks and system operations can all be managed from the environment dashboard itself: you have the configuration, you have the logs, you can check the health status of your environment, you can do the monitoring, and you can see the alarms and events there. For example, if I want to see the logs, I can request them right here, I'm presented with the full log report, and I can download the log file and view it; the bundled logs come as a zip file. If you want to see the log of Elastic Beanstalk's activity, it's a plain text file, and in it you can see all the configuration Beanstalk has done on your environment and your instance. Similarly, you can go through health monitoring, alarms, events, and all of those things.
Hi, this is the fourth lesson of the AWS
solutions architect course. Migrating to the cloud doesn't mean that resources become completely separated from the
local infrastructure. In fact, running applications in the cloud will be
completely transparent to your end users. AWS offers a number of services to fully and seamlessly integrate your
local resources with the cloud. One such service is the Amazon virtual private cloud.
This lesson talks about creating virtual networks that closely resemble the ones that operate in your own data centers,
but with the added benefit of being able to take full advantage of AWS. So, let's get started.
In this lesson, you'll learn all about virtual private clouds and understand the concept behind them. You'll learn the difference between public, private, and elastic IP addresses, what public and private subnets are, and what an internet gateway is and how it's used. You'll learn what route tables are and when they are used, and what a NAT gateway is. We'll take a look at security groups and their importance, and at network ACLs and how they're used in an Amazon VPC. We'll also review Amazon VPC best practices and the costs associated with running a VPC in the Amazon cloud. Welcome to the Amazon Virtual Private Cloud and subnet section.
In this section, we're going to have an overview of what Amazon VPC is and how you use it, along with a demonstration of how to create your own custom virtual private cloud. We're going to look at IP addresses and the use of elastic IP addresses in AWS, and finally we'll take a look at subnets, with a demonstration of how to create your own subnets in an Amazon VPC. Here are some of the terms used with VPCs: subnets, route tables, elastic IP addresses, internet gateways, NAT gateways, network ACLs, and security groups. In the next sections, we're going to take a look at each of these and build our own custom VPC that we'll use throughout this course.
Amazon defines a VPC as a virtual private cloud that enables you to launch AWS resources into a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, but with the benefits of using the scalable infrastructure of AWS. A VPC is your own virtual network in the Amazon cloud, and it's used as the network layer for your EC2 resources. This is a diagram of the default VPC. There's a lot going on here, but don't worry: we're going to break down each of the individual items in this default VPC over the coming lesson. What you need to know is that the VPC is a critical part of the exam, and you need to know all the concepts and how it differs from your own networks. Throughout this lesson, we're going to create our own VPC from scratch, which you'll need to replicate at the end so you can do well in the exam. Each VPC that you create is logically isolated from other virtual networks in the AWS cloud, and it's fully customizable: you can select the IP address range, create subnets, configure route tables, set up network gateways, and define security settings using security groups and network access control lists. Each Amazon account comes with a default VPC that's preconfigured for you to start using straight away, so you can launch your EC2 instances without having to think about anything we mentioned in the opening section. A VPC can span multiple availability zones in a region. Here's a very basic diagram of a VPC; it isn't this simple in reality, and as we saw in the first section, the default Amazon VPC looks rather more complicated. What we need to know at this stage is that the CIDR block for the default VPC always has a /16 subnet mask. In this example it's 172.31.0.0/16, which means the VPC provides up to 65,536 private IP addresses (2 to the power of 32 minus 16, that is, 2^16 = 65,536). In the coming sections, we'll take a look at all of the different items you can see on this default VPC.
But why wouldn't you just use the default VPC? Well, the default VPC is great for launching new instances when you're testing AWS, but creating a custom VPC allows you to make things more secure and to customize your virtual network: you can define your own IP address range, create your own subnets, both private and public, and tighten down your security settings. By default, instances that you launch into a VPC can't communicate with your own network, but you can connect your VPCs to your existing data center using something called hardware VPN access, so that you can effectively extend your data center into the cloud and create a hybrid environment. To do this, you need a virtual private gateway, which is the VPN concentrator on the Amazon side of the VPN connection, and on your side, in your data center, you need a customer gateway, which is either a physical device or a software application that sits on your side of the VPN connection. When you create a VPN connection, a VPN tunnel comes up when traffic is generated from your side of the connection. VPC peering is an important concept to understand. A peering connection can be made between your own VPCs, or with a VPC in another AWS account, as long as it's in the same region. What that means is that if you have instances in VPC A, they wouldn't be able to communicate with instances in VPC B or VPC C unless you set up a peering connection. Peering is a one-to-one relationship: a VPC can have multiple peering connections to other VPCs, but, and this is important, transitive peering is not supported. In other words, VPC A can connect to B and to C in this diagram, but C wouldn't be able to communicate with B unless they were directly peered. Also, VPCs with overlapping CIDRs cannot be peered; in this diagram they all have different IP ranges, which is fine, but if they had the same IP ranges, they wouldn't be able to be peered. And finally for this section, if you delete the default VPC, you have to contact AWS support to get it back again, so be careful with it and only delete it if you have a good reason to do so and know what you're doing.
This is a demonstration of how to create a custom VPC. Here we are back at the Amazon Web Services Management Console, and this time we're going to go down to the networking section at the bottom left. I'll click on VPC, and the VPC dashboard will load up. Now, there are a couple of ways you can create a custom VPC. There's something called the VPC wizard, which will build VPCs on your behalf from a selection of different configurations, for example a VPC with a single public subnet, or a VPC with public and private subnets. That's great because you click a button, type in a few details, and it does the work for you; however, you're not going to learn much or pass the exam if that's how you do it. So we'll cancel that, go to Your VPCs, and click Create VPC. We're presented with the Create VPC window, so let's give our VPC a name. I'm going to call it simplylearn_VPC, and this is the kind of naming convention I'll be using throughout this course. Next, we need to give it the CIDR block, the classless inter-domain routing block. We're going to use a very simple one, 10.0.0.0, and then we need to give it the prefix length. You're not allowed to make the network any larger than a /16, so if I try to put /15 in, it says no, that's not going to happen. For reference, a /15 would give you around 131,000 IP addresses, while a /16 gives you 65,536, which is probably more than enough for what we're going to do. Next, you get to choose the tenancy. There are two options, default and dedicated. If you select dedicated, your EC2 instances will reside on hardware that's dedicated to you, so your performance is going to be great, but your cost is going to be significantly higher. So I'm going to stick with default, and we just click Yes, Create. It takes a couple of seconds, and then in our VPC dashboard we can see our SimplyLearn VPC has been created.
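The same VPC could be created with the SDK instead of the console. Here is a hedged boto3 sketch; the name tag mirrors the console demo, and enabling DNS hostnames is an optional extra, not something done in the walkthrough above.

```python
import boto3

ec2 = boto3.client("ec2")

# A /16 VPC with default tenancy, matching the console demo.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="default")
vpc_id = vpc["Vpc"]["VpcId"]

# Give it the same kind of name tag used in the console.
ec2.create_tags(Resources=[vpc_id],
                Tags=[{"Key": "Name", "Value": "simplylearn_VPC"}])

# Optional: enable DNS hostnames for instances that receive public IPs.
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```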
Now, if we go down to the details pane to see the information about our new VPC, we can see it has a route table associated with it, which is our default route table. There it is, and we can see that it's only allowing local traffic at the moment. Going back to the VPC again, we can see it's also been given a default network ACL; we'll click on that and have a look, and you can see it's very similar to what we looked at in the lesson: it's allowing all traffic from all sources, inbound and outbound. Now, if we go to the subnet section and widen the VPC column here, you can see there are no subnets associated with the VPC we've just created. That means we won't be able to launch any instances into our VPC, and to prove it, I'll just show you. We'll go to the EC2 section, which is a glimpse into your future, since this is what we'll be looking at in the next lesson, and we'll quickly try to launch an instance. We'll select any instance, it doesn't matter which, any size, not important. Here in the network section, if I try to select simplylearn_VPC, it says no subnets found; this is not going to work. So we basically need to create some subnets in our VPC.
And that is what we're going to look at in the next lesson. Now, private IP addresses are IP addresses that are not reachable over the internet; they're used for communication between instances in the same network. When you launch a new instance, it's given a private IP address and an internal DNS hostname that resolves to that private IP address. But if you want to connect to the instance from the internet, that's not going to work; for that you need a public IP address, which is reachable from the internet. You can use public IP addresses for communication between your instances and the internet, and each instance that receives a public IP address is also given an external DNS hostname. Public IP addresses are assigned to your instances from Amazon's pool of public IP addresses. When you stop or terminate your instance, the public IP address is released, and a new one is assigned when the instance starts again. So if you want your instance to retain its public IP address, you need to use something called an elastic IP address. An elastic IP address is a static, persistent public IP address that's allocated to your account and can be associated with and disassociated from your instances as required. An elastic IP address remains in your account until you choose to release it, and there is a charge associated with an elastic IP address if it's in your account but not actually attached to an instance.
to create an elastic IP address. So, we're back at the Amazon Web Services Management Console. We're
going to head back down to the networking VPC section. and we'll get to the VPC dashboard. On
the left hand side, we'll click on elastic IPs. Now, you'll see a list of any
elastic IPs that you have in your account. And remember, any elastic IP address that isn't associated with something, you'll be charged for. So, I have one already, and that is currently associated with an instance. So, we want to allocate a new address. And it reminds you that there's a charge if you're not using it. I'm
saying yes, allocate. And it takes a couple of seconds. And there's our new elastic IP
address. Now, we'll be using this IP address to associate with the NAT gateway when we build that.
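For reference, allocating an Elastic IP (and optionally associating it) can also be scripted; a rough boto3 sketch, with the instance ID as a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a new Elastic IP in the VPC scope
eip = ec2.allocate_address(Domain="vpc")
print("Allocated:", eip["PublicIp"], eip["AllocationId"])

# Optionally associate it with a running instance (placeholder instance ID)
# ec2.associate_address(AllocationId=eip["AllocationId"],
#                       InstanceId="i-0123456789abcdef0")

# Remember: an Elastic IP that isn't associated with anything accrues a charge
```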
AWS defines a subnet as a range of IP addresses in your VPC. You can launch AWS resources into a
subnet that you select. You can use a public subnet for resources that must be connected to the internet and a private
subnet for resources that won't be connected to the internet. The netmask for a default subnet in your VPC is always /20, which provides up to 4,096 addresses per subnet, a few of which are reserved for AWS use.
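As a quick sanity check on those numbers, Python's ipaddress module makes the CIDR arithmetic easy to verify (the five reserved addresses per subnet is standard AWS behaviour):

```python
import ipaddress

for prefix in ("10.0.0.0/16", "10.0.0.0/20", "10.0.1.0/24"):
    net = ipaddress.ip_network(prefix)
    total = net.num_addresses
    # AWS reserves the first four addresses and the last one in every subnet
    usable = total - 5
    print(f"{prefix}: {total} addresses, {usable} usable in an AWS subnet")
```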
A VPC can span multiple availability zones, but a subnet is always mapped to a single availability zone. This is
important to know. So, here's our basic diagram, which we're now going to start adding to. So, we can see the virtual
private cloud, and you can see the availability zones. And now, inside each availability zone, we've created a subnet.
Now, you won't be able to launch any instances unless there are subnets in your VPC. So, it's good to spread them
across availability zones for redundancy and failover purposes. There are two different types of
subnet, public and private. You use a public subnet for resources that must be connected to the internet, for example,
web servers. A public subnet is made public because the main route table sends the subnet's traffic that is
destined for the internet to the internet gateway. And we'll touch on internet gateways next. Private subnets
are for resources that don't need an internet connection or that you want to protect from the internet. For example,
database instances. So, in this demonstration, we're going to create some subnets, a
public and a private subnet, and we're going to put them in our custom VPC in different availability
zones. So, we'll head to networking and VPC. Wait for the VPC dashboard to load
up. We'll click on subnets, then go to create subnet. And I'm going to give the subnet a name. It's good to give them meaningful names, so I'm going to call this first one, the public subnet, 10.0.1.0 us-east-1b simply learn public. It's quite a long name, I understand, but at least it makes it clear what's going on in this example. So we need to choose a VPC, and we obviously want to put it in our SimplyLearn VPC. And I said I wanted to put it in us-east-1b; I'm using the North Virginia region, by the way. So, we click on
that. Then, we need to give it the CIDR block. Now, as I mentioned earlier when I typed in the name, that's the range I want to use. And then we need to give it the prefix length, and we're going to go with /24, which gives us 251 usable addresses in this range, which is obviously going to be more than enough. If I try to put in a value that's unacceptable to Amazon, it's going to give me an error and tell me not to do that. Let's go back to /24. I'm going to copy and paste this, by the way, just because I need to type something very similar for the next one. Click create. It takes a few seconds. Okay, so there's our new
subnet. And if I just widen this, you can see that's the IP range, that's the availability zone, it's for simply learn, and it's public. So now we want to create the private subnet. Put the name in: I'm going to give the private subnet the IP address block 10.0.2.0, I'm going to put this one in us-east-1c, and it's going to be the private subnet. Obviously, I want it to be in the same VPC, in the availability zone us-east-1c, and we're going to give it 10.0.2.0/24, and we'll click yes, create. Again, it takes a few seconds. Okay. So, we sort by name, and there we are. We can see now we've got our private subnet and our public subnet. In fact, let me just type in simply learn. There we are. So, now you can see them both there, and you can see they're both in the same VPC, simply learn VPC. Now, if we go down to the bottom, you can see the route table associated with these subnets, and you can see that they can communicate with each other internally, but there's no internet access. So that's what we need to do next. In the next lesson, you're going to learn about internet gateways and how we can make these subnets have internet access.
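Scripted, the two subnets from this demo would look roughly like the following boto3 sketch (the VPC ID is a placeholder, and the AZ names assume the North Virginia region):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"  # placeholder: the simply learn VPC's ID

subnets = [
    ("simply-learn-public",  "10.0.1.0/24", "us-east-1b"),
    ("simply-learn-private", "10.0.2.0/24", "us-east-1c"),
]

for name, cidr, az in subnets:
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr,
                               AvailabilityZone=az)["Subnet"]
    ec2.create_tags(Resources=[subnet["SubnetId"]],
                    Tags=[{"Key": "Name", "Value": name}])
    print(name, "->", subnet["SubnetId"])
```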
Welcome to the networking section. In this section, we're going to take a look at internet gateways, route tables, and NAT devices, and we'll have a demonstration of how to create each of these AWS VPC items. So, to allow your VPC the ability to
connect to the internet, you need to attach an internet gateway. And you can only attach one internet gateway per
VPC. So attaching an internet gateway is the first stage in permitting internet access to instances in your VPC. Now
here's our diagram again. And now we've added the internet gateway which is providing the connection to the internet
to your VPC. But before internet access is configured correctly, there are a couple more steps. For an EC2 instance to be internet connected, you have to adhere to the following rules. Firstly, you have to attach an internet gateway to your VPC, which we just discussed. Then you need to ensure that your instances have public IP addresses or elastic IP addresses so they're able to connect to the internet. Then you need to ensure that your subnet's route table points to the internet gateway. And you need to ensure that your network ACL and security group rules allow the relevant traffic to flow to and from your instance. So you need the rules to let in the traffic you want, for example, HTTP traffic. After the demonstration for this section, we're going to look at how route tables, access control lists, and security groups are used.
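The second rule, making sure instances get public IPs, can also be handled at the subnet level; a hedged boto3 sketch (the subnet ID is a placeholder). The internet gateway and route table rules are sketched after their own demos below.

```python
import boto3

ec2 = boto3.client("ec2")

# Make every instance launched into the public subnet receive a public IP
ec2.modify_subnet_attribute(
    SubnetId="subnet-0123456789abcdef0",   # placeholder: public subnet ID
    MapPublicIpOnLaunch={"Value": True},
)
```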
In this demonstration, we're going to create an internet gateway and attach it to our custom
VPC. So, let's go to networking, bring up the VPC dashboard, and on the left hand side, we click on internet
gateways. So, here are a couple of internet gateways I have already, but I need to create a new one. So, create internet gateway. I'll give it a name, which is going to be simply learn IGW, and I'm going to click create. So this is an internet gateway which will connect a VPC to the internet, because at the moment our custom VPC has no internet access. There it is, created: simply learn IGW. But its state is detached, because it's not attached to anything. So let me attach it to a VPC, and it gives me the option of all the VPCs that have no internet gateway attached to them currently. I only have one, which is the simply learn VPC. Yes, attach. Now you can see our VPC has an internet gateway attached, and you can see that down here. So let's click on that, and it will take us to our VPC. But before any instances in our VPC can access the internet, we need to ensure that our subnet's route table points to the internet gateway. And we don't want to change the main route table; we want to create a custom route table. And that's what you're going to learn about next.
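For completeness, here is roughly the same create-and-attach step in boto3 (a sketch; the VPC ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder: the simply learn VPC's ID

# Create the internet gateway and give it a Name tag
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.create_tags(Resources=[igw["InternetGatewayId"]],
                Tags=[{"Key": "Name", "Value": "simply-learn-IGW"}])

# Attach it to the VPC; a VPC can have at most one internet gateway
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                            VpcId=vpc_id)
```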
A route table determines where network traffic is directed. It does this by defining a set of rules. Every subnet has to be associated with a route table, and a subnet can only be associated with one route table. However, multiple subnets can be associated with the same route table. Every VPC has a default route table, and it's good practice to leave this in its original state and create a new route table to customize the network traffic routes associated with your VPC. So here's our example, and we've added two route tables: the main route table and the custom route table. The new route table, the custom one, will tell the internet gateway to direct internet traffic to the public subnet. But the private subnet is still associated with the default route table, the main route table, which does not allow internet traffic to it. All traffic inside the private subnet just remains local. In this demonstration, we're going to
create a custom route table, associate it with our internet gateway, and associate our public subnet with it. So, let's go to networking and VPC. The dashboard will load, and we're going to go to route tables. Now, our VPC only has its main route table at the moment, the default one it was given at the time it was created. So, we want to create a new route table, and we want to give it a name. We're going to call it simply learn RTB, short for route table. And then we get to pick which VPC we want to put it in, so obviously we want to use simply learn VPC. We click create. It should take a couple of seconds. And here you are, here's our new route table. So what we need to do now is change its routes so that it points to the internet gateway. If we go down here to routes, at the minute you can see it's just like our main route table; it just has local access. So we want to click on edit and add another route. The destination is the internet, which is 0.0.0.0/0, and for the target, when we click on this, it gives us the option of our internet gateway, which is what we want. So now we have an internet route in this route table, and we click on save. Save was successful. So now we can see that, as well as local access, we have internet access. Now, at the moment, if we click on subnet associations, we do not have any subnet associations. So basically both our subnets, the public and private subnets, are associated with the main route table, which doesn't have internet access. We want to change this. So we'll click on edit, and we want our public subnet to be associated with this route table. Click on save. So it's just saving that. Now we can see that our public subnet is associated with this route table, and this route table is associated with the internet gateway. So now anything we launch into the public subnet will have internet access. But what if we wanted our instances in the private subnet to have internet access? Well, there's a way of doing that with a NAT device, and that's what we're going to look at in the next lecture.
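Before moving on, the custom route table steps from this demo can also be scripted; a rough boto3 sketch (the VPC, internet gateway, and subnet IDs are placeholders carried over from the earlier sketches):

```python
import boto3

ec2 = boto3.client("ec2")

vpc_id    = "vpc-0123456789abcdef0"     # placeholder IDs from the earlier steps
igw_id    = "igw-0123456789abcdef0"
public_sn = "subnet-0123456789abcdef0"

# Create the custom route table inside the VPC
rtb = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
ec2.create_tags(Resources=[rtb["RouteTableId"]],
                Tags=[{"Key": "Name", "Value": "simply-learn-RTB"}])

# Send all internet-bound traffic (0.0.0.0/0) to the internet gateway
ec2.create_route(RouteTableId=rtb["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)

# Associate the public subnet with this route table
ec2.associate_route_table(RouteTableId=rtb["RouteTableId"],
                          SubnetId=public_sn)
```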
You can use a NAT device to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating connections with the instances in the private subnet. So we talked earlier about using public and private subnets to protect your assets from being directly connected to the internet. For example, your web server would sit in the public subnet and your database in the private subnet, which has no internet connectivity. However, your private subnet database instance might still need internet access or the ability to connect to other AWS resources. If so, you can use a network address translation device, or NAT device, to do this. A NAT device forwards traffic from your private subnet to the internet or other AWS services and then sends the response back to the instances. When traffic goes to the internet, the source IP address of your instance is replaced with the NAT device's address. And when the internet traffic comes back again, the NAT device translates the address back to your instance's private IP address. So here's our diagram, which is getting ever more complicated. And if you look in the public subnet, you can see we've now added a NAT device. And you have to put NAT devices in the public subnet so that they get internet connectivity. AWS provides two kinds of NAT devices, a NAT gateway and a NAT instance. AWS recommends a NAT gateway, as it's a managed service that provides better availability and bandwidth than a NAT instance. Each NAT gateway is created in a specific availability zone and is implemented with redundancy in that zone. A NAT instance is launched from a NAT AMI, an Amazon machine image, and runs as an instance in your VPC. So, it's something else you have to look after, whereas a NAT gateway, being a fully managed service, means once it's installed, you can pretty much forget about it. A NAT gateway must be launched into a public subnet because it needs internet connectivity. It also needs an elastic IP address, which you can select at the time of launch. Once created, you need to update the route table associated with your private subnet to point internet-bound traffic to the NAT gateway. This way, the instances in your private subnets can communicate with the internet. So if you remember back to the diagram, we had the custom route table pointed to the internet gateway. Now we're pointing our main route table to the NAT gateway so that the private subnet also gets internet access, but in a more secure manner. Welcome to the create a NAT gateway demonstration, where we're going to create a NAT gateway so that the instances in our private subnet can get internet access.
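Before clicking through the console, this is roughly the scripted equivalent of what the demo does (a boto3 sketch; the subnet, allocation, and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

public_sn   = "subnet-0123456789abcdef0"   # placeholder: the public subnet
eip_alloc   = "eipalloc-0123456789abcdef0" # placeholder: the Elastic IP from earlier
main_rtb_id = "rtb-0123456789abcdef0"      # placeholder: the VPC's main route table

# The NAT gateway lives in the public subnet and uses the Elastic IP
nat = ec2.create_nat_gateway(SubnetId=public_sn,
                             AllocationId=eip_alloc)["NatGateway"]
ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[nat["NatGatewayId"]])

# Point internet-bound traffic from the private subnet (main route table)
# at the NAT gateway
ec2.create_route(RouteTableId=main_rtb_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=nat["NatGatewayId"])
```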
So we'll start by going to networking and VPC. And the first thing we're going to
do is take a look at our subnets. And you'll see why shortly. So here are our simply learn subnets. So this is the
private subnet that we want to give internet access. But if you remember from the section, NAT gateways need to
be placed in public subnets. So I'm just going to copy the name of this subnet ID for the public subnet and you'll see
why in a moment. So then we go to NAT gateways on the left hand side and we want to create a new NAT
gateway. So we have to put a subnet in there. So we want to choose our public subnet. Now as you can see it truncates
a lot of the subnet names on this option. So it's a bit confusing. So, we know that we want to put it in our
SimplyLearn VPC in the public subnet, but you can see it's truncated. So, it's actually this one at the bottom. But
what I'm going to do is just paste in the subnet ID which I copied earlier so there's no
confusion. Then we need to give it an elastic IP address. Now, if you remember from the earlier demonstration, we
created one. So, let's select that. But if you hadn't allocated one, you could click on the create new EIP button.
So we'll do that. Okay. So it's telling me my NAT gateway has been created, and that in order to use the NAT gateway, I should edit my route table to include a route with a target of our NAT gateway ID. So it's given us the option to click on edit route tables, so we'll go straight there. Now here are our route tables. Here's the custom route table that we created earlier, and this is the default, the main route table, which was created when we created our VPC. We should probably give this a name so that we know what it is, so let me just call this simply learn RTB main. So now we know that's our main route table. If you take a look at the main route table and the subnet associations, you can see that our private subnet is associated with this table. So what we need to do is put a route in here that points to the NAT gateway. If we click on routes and edit, we want to add another route, and we want to say that all traffic can either go to the simply learn internet gateway, which we don't want to do, or we want to point it to our NAT gateway, which is this NAT gateway ID here, and we click save. So now any instances launched in our private subnet will be able to get internet access via our NAT
gateway. Welcome to the using security groups and network ACL section. In this section, we're going to take a look at
security groups and network ACLs, and we're going to have a demonstration on how you create both of these items in
the Amazon Web Services console. A security group acts as a virtual firewall that controls the
traffic for one or more instances. You add rules to each security group that allow traffic to or from its associated
instances. Basically, a security group controls the inbound and outbound traffic for one or more EC2
instances. Security groups can be found on both the EC2 and VPC dashboards in the AWS web management console. We're
going to cover them here in this section, and you'll see them crop up again in the EC2
lesson. And here is our diagram, and you can see we've now added security groups to it. And you can see that EC2
instances are sitting inside the security groups and the security groups will control what traffic flows in and
out. So let's take a look at some examples and we'll start with a security group for a web server. Now obviously a
web server needs HTTP and HTTPS traffic as a minimum to be accessible. So here is an example of the security group table, and you can see we're allowing HTTP and HTTPS, the ports associated with those two protocols, and for the sources we're allowing traffic from the internet. We're basically allowing all traffic to those ports, and that means any other traffic that comes in on different ports would be unable to reach the security group and the instances inside it. Let's take a look at an example of a database server security group. Now
imagine you have a SQL Server database. Then you would need to open up the SQL Server port so that people can access it, which is port 1433 by default. So we've added that to the table, and we've allowed the source to come from the internet. Now, because it's a Windows machine, you might want RDP access so you can log on and do some administration, so we've also added RDP access to the security group. Now, you could leave it open to the internet, but that would mean anyone could try to hack their way into your box. So, in this example, we've added a source IP range of 10.0.0.0, so only addresses from that range can RDP to the
instance. Now, there are a few rules associated with security groups. By default, security groups allow all outbound traffic. So if you want to tighten that down, you can do so in a similar way to how you define the inbound traffic. Security group rules are always permissive; you can't create rules that deny access. So you're
allowing access rather than denying it. Security groups are stateful. So if you send a request from your instance, the
response traffic for that request is allowed to flow in regardless of the inbound security group rules. And you
can modify the rules of a security group at any time and the rules are applied immediately. Welcome to the create
security group demonstration, where we're going to create two security groups: one to host DB servers and one to host web
servers. Now, if you remember from the best practices section, it said it was always a good idea to tier your
applications into security groups. And that's exactly what we're going to do. So if we go to networking and VPC to
bring up the VPC dashboard on the left hand side under security we click on security groups.
Now you can also get to security groups from the EC2 dashboard as well. So, here's a list of my existing security
groups, but we want to create a new security group. And we're going to call it simply
learn web server SG security group. And we'll give the group name as the same. And our
description is going to be simply learn web servers security groups.
Okay. And then we need to select our VPC. Now, it defaults to the default VPC, but obviously we want to put it in
our SimplyLearn VPC. So, we click yes, create. Takes a couple of seconds, and there it is. There's our
new security group. Now, if we go down to the rules, the inbound rules, you can see there are none. So, by default, a
new security group has no inbound rules. But what about outbound rules? If you remember from the
lesson, a new security group by default allows all traffic to be outbound. And there you are. All traffic has
destination of everywhere. So all traffic is allowed. But we want to add some rules. So let's click on inbound
rules. Click on edit. Now this is going to be a web server. So if we click on the dropdown, we need to give it HTTP. So
you can either choose custom TCP rule and type in your own port ranges or you can just use the ones they have for you.
So HTTP this pre-populates the port range. And then here you can add the source.
Now if I click on it, it's given me the option of saying allow access from different security groups. So you could
create a security group and say I only accept traffic from a different security group, which is a nice way of securing
things down. You could also put in here just your IP address so that only you could do HTTP requests to the instance.
But because it's a web server, we want people to be able to see our website. Otherwise, it's not going
to be much use. So, we're going to say all traffic. So, all source traffic can access our instance on HTTP port 80.
I want to add another rule because we also want to do HTTPS which is hiding from
me. There we are. And again, we want to do the same. And also because this is going to be a Linux instance, we want to
be able to connect to the Linux instance to do some work and configuration. So, we need to give it SSH access. And
again, it would be good practice to lock it down to your specific IP or an IP range, but we're just going to allow all
for now. Then we click on save. And there we are. There we have our ranges. So now we want to create our
security group for our DB servers. So let's click create security group. And then we'll go through and give it a
similar name. Simply learn DB servers SG. And the description is going to be
SimplyLearn DB servers security group. And our VPC is obviously going to be SimplyLearn VPC. So let's click yes
create. Wait a few seconds. And here's our new security group. As you can see, it has no inbound
rules by default and outbound rules allow all traffic. So this is going to be a SQL Server
database server and so we need to allow SQL Server traffic into the instance. So we need to give it Microsoft SQL port
access. Now the default port for Microsoft SQL Server is 1433. Now in reality I'd probably change
the port the SQL Server was running on to make it more secure but we'll go with this for now. And then the source. So we
could choose the IP ranges again, but what we want to do is place the DB server in the private subnet and allow the traffic to come from the web server. So the web server will accept traffic, and the web server will then go to the database to get the information it needs to display on the website, or, if people are entering information into the website, we want that information to be stored in our DB server. So basically we want to say that the DB servers can only accept SQL Server traffic from the web server security group. So we can select the simply learn web server security group as the source of the Microsoft SQL Server traffic, and we'll select that. Now, our SQL Server is obviously going to be a Windows instance, so from time to time we might need to log in and configure it, so we want to give it RDP access. Again, you would probably put a specific IP range in there; we're just going to allow all traffic for now. Then, we click save. And there we are. So, now we have two security groups, DB servers and web servers.
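Scripted, the two tiered security groups from this demo look roughly like the following boto3 sketch (the VPC ID is a placeholder; in practice you'd narrow the SSH source rather than open it to the world):

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder: the simply learn VPC's ID

# Web tier: HTTP, HTTPS and SSH open to all sources (SSH should really be restricted)
web_sg = ec2.create_security_group(GroupName="simply-learn-web-server-SG",
                                   Description="simply learn web servers",
                                   VpcId=vpc_id)["GroupId"]
for port in (80, 443, 22):
    ec2.authorize_security_group_ingress(
        GroupId=web_sg,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

# DB tier: SQL Server (1433) allowed only from the web tier's security group
db_sg = ec2.create_security_group(GroupName="simply-learn-db-server-SG",
                                  Description="simply learn DB servers",
                                  VpcId=vpc_id)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
                    "UserIdGroupPairs": [{"GroupId": web_sg}]}])
```

Referencing the web tier's security group as the source, rather than an IP range, is the same trick used in the console demo.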
A network ACL is a network access control list, and it's an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more of your subnets. You might set up a network ACL with rules similar to your security
groups in order to add an additional layer of security to your VPC. Here is our network diagram and we've added
network ACLs to the mix. Now you can see they sit somewhere between the route tables and the subnets. This diagram makes it a little bit clearer, and you can see that a network ACL sits in between a route table
and a subnet. And also you can see an example of the default network ACL which is configured to allow all
traffic to flow in and out of the subnets to which it's associated. Each network ACL includes a rule whose rule
number is an asterisk. This rule ensures that if a packet doesn't match any of the other numbered rules, it's denied.
You can't modify or remove this rule. So if you take a look at this table, you can see on the inbound some traffic
would come in and it would look for the first rule which is 100. And that's saying I'm allowing all traffic from all
sources. So that's fine. The traffic comes in. If that rule 100 wasn't there, it would go to the asterisk rule. And
the asterisk rule says traffic from all sources is denied. Let's take a look at the network ACL rules. Each subnet in your VPC must be associated with an ACL. If you don't assign it to a custom ACL, it will automatically be associated with your default ACL. A subnet can only be associated with one ACL; however, an ACL can be associated with multiple subnets. An ACL contains a list of numbered rules which are evaluated in order, starting with the lowest. As soon as a rule matches traffic, it's applied regardless of any higher-numbered rules that may contradict it. AWS recommends incrementing your rule numbers by 100, so there's plenty of room to add new rules at a later date. Unlike security groups, ACLs are stateless: responses to allowed inbound traffic are subject to the rules for outbound traffic.
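As an illustration of the numbered-rule model, here's a hedged boto3 sketch that adds an inbound HTTP allow rule to an existing network ACL (the ACL ID is a placeholder; protocol "6" means TCP):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder network ACL ID
    RuleNumber=200,                        # leaves room below for earlier rules
    Egress=False,                          # inbound rule
    Protocol="6",                          # TCP
    RuleAction="allow",
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 80, "To": 80},
)
```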
Welcome to the network ACL demonstration, where we're just going to have an overview of ACLs and where they are in the dashboard. Now, you don't need to know a huge amount about them for the exam; you just need to know how they work and where they are. So let's go to networking and VPC, and when the dashboard loads, on the left hand side under security there's network ACLs. Let's click on that. Now, you can see some ACLs that are in my AWS account. We want the one that's associated with our simply learn VPC, so if we extend this VPC column, that's our network ACL there, simply learn VPC. Now, let's give it a name, because it's not very clear to see otherwise. Also, I'm kind of an obsessive tagger. So, let's call it simply learn ACL and click on the tick. So, there we are. Now it's much easier to see. So, we click on inbound
rules. So, this is exactly what we showed you in the lesson. The rule is 100. So, that's the first rule that's
going to get evaluated and it's saying allow all traffic from all sources. And the outbound rules are the same.
So if you wanted to tighten things down with a new rule, you could click edit. We would give it a new rule number, say 200, because you should always increment them by 100. That means if you had 99 more rules you needed to put in place, you'd have space to put them in between these two. And then you could do whatever you wanted. You could say, we are allowing HTTP access from all sources, or you could say, actually, you know what, we're going to deny it. So this is the way of blacklisting traffic into your VPC. Now, I'm not going to save that because we don't need it. But this is where network ACLs sit, and this is where you
would make any changes. It's also worth having a look at the subnet associations with your ACL. So we have two subnets in
our SimplyLearn VPC. So we would expect to see both of them associated with this network ACL because it's the
default. And there they are: both our public and our private subnets are associated. And you can also see up here on the dashboard it says default. So this is telling us this is our default
ACL. If you did want to create a new network ACL, you would click create network ACL. You'd give it a name. Let's
just say new ACL. And then you would associate it with your VPC, so we would say simply learn VPC. It takes a few seconds, and there we are. There we have our new one. Now you can see this one says default: no, because it
obviously isn't the default ACL for our simply learn VPC and it has no subnets associated
with it. So let's just delete that because we don't need it. But there you are. There's a very brief overview of
network ACLs. Welcome to the Amazon VPC best practices and costs section, where we're going to take a
look at the best practices and the costs associated with the Amazon virtual private
cloud. Always use public and private subnets. You should use private subnets to secure resources that don't need to
be available to the internet such as database services. To provide secure internet access to the instances that
reside in your private subnets, you should provide a NAT device. When using NAT devices, you
should use a NAT gateway over a NAT instance, because it's a managed service and requires less administration effort. You should choose your CIDR blocks carefully. An Amazon VPC can contain from 16 to 65,536 IP addresses, so you should choose your CIDR block according to how many instances you think you'll need.
You should also create separate Amazon VPCs for development, staging, test, and production or create one Amazon VPC with
separate subnets with a subnet each for production, development, staging, and test. You should understand the Amazon
VPC limits. There are various limitations on the VPC components. For example, you're allowed five VPCs per
region, 200 subnets per VPC, 200 route tables per VPC, 500 security groups per VPC, and 50 inbound and outbound rules per security group. However, some of these limits can be increased by raising a ticket with AWS support. You should use security groups and network ACLs to secure the traffic
coming in and out of your VPC. Amazon advises using security groups for whitelisting traffic and network ACLs
for blacklisting traffic. Amazon recommends tiering your security groups. You should create
different security groups for different tiers of your infrastructure architecture inside VPC. If you have web
tiers and DB tiers, you should create different security groups for each of them.
Creating tier-wise security groups will increase the infrastructure security inside the Amazon VPC. So if you launch all your web servers in the web server security group, that means they'll automatically all have HTTP and HTTPS open. Similarly, the database security group will have the SQL Server port already open. You should also standardize your
security group naming conventions. Following a security group naming convention allows Amazon VPC operation
and management for large-scale deployments to become much easier. Always span your Amazon VPC
across multiple subnets in multiple availability zones inside a region. This helps in architecting high availability
inside your VPC. If you choose to create a hardware VPN connection to your VPC using virtual
private gateway, you are charged for each VPN connection hour that your VPN connection is provisioned and available.
Each partial VPN connection hour consumed is billed as a full hour. You'll also incur standard AWS data transfer
charges for all data transferred via the VPN connection. If you choose to create a
net gateway in your VPC, you are charged for each NAT gateway hour that your NAT gateway is provisioned and available.
Data processing charges apply for each gigabyte processed through the NAT gateway. Each partial NAT gateway hour
consumed is billed as a full hour. This is the practice assignment for designing a custom VPC, where you'll
create a custom VPC using the concepts learned in this lesson. Using the concepts learned in this
lesson, recreate the custom VPC as shown in the demonstrations. The VPC name should be
simply learn VPC. The CIDR block should be 10.0.0.0/16. There should be two subnets, one public with a range of
10.0.1.0 and one private with a range of 10.0.2.0. And they should be placed in separate availability zones.
There should be one internet gateway and one NAT gateway, and also one custom route table for the public subnet. Also create
two security groups. Simply learn web server security group and simply learn DB server security
group. So let's review the key takeaways from this lesson. Amazon virtual private cloud or VPC
enables you to launch AWS resources into a virtual network that you've defined. This virtual network closely resembles a
traditional network that you'd operate in your own data center, but with the benefits of using scalable
infrastructure of AWS. There are three types of IP address in
AWS. A private IP address. This is an IP address that's not reachable over the internet and it's used for communication
between instances in the same network. A public IP address is reachable from the internet which you can use for
communication between your instances and the internet. And there's an elastic IP address. This is a static, persistent public IP address that stays with your account across instance stops and starts, whereas a public IP address is released and reassigned after each stop and start. Amazon defines a subnet as a range of IP addresses in your VPC. You can launch
AWS resources into a subnet that you select and a subnet is always mapped to a single availability zone. Use a public
subnet for resources that must be connected to the internet and a private subnet for resources that won't be
connected to the internet. To allow your VPC the ability to connect to the internet, you need to
attach an internet gateway to it, and you can only attach one internet gateway per
VPC. A route table determines where network traffic is directed. It does this by defining a set of rules. Every subnet has to be associated with a route table, and a subnet can only have an association with one route table. However, multiple subnets can be associated with the same route table. And you can use a NAT device to enable
instances in a private subnet to connect to the internet or other AWS services. But a NAT device will prevent the
internet from initiating connections with instances inside your private subnet. A security group acts as a
virtual firewall that controls the traffic for one or more instances. You add rules to each security group that
allow traffic to or from its associated instances. A network access control list or network ACL is an optional layer of
security for your VPC that acts as a firewall for controlling traffic in and out of one or more of your subnets.
Today's session is on AWS SageMaker. Let's look into what we have in our today's session. So what's in it for
you? We will be covering what AWS is, why we need AWS SageMaker, what the AWS SageMaker service is, the benefits of using AWS SageMaker, machine learning with AWS SageMaker, how to train a model with AWS SageMaker, how to validate a model with AWS SageMaker, and the companies that are using AWS SageMaker. Along with that, we will be
covering one live demo on the AWS platform. Now let's understand what AWS is. So what is AWS? It's Amazon Web Services, one of the largest and most widely used public cloud platforms, offered by Amazon. It provides services over the internet. AWS services can be used to build, monitor and deploy any type of application in the cloud. AWS also uses a subscription pricing model, which means you only pay for the services you use. Now, why do we need AWS SageMaker? Let's look into it. So let's consider the example of one company, which is
ProQuest. ProQuest is a global information content and technology company that provides valuable content such as ebooks, newspapers, etc. to users. Before AWS SageMaker, ProQuest's requirement was to have a better user experience and maximally relevant search results. After adopting AWS SageMaker, they were able to achieve those results: they achieved a more appealing video user experience and more relevant search results for users. Now, what do we mean by AWS SageMaker, and why is this service used? So, Amazon
SageMaker is a cloud machine learning platform that helps users in building, training, tuning and deploying machine
learning models in a production-ready hosted environment. So, it's kind of a machine learning service which is
already hosted on the AWS platform. Now, what are the benefits of using AWS SageMaker? The key benefits are: it reduces machine learning data costs, so you can do cost optimization while running this particular service on AWS. All ML components are stored in one place in a dashboard, so they can be managed together. It's highly scalable, so you can scale this particular service on the fly. It trains models quite fast. It maintains uptime, so you can be assured that your workloads will be running and available all the time. It offers high data security, since security is a major concern on cloud platforms. Along with that, you can transfer data to different AWS services, like S3 buckets, with simple data transfer
techniques. Now, machine learning with AWS SageMaker: let's look into it. Machine learning with AWS SageMaker is a three-step process: one is to build, the second is to test and tune the model, and the third is to deploy the model. For the build step, SageMaker provides more than 15 widely used ML algorithms for training purposes. To build a model, you can collect and prepare training data, or you can select it from an Amazon S3 bucket. Then choose and optimize the required algorithm; some of the algorithms you can select are k-means, linear regression, and logistic regression. SageMaker helps developers customize ML instances with the Jupyter notebook interface. In the test and tune step, you
have to set up and manage the environment for training. So you would need some sample data to train the
model. So train and tune a model with the Amazon SageMaker. SageMaker implements hyperparameter tuning by
adding a suitable combination of algorithm parameters. Also, it divides the training data and stores that in the
Amazon S3. S3 is a simple storage service which is primarily used for storing the objects and the data. Hence,
it is used for storing and recovering data over the internet. And below you can see that AWS SageMaker uses Amazon
S3 to store data as it's safe and secure. Also it divides the training data and stores in Amazon S3 where the
training algorithm code is stored in ECR. ECR stands for Elastic Container Registry, which is primarily used for containers and Docker images; ECR helps users save, monitor and deploy Docker containers. Later, SageMaker sets up a cluster for the input data, trains it, and stores the result in Amazon S3 itself. So this is done by SageMaker itself. After that, you need to deploy it. If you want to predict on a limited amount of data at a time, you use Amazon SageMaker hosting services for that. But if you want to get predictions for an entire data set, prefer using Amazon SageMaker
batch transform. Now, the last step is to deploy the model. Once tuning is done, models can be deployed to SageMaker endpoints, and at the endpoint, real-time prediction is performed. So you would have some data which you would reserve to validate whether your model is working correctly or not. Then evaluate your model and determine
whether you have achieved your business goals. Now the other aspect is how we can train a model with AWS
SageMaker. So this is basically a flow diagram which shows you how to train a model with the AWS SageMaker. And here
we have used a couple of AWS services to get that done. So model training in AWS SageMaker
is done on machine learning compute instances. And here we can see there are two machine learning compute instances
used as helper code and the training code. Along with that we are using two S3 buckets and the ECR for the container
registry. Now let's look into what are the ways to train the model as per the slides.
So here in the diagram you can see the following requirements to train a model: the URL of the Amazon S3 bucket where the training data is stored; the compute resources, that is, the machine learning compute instances; the URL of the Amazon S3 bucket where the output will be stored; and the path of the AWS Elastic Container Registry where the training code is saved.
The inference code image lies in the elastic container registry. Now, what are these called? These are called training jobs. When a user trains a model in Amazon SageMaker, he or she creates a training job. So we need to first create a training job, and then the input data is fetched from the specified Amazon S3 bucket. Once the training job is built, Amazon SageMaker launches the ML compute instances. These compute instances are launched once the training job is built. It then trains the model with the training code and the data set, and it stores the output and model artifacts in the AWS S3 bucket. This is done automatically. Here, the helper code performs a task when the training code fails. The inference code, which is in the elastic container registry, consists of multiple linear sequence containers that process the requests for inference on data. The EC2 Container Registry is a container registry that helps users save, monitor and deploy container images, whereas container images are the ready applications. Once the data is trained, the output is stored in the specified Amazon S3 bucket. So here you can see the output will be stored here. To prevent your algorithm output from being lost, the data is saved in Amazon S3 rather than only on the ML compute instances. Now, how to
validate a model? Let's look into it. You can evaluate your model offline, using historical data. So the first thing is that you can do offline testing to validate a model. You can also do online testing with live data; if you have live data or real-time streams coming in, you can validate a model from there as well. You
can validate using a hold out set and also you can validate using the k-fold validation. Now use historical data to
send requests to the model through the Jupyter notebook in Amazon SageMaker for the evaluation. Online testing with live
data deploys multiple models into the endpoints of Amazon SageMaker and directs live traffic to the model for
validation. In hold-out validation, part of the data is set aside; this portion is called the hold-out set. This data is not used for model training. Later, the model is trained with the remaining input data and generalizes based on what it has learned. The data which was left out is then used for validating the model, because we have not
used that data while training the model. In k-fold validation, the input data is split into k parts: one fold is held out as the validation data for testing the model, and the other k minus 1 folds are used as the training data, rotating through the folds. Based on the input data, the machine learning model then evaluates the final output.
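As a small illustration of the k-fold idea (this uses scikit-learn, which is an assumption on my part; it isn't part of the SageMaker demo that follows):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)   # 10 toy samples, 2 features each
kf = KFold(n_splits=5, shuffle=True, random_state=42)

for fold, (train_idx, val_idx) in enumerate(kf.split(X), start=1):
    # k-1 folds train the model, the remaining fold validates it
    print(f"fold {fold}: train on {len(train_idx)} samples, "
          f"validate on {len(val_idx)} samples")
```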
Now, for the companies that are using AWS SageMaker: you may know of ADP, Zalando, Dow Jones (of stock market fame), ProQuest, and Intuit. Now let's look into the demo of how we can actually
run the AWS SageMaker. So we'll use the R algorithm and then package the algorithm as a container for building,
training and deploying a model. We are going to use the Jupyter notebook for model building, model training, and model deployment. And the code for the demo is at the link below, so you can see here that from this link
you can get the code for the demo. Let's try to do a demo on AWS. Now, I would be using a tutorial provided by Amazon to build, train and deploy the machine learning model on SageMaker, as you can see on my screen, and in this tutorial you have some steps where you can put the Python code into your AWS SageMaker JupyterLab. So in this tutorial, you will learn how to use Amazon SageMaker to build, train and deploy a machine learning model, and for that we will use the popular XGBoost ML algorithm for this exercise.
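The first code cells of that tutorial boil down to something like the following sketch (SageMaker Python SDK v2 names are used here; the bucket name and region handling are assumptions):

```python
import boto3
import sagemaker

# Session, execution role, and region for the notebook instance
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name

# An S3 bucket to hold the training data and model artifacts
bucket = "sagemaker-demo9876"  # must be globally unique; this name is a placeholder
s3 = boto3.client("s3")
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket)
else:
    s3.create_bucket(Bucket=bucket,
                     CreateBucketConfiguration={"LocationConstraint": region})
print("Using bucket:", bucket)
```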
So first of all what you need to do is you have to go to the AWS console and there you have to create a notebook
instance. So in this tutorial you will be creating a notebook instance. You will prepare the data, train the model
to learn from the data, deploy the model, and evaluate your ML model's performance, and once all those activities are done, we'll see how we can remove all the resources in order to prevent extra costs. Now, the first step is to go to the Amazon SageMaker console. Here you can see I'm already logged in to the SageMaker console. You can click on services and search for SageMaker
here. And here you get the Amazon SageMaker service. Now the next step
is that we have to create a notebook instance. So we will select the notebook instance from the SageMaker service, and then, after the notebook instance is selected, we'll put a name on our instance and we'll create a new IAM role for that. So let's wait for the SageMaker studio to open. Here you can see the studio is open, and you just have to click on notebook instances, and here you have to create a notebook instance. Here you can see a couple of notebook instances have already been created; one of them is in service. So this is the notebook
instance that we are going to use for creating the demo model. I'll show you how you can create a notebook instance. You just have to click on the create notebook instance button and put in your notebook instance name. You can put something like demo-sagemaker-987, or you can put it as model. We'll go with the default notebook instance type, which is ml.t2.medium, and in permissions and encryption, under the IAM role, we'll click on create a new IAM role. Why are we creating a new IAM role? So that we can allow SageMaker to access any S3 bucket that has been created on our account. Just click on create role, and here you would see that the new IAM role will be created with a set of permissions. The rest of the settings we'll keep as default, and then you just have to click on create notebook instance. The notebook instance creation takes some time, so you just have to wait for a couple of minutes to get it
in service. We already have one notebook instance that has been created, so we will be using that to create the demo. Now, going back to the steps: these are the steps that we have already performed. Once the notebook instance is created, we have to prepare the data. In this step, we will be using the SageMaker notebook instance to pre-process the data that is required to train the machine learning model. And for that we
would be opening up the Jupyter notebook, and then we have to select a kernel environment in the Jupyter notebook, which would be Python 3. So let's follow these steps. Go back to SageMaker, click on notebook instances, select the running notebook instance, and here you would select open JupyterLab. Now, here you would see that SageMaker tries to open up the Jupyter notebook, and we would be entering all our inputs into that Jupyter notebook and executing the code there itself. So just wait for the notebook to open. Here you can see the JupyterLab notebook has opened. I would be selecting one of the notebooks that has been created, so this one. Likewise, you can create your own notebook also. How can you do that? First of all, let me select the kernel environment; I would be selecting conda_python3 and just click on select. To create your own notebook, you just have to click on file, click on new, and here you can select notebook. Just name your notebook and select the Python 3 environment to run this demo. So I have my notebook open. In the cells I would be putting the Python
code and I would be executing that code to get the output directly. So the next step is to prepare the data, train the ML model and deploy it. We will need to import some libraries and define a few environment variables in the Jupyter notebook environment. So I would be copying this code, which you can see tries to import numpy and pandas; these are all required to run the Python code. Just copy this code and paste it into your notebook. Once you do that, execute your code. And here you can see that you get the output, which is that it has imported all the necessary libraries that have been defined in the code. Now, the next step is that we will create an S3 bucket
in the S3 service, and for that you have to copy this Python code, except that you have to edit it: you have to
specify the bucket name that you want to get created. So here I would provide the bucket name, which should be unique and should not overlap with an existing one. Something like sagemaker-demo is the name that I have selected for the bucket, and now you have to execute that code. It says that the S3 bucket has been created successfully with the name sagemaker-demo9876. This is something which you can verify: you can go to the S3 service and there you can verify whether the bucket has been created or not. Now the next task is that we need to download the data to the AWS SageMaker instance and load it into a data frame, and for that we have to follow this URL. This URL, which is the build, train and deploy machine learning model tutorial, has the data in the form of bank_clean.csv, and this will be downloaded onto our SageMaker instance. We'll copy this code, paste it here and execute the code. So it says that it has successfully downloaded bank_clean.csv, which has the data inside it, and that has been loaded into the SageMaker data frame
successfully. Now we have data to build and train our machine learning model. What we are going to do is shuffle the data and split it, one part into the training data set and the other into the test data set. For the training data set, we are going to use 70% of the customers listed in the CSV file, and the remaining 30% of the customers in the CSV file we'll be using as test data to evaluate the model. So we'll copy the following code into a new code cell, and then we are going to run that code cell. I'll just copy it, so that we can segregate the data: 70% for building the model and 30% for testing. Click on run execution, and here you can see that we got the output successfully. Now we have to train the
model from that data. So how are we going to train that model? For that, we'll use SageMaker's pre-built XGBoost model, which is an algorithm. You will need to reformat the header and first column of the training data and load the data from the S3 bucket. So what I'll do is copy this code and paste it in the next cell. It has the training data, and it can train the model. Click on run execution. Now it notes that the s3_input class has been renamed to TrainingInput, because now we are preparing the training input for the model with the training data. So we just have to wait for some time till it gets executed completely. Now, the next thing is that we need to set up the Amazon SageMaker session to create an instance of the XGBoost model. So here we are going to create the SageMaker session, and we are going to create an instance of the XGBoost model, which is an estimator. Just copy that code and paste it here. Execute it. And here you can see that it will start; it has basically noted that the parameter image_name has changed to image_uri in SageMaker Python SDK v2. Now we'll follow the next step: with the data loaded into the XGBoost estimator, we'll set up and train the model using gradient optimization. We'll copy the following code, and that would actually start the training of the model. This would start training the model using our input data, the 70% of the data that we have reserved for training the model. So just copy that, again initiate the execution, and it
will start the training job. Now we'll deploy the model, and for that I would copy the deploy code, put that in the cell and execute it.
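Put together, the training and deployment cells narrated above look roughly like this sketch (SDK v2 names; the instance types, hyperparameters, and S3 paths are assumptions based on the tutorial):

```python
import sagemaker
from sagemaker.inputs import TrainingInput

sess = sagemaker.Session()
role = sagemaker.get_execution_role()
bucket = "sagemaker-demo9876"   # placeholder bucket from the earlier step

# Built-in XGBoost container image for this region
container = sagemaker.image_uris.retrieve("xgboost",
                                           sess.boto_region_name,
                                           version="1.0-1")

xgb = sagemaker.estimator.Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m4.xlarge",
    output_path=f"s3://{bucket}/output",
    sagemaker_session=sess,
)
xgb.set_hyperparameters(objective="binary:logistic", num_round=100)

# The 70% split uploaded as train.csv in the earlier step (assumed path)
train_input = TrainingInput(s3_data=f"s3://{bucket}/train/train.csv",
                            content_type="csv")
xgb.fit({"train": train_input})

# Deploy to a real-time endpoint for predictions
predictor = xgb.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
```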
So it says the parameter image will be renamed to image_uri, and it is using an already existing model, because XGBoost was deployed already. If you have not done that, and you're doing it for the first time, it will initiate another XGBoost instance. To find where your XGBoost endpoints are created, you just have to scroll down, and here, under inference, click on endpoints, and you should find the XGBoost endpoints defined here. So here you can see that today I have created one XGBoost endpoint and that is now in the process of creating. So just refresh it. It is still creating; it is going to take some time to get that in service. Now our endpoint is in the in-service state, so now we can use it. Going forward with the next steps, we'll try to predict whether the customers in the test data enroll for the bank product or not. For that, we are going to copy this code, put it in the Jupyter cell, and execute it. Here it gives you the output that it has actually evaluated, and the same output we got in the screenshot of the demo as well. Now we are going to evaluate the model performance. What we are going to do is get the prediction done. Based on the prediction, we can conclude that we predicted whether a customer will enroll for a certificate of deposit accurately for 90% of the customers in the test data, with a precision of 65% for those who enrolled and 90% for those who didn't enroll. For that, we are going to copy this code and execute it here in the cell. If it is predicted correctly, that means our model is working absolutely fine. So here you can see the overall classification rate is 89.5%, which is the accurate prediction that has been made by the model, and that's the output we can see here in the screenshot of the model. So that means our model is working fine; it has been built, trained and deployed correctly. Now, the next thing is that once you are done with all that, you terminate your resources, and for that you just have to copy this code and put it in the cell so that the additional resources, the endpoints and the buckets that have been created by the Jupyter notebook are terminated and you won't incur extra costs. So just execute it, and here you would see that it would try to terminate all the additional resources that we have created from the Jupyter notebook.
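The clean-up cell does roughly the following (a sketch; the predictor object and bucket name come from the earlier steps and are assumptions here):

```python
import boto3

# Tear down the real-time endpoint so it stops accruing charges;
# 'predictor' is the object returned by xgb.deploy(...) in the earlier sketch
predictor.delete_endpoint()

# Empty (and optionally delete) the demo bucket created for the tutorial
bucket = boto3.resource("s3").Bucket("sagemaker-demo9876")  # placeholder name
bucket.objects.all().delete()
# bucket.delete()  # uncomment to remove the bucket itself
```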
Today's tutorial is on AWS CloudFront. Let's look into what we have today in the CloudFront. So what's in it for you?
We will be covering the concept of what AWS is, what things were like before and after AWS CloudFront, what we mean by AWS CloudFront, the benefits of using AWS CloudFront, how AWS CloudFront works as a content delivery service, the names of companies that are using AWS CloudFront, and we will be covering one live demo. Now, AWS is Amazon Web Services. It's a cloud service provider that offers a variety of services, such as compute power, database, storage, networking and other resources, so that you can create your solutions on the cloud and help the business grow. With AWS, you only pay for whatever services you use. For example, if you're using a service for a couple of hours, then you pay for only the hours that you have used that
service. Before AWS CloudFront, there is an example that we are going to talk about: you must be aware of an application called Spotify. When you used to access Spotify and click on it, it kept on loading, and at the end you would get an error saying the connection failed. Why did you receive that error? Because of a latency issue, probably a network problem. So how can you solve these kinds of latency issues, which also impact the performance of an application? With the introduction of AWS CloudFront, this problem of the application not loading was resolved. After AWS CloudFront, with its help, Spotify provides new features and access to millions of songs that you can access instantly. So with the use of AWS CloudFront, the latency issues were solved and you can successfully access your application. Now, what do we mean by AWS CloudFront? AWS
CloudFront is a globally distributed network that is offered by AWS Amazon Web Services which securely delivers
content to the end users across any geography with a higher transfer speed and an improved or a low latency. Now
What are the benefits of AWS CloudFront? There are several. It is cost effective, helping you optimize cost when you use it. It is time-saving and easy to implement, and many latency-related problems with accessing an application can be resolved. It provides content privacy: content is delivered to end users and to the CloudFront servers in a secure way. It is highly programmable, so you can make changes on the fly, and you can target any location or geography across the globe. Along with that, it gets your content delivered quickly. Now, how does AWS CloudFront deliver content? Let's look at the architecture.
The flow describes how a user gets content from CloudFront. First, the client accesses a website by typing a URL in the browser, which is step one. Once the website is open, the client requests an object to download, for example a particular file. DNS then routes the user's request for that file to AWS CloudFront. CloudFront connects to its nearest edge location; edge locations are the servers where CloudFront caches files, documents and web code. At the edge location, CloudFront looks for the requested file in its cache. If the file is found in the edge location's cache, CloudFront sends it to the user. Otherwise, if the file is not in the cache, CloudFront matches the request against the origin specification and forwards it to the appropriate origin server, that is, the web server where the file actually lives. The web server responds by sending the file back to the CloudFront edge location, and as soon as CloudFront receives the file it delivers it to the client and also adds it to the edge location's cache for future requests. That is the CloudFront delivery flow.
Now, which companies use AWS CloudFront? One of them is the Jio7 app, a very popular app that uses Amazon CloudFront to deliver 15 petabytes of audio and video to its subscribers globally, which is a huge amount of data. Sky News uses the service to unify content for faster distribution to subscribers. Discovery Communications uses CloudFront to deliver APIs, static assets and dynamic content. TV1.EU, a streaming provider in Europe, also uses CloudFront to improve latency and performance, resulting in faster delivery of content. Now let's look at the demo: how to use CloudFront to serve a private S3 bucket as a website.
I'm going to run a CloudFront distribution demo in the AWS console; we will deliver content from a private S3 bucket and then map it to a domain name using the Route 53 service. What do we need for this demo? A domain URL, the Route 53 service, and a CloudFront distribution linked to our private S3 bucket, which will contain one HTML file, index.html. Let's move into the AWS console. I have the CloudFront distributions page open, and you can see a couple of distributions have already been created. I'm going to create a new distribution. There are two delivery methods for your content: web distribution and RTMP. RTMP (the Real-Time Messaging Protocol option) is used for streaming media files stored in an S3 bucket. Here we will select a web distribution, because we will primarily be serving files over HTTP or HTTPS. Click on Get Started, and in the origin domain name specify the bucket where your content lives. I already have a bucket created here; you should create a bucket whose name matches the URL or domain name you will map with Route 53. Let me show you in a new tab: go to Storage, select S3, open the link in a new tab and look at how the buckets are set up. A couple of buckets already exist; I created one with the domain name I am going to map in Route 53, in the Ohio region, and if you open that bucket you will find an HTML web page, index.html, already uploaded. Similarly, you have to create a bucket named after your domain and upload an index.html page to it. Now back to CloudFront: click on Create Distribution, select the web delivery method, and select the origin domain, which is sunshinearning.in. You don't have to specify an origin path, and the origin ID is filled in automatically when you choose the origin domain name; you can customize it if you want. The rest of the settings we keep at their defaults unless some customization is required; for example, you could change the cache behavior settings, but we will leave them as default.
In the distribution settings you can either use all edge locations for best performance, in which case AWS uses every edge location it has across the globe, or restrict the distribution to specific regions. You can also attach a web application firewall or an access control list here if you need one. Then, in the default root object, specify your index.html page from the S3 bucket. The distribution state should be enabled, and if you want to serve over IPv6 as well you can enable that too. Click on Create Distribution. You can see the distribution has been created; it is in progress and enabled, and it takes around 15 to 20 minutes to complete, because the web content has to be replicated to all the edge locations across the globe.
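If you prefer to script the same step, here is a hedged boto3 sketch of creating a comparable web distribution. The bucket endpoint and names are placeholders, and a production setup for a truly private bucket would also configure origin access control, which is omitted here.

```python
import time
import boto3

# Hedged sketch: create a CloudFront web distribution for an S3 origin.
cloudfront = boto3.client("cloudfront")

bucket_domain = "example-bucket.s3.amazonaws.com"   # assumed bucket endpoint
origin_id = "S3-example-bucket"

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),        # must be unique per request
        "Comment": "Demo distribution for a static site",
        "Enabled": True,
        "DefaultRootObject": "index.html",          # served when "/" is requested
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": origin_id,
                    "DomainName": bucket_domain,
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": origin_id,
            "ViewerProtocolPolicy": "redirect-to-https",
            "MinTTL": 0,
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
        },
    }
)

# The distribution domain name, e.g. dxxxxxxxxxxxx.cloudfront.net
print(response["Distribution"]["DomainName"])
```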
Now let's move to the Route 53 service and create a hosted zone. Search for Route 53, described as scalable DNS and domain name registration. What we are going to do is point our domain to the name servers that Route 53 provides, so we have to create a hosted zone. Let's wait for it. The Route 53 dashboard is now open and you can see one hosted zone already exists. Click on Hosted Zones; to route traffic from an external domain into AWS, you first point the domain's traffic to a hosted zone in Route 53. I'll click on Create Hosted Zone, but before that I will delete the existing one and create a fresh hosted zone. Enter your domain name, which in my case is sunshinearning.in, acting as a public hosted zone; keep the rest of the settings at their defaults and click on Create Hosted Zone. Route 53 now gives you four name servers, and these four name servers have to be updated for the domain, in the platform from which you purchased it. That is half of the work done. Next, you create records. In the records you select a routing policy; since we simply want traffic from the domain pointed directly at the CloudFront distribution, we go with simple routing for now. Click Next, and here you specify the record sets.
We are going to create the records: click on Define Simple Record, enter "www" as the record name, and select an endpoint. The endpoint we are selecting is an alias to the CloudFront distribution, so we specify the CloudFront distribution's domain name here. You can find that domain name in the CloudFront service itself; just copy it, paste it into the record, click on Define Simple Record and then Create Records, and you can see the record set has been created.
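The same record can be created programmatically; below is a hedged boto3 sketch, with the hosted zone ID, domain and CloudFront domain as placeholders. Z2FDTNDATAQYW2 is the fixed hosted-zone ID that AWS documents for CloudFront alias targets.

```python
import boto3

route53 = boto3.client("route53")

# Placeholders: your Route 53 hosted zone and the distribution's domain name.
HOSTED_ZONE_ID = "ZXXXXXXXXXXXXX"
CLOUDFRONT_DOMAIN = "dxxxxxxxxxxxx.cloudfront.net"

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront alias zone ID
                        "DNSName": CLOUDFRONT_DOMAIN,
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```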
The domain now points to these name servers, which in turn point to the CloudFront distribution. The only thing left is to update those four name servers with the registrar from which you purchased the domain; once that is done, live traffic to this domain will be served from the CloudFront distribution. Today's topic is AWS Auto Scaling. So, this is Akil.
I will be taking you through the Auto Scaling tutorial. Let's begin and look at what we have in today's session. I will cover why we need AWS Auto Scaling, what AWS Auto Scaling is, the benefits of using the scaling service, how Auto Scaling works, the different scaling plans, the difference between a snapshot and an AMI, what a load balancer is and how many types of load balancers there are, and along with that a live demo on AWS. Let's begin with why we need AWS Auto Scaling. Before Auto Scaling, enterprises had a concern: they were spending a lot of money purchasing infrastructure. To set up a solution they had to buy infrastructure up front, which was a burden in terms of procuring servers, hardware and software and then keeping a team of experts to manage it all. Project managers kept thinking that they would not need these resources if a more cost-efficient solution existed for their project. After AWS Auto Scaling was introduced, the service automatically maintains application performance based on user demand at the lowest possible price: whenever scaling is required it is handled automatically, and so cost optimization becomes possible.
Now, what is AWS Auto Scaling? Let's look deeper. AWS Auto Scaling is a service that helps users monitor their applications and servers and automatically adjust infrastructure capacity to maintain steady performance. Capacity can be increased, or decreased for cost optimization, giving predictable performance at the lowest possible cost. What are the benefits of Auto Scaling? It gives better fault tolerance: servers are created from a clone copy, so you don't have to deploy your application again and again. It gives better cost management, because scaling decisions are made by AWS automatically based on threshold parameters. It is a reliable service, and whenever scaling is initiated you can receive notifications by email or on your phone. Scalability is built in: it can scale up and scale down. It is flexible: you can schedule it, stop it, or keep the number of servers fixed, and you can make these changes on the fly. And it gives better availability.
With Auto Scaling we come across the terms snapshot and AMI, so let's look at the difference between them. In a company, one employee was having trouble launching virtual machines and asked a colleague whether it is possible to launch multiple virtual machines in a minimal amount of time, because creating them one by one takes a long while. The colleague answered that yes, it is possible to launch multiple EC2 instances quickly and with the same configuration, and that this can be done using either a snapshot or an AMI on AWS. So what are the differences between a snapshot and an AMI? A snapshot is a backup of a single EBS volume, which is like a virtual hard drive attached to an EC2 instance, whereas an AMI is used as a backup of an entire EC2 instance. You opt for snapshots when the instance contains multiple static EBS volumes, whereas an AMI is widely used to replace a failed EC2 instance. With snapshots you pay only for the storage of the changed data, whereas with an AMI you pay for the storage you use. Snapshots are non-bootable images of EBS volumes, whereas AMIs are bootable images of EC2 instances. However, creating an AMI also creates the underlying EBS snapshots.
Now, how does AWS Auto Scaling work? To use it, you configure a single unified scaling policy per application resource; with that policy you can explore the application, select the service you want to scale, choose whether to optimize for cost or for performance, and then keep track of scaling through monitoring and notifications. What scaling plans are there? In Auto Scaling, a scaling plan helps a user configure a set of instructions for scaling based on a particular software requirement. The scaling strategy guides the AWS Auto Scaling service on how to optimize resources for a particular application; it is essentially the set of parameters you define so that resource optimization can be achieved. With scaling strategies, users can create their own strategy based on the metrics and thresholds they need, and this can be changed on the fly. There are two types of scaling policies: dynamic scaling and predictive scaling. Dynamic scaling guides the AWS Auto Scaling service on how to optimize resources and is helpful in optimizing resources for availability at a particular price; users build their plan on required metrics and thresholds, where a metric can be something like network in, network out, CPU utilization or memory utilization. Predictive scaling aims to predict future workloads based on daily and weekly trends and regularly forecasts future network traffic. It is a forecast based on past behavior, uses machine learning to analyze the traffic, and works much like a weather forecast: it provides scheduled scaling actions to ensure resource capacity is available when the application needs it.
With Auto Scaling you also need load balancers, because if multiple instances are created you need something to distribute the load across them. So what is a load balancer? A load balancer acts as a reverse proxy and is responsible for distributing network or application traffic across multiple servers. With a load balancer you improve the reliability and fault tolerance of an application. For example, when a large amount of traffic hits your application, the instances behind it may crash; to avoid that situation you need to manage the traffic reaching your instances, and that is what the load balancer does. AWS load balancers distribute network traffic across backend servers in a way that increases application performance. In the image you can see traffic coming from different sources landing on the EC2 instances, with the load balancer distributing that traffic across all three instances and managing it properly. What types of load balancers are there? There are three types on AWS: the Classic Load Balancer, the Application Load Balancer and the Network Load Balancer. The Classic Load Balancer is the most basic form of load balancing; we also call it the primitive load balancer, and it is widely used with EC2 instances. It works on IP address and TCP port, routes traffic between end users and backend servers, does not support host-based routing, and results in lower resource efficiency. The Application Load Balancer is one of the advanced forms of load balancing: it works at the application layer of the OSI model, is used when HTTP and HTTPS traffic routing is required, supports host-based and path-based routing, and works well with microservices and backend applications. The Network Load Balancer works at layer four, the connection level of the OSI model; its prime role is to route TCP traffic, and it can handle massive amounts of traffic while remaining suitable for low-latency workloads.
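For reference, here is a hedged boto3 sketch of creating an Application Load Balancer with a target group and listener; the subnet, security group and VPC IDs are placeholders.

```python
import boto3

# Hedged sketch: an internet-facing Application Load Balancer (elbv2).
elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="demo-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # at least two AZs
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)

tg = elbv2.create_target_group(
    Name="demo-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",       # EC2 instances, e.g. from an Auto Scaling group
)

# Forward all HTTP traffic on port 80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```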
Now let's look at the demo and see how we can practically create Auto Scaling on the AWS console. I'm logged into the AWS console in the Mumbai region. Go to the Compute section and click on the EC2 service; let's wait for it to load. Scroll down, and under Load Balancing you will find the Auto Scaling options. First you create a launch configuration, and after that you create the Auto Scaling group. Click on Launch Configurations and then on Create Launch Configuration. A launch configuration is the set of parameters you define when launching Auto Scaling so that uniformity is maintained across all instances; for example, if you select a Windows or Linux operating system, that same operating system will be used for every instance that is part of the Auto Scaling group. So there are certain parameters we specify in the launch configuration to keep the servers uniform. Here I select an Amazon Linux AMI and the t2.micro instance type. Click on Configure Details and give the launch configuration a name, say "demo"; we keep the rest of the settings at their defaults. Click on Add Storage; since it's a Linux AMI, 8 GB of storage is fine. Click on Configure Security Group and create a new security group with the SSH port open to anywhere, that is, to any IPv4 and IPv6 source address. Click on Review, check your launch configuration and make any changes you need, then click on Create Launch Configuration. You will need a key pair, and this single key pair will be used with all the instances that are part of the Auto Scaling group. You can select an existing key pair or create a new one; I have an existing key pair, so I'll go with that, acknowledge it, and click on Create Launch Configuration. We have now successfully created the launch configuration.
The next step is to create an Auto Scaling group. Click on Create Auto Scaling Group using this launch configuration. Give the group a name, say "test". The group size starts at one instance, which means at least a single instance will always be running, 24/7, as long as the Auto Scaling group exists. You can increase this minimum; for example, set it to two and you will always have at least two servers running. We will go with one instance. The network will be the default VPC, and within that VPC's region we select the availability zones; say I select availability zone 1a and 1b. Instances will then be launched alternately: the first in 1a, the next in 1b, the third in 1a, the fourth in 1b, and so on, spread equally across the availability zones. The next part is configuring the scaling policies. If you want to keep the group at its initial size, say one or two instances with no scaling, you can choose "Keep this group at its initial size", which is a way to hold scaling. But we will use scaling policies to adjust the capacity of the group, so click on that and scale between a minimum of one and a maximum of four instances. The condition for scaling instances out or in is defined in the scale group size: you can implement scaling policies based on the scale group size, or create simple scaling policies using steps. With the scale group size you have certain metrics such as average CPU utilization, average network in, average network out, or load balancer request count per target; with simple step scaling policies you create alarms, which expose more metrics you can use as parameters for Auto Scaling. Let's go with the scale group size, choose average CPU utilization as the metric type, and set the target value, the threshold at which a new instance should be started. Put a reasonable threshold, say 85%; whenever the instances' CPU utilization crosses the 85% threshold, a new instance will be created.
Let's go to the next step and configure notifications. Here you can add notifications so that, for example, when a new instance is started you are notified by email or on your phone. For that you need the SNS service, the Simple Notification Service: you create a topic there, subscribe to the topic with your email ID, and then you receive the notifications. Click on Configure Tags; tags are not mandatory, but you can add one, for instance to identify the purpose the instances were created for, or leave it blank. Click on Review, check your scaling policies, notifications, tags and the group details, and click on Create Auto Scaling Group. And there you go, the Auto Scaling group has been launched; click on Close, and at least one instance should be started automatically by Auto Scaling. Let's wait for the details to appear. Here you can see our launch configuration named "demo", the Auto Scaling group named "test", a minimum of one instance, a maximum of four, and the two availability zones we selected, ap-south-1a and ap-south-1b. One instance has been started, and to verify where it is running just click on Instances: you will see a single instance in service, launched in ap-south-1b. Once this instance's CPU utilization crosses the 85% threshold we defined in the scaling policy, another instance will be started. So here I have essentially created a scale-out policy that increases the number of servers whenever the threshold is crossed; in the same place you can add another policy to scale the resources back down when CPU utilization returns to a normal value.
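The same console steps can be scripted; the following is a hedged boto3 sketch with placeholder AMI, key pair and security group IDs, using a target-tracking policy on average CPU utilization in place of the console's scale-group-size setting.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hedged sketch: launch configuration, Auto Scaling group across two AZs,
# and a target-tracking policy on average CPU utilization.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="demo",
    ImageId="ami-0123456789abcdef0",      # placeholder Amazon Linux AMI for your region
    InstanceType="t2.micro",
    KeyName="my-key-pair",                # placeholder key pair
    SecurityGroups=["sg-0123456789abcdef0"],
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="test",
    LaunchConfigurationName="demo",
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=1,
    AvailabilityZones=["ap-south-1a", "ap-south-1b"],
)

# Keep average CPU utilization near the 85% target, adding or removing
# instances between the min and max as needed.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="test",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 85.0,
    },
)
```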
Today we are going to discuss Amazon Redshift, which is a data warehouse service on AWS. Let's begin and see what we have for today's session. What's in it for you? We will see what AWS is, why we need Amazon Redshift, what Amazon Redshift is, the advantages of Amazon Redshift, its architecture, some additional concepts associated with Redshift, the companies that are using it, and finally one demo with a practical example of how you can actually use the Redshift service. Now, what is AWS? As we know, AWS stands for Amazon Web Services. It is one of the largest cloud providers in the market, a secure cloud services platform provided by Amazon. On AWS you can create and deploy applications using its services, the services are accessible over the public internet, and you pay only for whatever you use.
Now let's understand why we need Amazon Redshift. Earlier, developers fetched data from a data warehouse; a data warehouse is basically a repository, a collection of data stored in one place. Fetching data from a traditional data warehouse was complicated: the developer might be in one geography while the data warehouse sits in another location, with poor network or internet connectivity, security challenges, and a lot of maintenance required to manage the warehouse. So what were the drawbacks of traditional data warehouses? Downloading or retrieving data was time-consuming, maintenance costs were high, information could be lost while downloading the data, and data rigidity was an issue. These problems were solved with the introduction of Amazon Redshift on the cloud platform. So we say that Amazon Redshift solved the traditional data warehousing problems developers were facing, but what exactly is Amazon Redshift?
Amazon Redshift actually is? So what is Amazon Redshift? It is one of the services over the AWS Amazon web
services uh which is called as a data warehouse service. So Amazon Redshift is a cloud-based service or a data
warehouse service that is primarily used for collecting and storing the large chunk of data. So it also helps you to
get or extract the data, analyze the data using some of the BI tools. So business intelligence tools you can use
and get the data from the red shift and process that and hence it simplifies the process of handling the large scale data
sets. So this is the symbol for the Amazon red shift over the AWS. Now let's discuss about one of the use case. So
DNA is basically a telecommunication company and they were facing an issue with managing their website and also the
Amazon S3 data which led down to slow process of their applications. Now how could they overcome this problem? Let's
see that. So they overcome this issue by using the Amazon red shift and all the company noticed that there was a 52%
increase in the application performance. Now did you know that Amazon red shift is uh basically cost less to operate
than any other cloud data warehouse service u available on the cloud computing platforms and also the
performance of an Amazon red shift is the fastest data warehouse we can say that that is available as of now. So in
both cases one is that it saves the cost as compared to the traditional data warehouses and also the performance of
this red shift service or a data warehouse service the fastest available on the cloud platforms and more than
15,000 customers primarily presently they are using the Amazon red shift service. Now let's understand some of
Now let's look at the advantages of Amazon Redshift. First, as we saw, it is one of the fastest data warehouse services available, so it offers high performance. Second, it is a low-cost service: you can run a large data warehouse, combining many databases, at a very low cost, and you pay only for what you use. Third is scalability: if you want to increase the number of nodes in your Redshift cluster, you can do so on the fly, based on your requirement, without waiting to procure any hardware or infrastructure; you can scale up or scale down whenever you need. Availability: since Redshift runs across multiple availability zones, it is a highly available service. Security: whenever you use Redshift you create clusters, and you can place a cluster in a specific virtual private cloud, create your own security groups and attach them to the cluster, so you can design the security parameters to your requirements and keep your data warehouse in a secure place. Flexibility: you can remove clusters and create new ones, and if you delete a cluster you can take a snapshot of it and copy that snapshot to a different region; that much flexibility is available on AWS for this service. Another advantage is that database migration is very simple: if you plan to migrate your databases from a traditional data center to Redshift in the cloud, AWS provides built-in tools that you can connect to your traditional data center to migrate the data directly into Redshift.
Now let's understand the architecture of Amazon Redshift. A Redshift deployment is built around a cluster, which we call a data warehouse cluster. In the picture you can see the data warehouse cluster that represents Amazon Redshift: it has compute nodes, which do the data processing, and a leader node, which sends instructions to the compute nodes and also manages the client applications that request data from Redshift. Let's look at the components. Client applications interact with the leader node using JDBC or ODBC. JDBC is Java Database Connectivity and ODBC is Open Database Connectivity. Through a JDBC connection, the leader node can monitor connections from client applications, while ODBC lets a client interact directly with live data in Amazon Redshift. The leader node, in turn, gets its information from the compute nodes. What are these compute nodes? They are essentially databases that do the processing: Amazon Redshift has a set of computing resources called nodes, and when nodes are grouped together they form a cluster, the data warehouse cluster. You can have anywhere from one to n compute nodes, which is why Redshift is a scalable service: we can add compute nodes whenever we need them. Each cluster contains one or more databases spread across its nodes.
What is the leader node? This node manages the interaction between the client application and the compute nodes, acting as a bridge between them. It also analyzes and designs the plan for carrying out any database operation: the leader node sends instructions to the compute nodes, the compute nodes execute those instructions and return their output to the leader node for final aggregation, and the aggregated result is then delivered to the client application for analytics or whatever the application was built for. Compute nodes are divided into slices, and each node slice is allotted a specific portion of memory and storage where data is processed. These node slices work in parallel to finish their work, which is why Redshift processes data faster than traditional data warehouses: the parallel operation of the node slices makes it much faster.
There are two additional concepts associated with Amazon Redshift: columnar storage and compression. What is columnar storage? As the name suggests, data is stored column by column, so that when we run a query it is easier to pull the data out of the relevant columns. Columnar storage is an essential factor in optimizing query performance and produces results more quickly. The example shown here compares how database tables store records on disk block by row versus by column: if we want to pull information based on, say, city, address or age, we can filter on those columns and fetch exactly the details we need from the column store. That makes the data more structured and streamlined, and it becomes much easier to run a query and get the output. Compression complements columnar storage: it is a column-level operation that reduces storage requirements and therefore improves query performance, and the slide shows one example of the column compression syntax. Now, the companies using Amazon Redshift: one is Lyft, another is Equinox, the third is Pfizer, one of the most famous pharmaceutical companies, then McDonald's, the global burger chain, and Philips, the electronics company. These are some of the biggest companies relying on the Redshift data warehouse service for their data. Now let's look at the Amazon Redshift demo.
These are the steps we need to follow to create the Amazon Redshift cluster. In this demo we will create an IAM role for Redshift so that Redshift can call other services, specifically S3: the role we create will give Redshift read-only access to S3. In step one we check the prerequisites; you need AWS credentials, and if you don't have them you can create an account using a credit or debit card. In step two we create the IAM role for Amazon Redshift. Once the role is created, we launch a sample Amazon Redshift cluster in step three and then assign VPC security groups to the cluster; you can create it in the default VPC with the default security groups, or customize the security groups to your requirements. To connect to the sample cluster and run queries, you can use the query editor in the AWS Management Console, which you will find inside Redshift itself; if you use the query editor you don't have to download and set up a separate SQL client application. Then, in step six, you copy data from S3 and load it into Redshift, which works because Redshift has read-only access to S3 through the IAM role. Now let's see how to actually use Redshift on AWS.
I am already logged in to my account, in the North Virginia region. I'll search for the Redshift service, and here is Amazon Redshift; click on it and wait for it to load. This is the Redshift dashboard, and from here you launch the cluster: to launch a cluster you just click on Launch Cluster, and once the cluster is created you can open the query editor to write queries and access the data in Redshift. That is why the steps said you don't need a separate SQL client application to run queries against the data warehouse. Before creating a cluster, though, we need to create the role. Click on Services and go to the IAM section, which you will find under Security, Identity and Compliance. Click on Identity and Access Management and then on Roles; let's wait for the IAM page to open. In the IAM dashboard, click on Roles. I already have the role created, so I will delete it and create it again from scratch. Click on Create Role, and under AWS services select Redshift, because Redshift will be the service calling other services, which is why we are creating the role. Which other service will Redshift access? S3, because we will put the data in S3 and it needs to be loaded into Redshift. Search for Redshift, select it, and choose "Redshift - Customizable" as the use case. Click on Next: Permissions, and here assign permissions to the role in the form of Amazon S3 read-only access. Search for S3, wait for the policies to load, and select AmazonS3ReadOnlyAccess to attach it to the role. You can leave the tags blank. Click on Next: Review, give your role a name, say myRedshiftRole, and click on Create Role. You can now see that the role has been created.
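Here is a hedged boto3 sketch of the same role creation, with the account-specific values as placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Redshift service assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "redshift.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="myRedshiftRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant read-only S3 access, as in the console demo.
iam.attach_role_policy(
    RoleName="myRedshiftRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

print(role["Role"]["Arn"])   # this ARN is needed later in the COPY command
```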
The next step is to move to the Redshift service and create a cluster. Click on Services and then on Amazon Redshift; you can find it in the History section since we just browsed it. From here we are going to create a sample cluster. To launch one, click on Launch This Cluster. You can choose the uncompressed data size you expect, in gigabytes, terabytes or petabytes; if you choose GB you can also specify how much. The page also shows the on-demand pricing, pay as you use, which works out to about $0.50 per hour for the two default nodes. Click on Launch This Cluster, and you get the dc2.large node type backed by solid-state drives, SSDs, one of the fastest ways of storing data. Two nodes are configured by default, meaning two compute nodes in the cluster; you can increase that, and if I set three nodes we get 3 x 0.16 TB of storage per node. Here you define the master username and password for your Redshift cluster, following the password rules; if the password is accepted you get no warning, otherwise it reminds you to use the allowed ASCII characters. Then you assign the cluster the role we just created: under available IAM roles select myRedshiftRole, and launch the cluster. If you want to change any of the defaults, for example switch from the default VPC to a custom VPC or from the default security groups to your own, you can open the advanced settings and make those changes. Now launch the cluster, and you can see the Redshift cluster being created. If you want to run queries against this cluster you don't need a separate SQL client; just follow the simple steps to open the query editor, which you will find on the dashboard. Click on the cluster and you will see it being created with three nodes in the us-east-1b availability zone. So we have created the Redshift cluster, and next we will see how to create tables inside Redshift, how to use the COPY command to move data uploaded to an S3 bucket directly into the Redshift database tables, and then how to query the results from the table.
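Before switching to the SQL client, here is a hedged boto3 sketch of launching a comparable cluster programmatically; the identifier, credentials and role ARN are placeholders.

```python
import boto3

redshift = boto3.client("redshift")

# Hedged sketch: three dc2.large nodes with the S3 read-only role attached.
redshift.create_cluster(
    ClusterIdentifier="redshift-demo-cluster",
    ClusterType="multi-node",
    NodeType="dc2.large",
    NumberOfNodes=3,
    MasterUsername="awsuser",
    MasterUserPassword="ChangeMe1234",          # must satisfy Redshift's password rules
    DBName="dev",
    IamRoles=["arn:aws:iam::123456789012:role/myRedshiftRole"],
    PubliclyAccessible=True,                    # so SQL Workbench/J can reach port 5439
)
```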
How do we do that? First, after creating the Redshift cluster, we install SQL Workbench/J; note that this is not MySQL Workbench, which is managed by Oracle. You can find SQL Workbench/J with a quick search and download it. Then you connect this client to the Redshift database: click on File, then Connect Window, and in the connection window paste the JDBC URL. You can find this JDBC URL in the AWS console: open the Redshift cluster and the URL is listed there. Let's wait for it; here is our cluster, open it, and here is the JDBC URL. Also make sure that the security group attached to the Redshift cluster has port 5439 open for incoming traffic. You also need the Amazon Redshift JDBC driver; there is a link where you can download the driver, and you specify its path in SQL Workbench/J. Once that is done, provide the username and password you set when creating the Redshift cluster and click OK. This connects to the database, and the database connection is now complete.
In SQL Workbench we will first create the sales table, then load entries into it by copying from the S3 bucket into the Redshift database, and after that query the results in the sales table. Whatever columns you define in the table must match the data file. I took this sample data file from docs.aws.amazon.com, from the Redshift sample database creation guide, where you can download the file tickitdb.zip. That archive contains several sample data files you can use to practice loading data into a Redshift cluster. I extracted one of the files from it and uploaded it to an S3 bucket. Let's look at the bucket: this is the redshift-bucket-sample bucket, and sales_tab.txt is the file I uploaded; it contains the data entries that will be loaded into the Redshift cluster using a COPY command. After running the statement that creates the table, we use a COPY command in which we specify the table name, sales, and the path the data should be copied from into the sales table in Redshift. The path is the S3 bucket, redshift-bucket-sample, and within it the sales_tab.txt file; we also have to provide the ARN of the role we created earlier. The third step is then to query the sales table to check whether our data was loaded correctly. Now let's execute all three statements. The first attempt gives an error because we have to reconnect to the database; let's wait, reconnect and execute again. It errors again, and checking the bucket name we typed, there is an extra letter in it, so let's connect to the database once more and execute. The sales table is created, but we still get the error that the specified bucket does not exist. Let's view the actual bucket name, copy it, paste it into the COPY command, reconnect to the database and execute again. Now the sales table is created, the data has been copied from sales_tab.txt in the S3 bucket into Redshift, and the query against the table returns the results successfully.
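Here is a scripted equivalent of the three SQL Workbench/J steps, as a hedged Python sketch using psycopg2; the host, credentials, bucket path and role ARN are placeholders, and the column list follows the TICKIT sample schema that sales_tab.txt comes from.

```python
import psycopg2

# Hedged sketch: create the sales table, COPY the S3 file via the IAM role, query it.
conn = psycopg2.connect(
    host="redshift-demo-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="ChangeMe1234",
)
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE sales (
        salesid    INTEGER NOT NULL,
        listid     INTEGER NOT NULL,
        sellerid   INTEGER NOT NULL,
        buyerid    INTEGER NOT NULL,
        eventid    INTEGER NOT NULL,
        dateid     SMALLINT NOT NULL,
        qtysold    SMALLINT NOT NULL,
        pricepaid  DECIMAL(8,2),
        commission DECIMAL(8,2),
        saletime   TIMESTAMP
    );
""")

# COPY pulls the tab-delimited file straight from S3 using the attached role.
# "\\t" keeps the literal characters \t in the SQL, which Redshift reads as a tab.
cur.execute("""
    COPY sales
    FROM 's3://redshift-bucket-sample/sales_tab.txt'
    IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
    DELIMITER '\\t' TIMEFORMAT 'MM/DD/YYYY HH:MI:SS';
""")

cur.execute("SELECT COUNT(*) FROM sales;")
print(cur.fetchone())   # number of rows loaded
conn.close()
```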
Back in the 2000s, organizations were utterly dependent on purchased servers for their IT infrastructure. These servers not only came with limited functionality but were
also very expensive. The traditional way of storing and managing data was costly and complicated. It required a massive
amount of hardware and complex software to run. All these issues were resolved with cloud computing: rather than owning their own computing infrastructure, organizations were able to scale resources up or down based on demand. Companies providing these computing services are called cloud providers. According to Gartner, the two most popular cloud providers in the market are Amazon AWS and Microsoft Azure. Now, let's take a step back and
talk about a question. What should an individual learn in the year 2021, Amazon Web Services or Microsoft
Azure? Though this question is tricky, we need to understand which cloud provider is the best option to choose.
Now let's take a brief look at both of these cloud providers. Amazon Web Services is an
evolving cloud computing platform that offers various services such as compute power, storage, database, content
delivery, and other resources to help businesses scale and grow. AWS services are provided on a subscription
basis. Similarly, Microsoft Azure provides a set of cloud services to design, create, and monitor applications
on a network. Talking about the general features, Amazon Web Services was established in the year 2006. Today, it
holds a market share of 31.7%, whereas Microsoft Azure was started in the year 2010 and now has a
market share up to 16.8%. Amazon AWS is the world's most broadly adopted cloud platform. It offers up to
165 services globally. These services are used by millions of companies like Spotify, Netflix, Facebook, BBC, etc.
Talking about Netflix, the company moved its IT operations to AWS in the year 2009. AWS was able to solve the
company's scalability and data management issues. Azure offers several services in various categories including
artificial intelligence, analytics, containers, IoT and many more. These services are used by many famous
companies like Accenture, LinkedIn, Stack, eBay and Samsung. Accenture chose Microsoft Azure to boost
productivity and provide better security to its employees. The AWS pricing model is hourly: every individual pays only for the services he or she consumes. Azure pricing is charged on a per-minute basis; the payment system is simple and doesn't involve any long-term contracts. Microsoft Azure has no upfront costs or cancellation fees, and users pay only for the resources they consume, so with both Azure and Amazon AWS there are no additional costs or termination fees. Moving on to the next factor, let's talk about availability
zones. Availability zones are isolated data centers with independent power and networking. These zones protect
applications and files from data center failures. In total, AWS has nearly 77 availability zones within 24 geographic
regions around the world. The company plans to establish nine more AWS availability zones and three more AWS
regions in Indonesia, Japan, and Spain. On the other hand, Microsoft Azure has many isolated availability zones. These
zones are located within an Azure region and they provide fast network connectivity. Azure has 44 availability
zones around the world. The company is planning to launch 12 more zones. Now after discussing the
availability zones, let's talk about the services. The services that I am going to compare here come under the following
domains: compute, storage, database, security, and finally networking. AWS and Azure provide a wide variety of services with low migration cost, so that anyone can shift his or her current traditional infrastructure to the cloud platform easily. AWS EC2 is a good example of a compute service: EC2 is a web service that aims to make life easier for developers by providing secure and resizable compute capacity in the cloud. It can be integrated with several other services like S3, IAM, CloudFormation, and so on. Azure has a similar service called
Azure Virtual Machines, along with virtual machine images that help developers design identical virtual machines in a matter of minutes. Azure Virtual Machines are becoming more popular in IT infrastructures. They let an individual create and use virtual machines in the cloud as IaaS, infrastructure as a service. Virtual machines can be created using the Azure portal, Azure PowerShell, ARM
templates, the Azure CLI or client software development kits. Next comes Amazon S3, the best example of a storage service. Simple Storage Service is an object storage service designed to store and retrieve any amount of information from anywhere via the internet. Amazon S3 provides storage through a web service interface. The service is designed for 99.999999999% (eleven nines) durability and 99.99% availability of objects, and it can store individual objects up to 5 terabytes
in size. On the other hand, Azure provides a similar service called Blob Storage. This service provides a
large amount of storage and is highly scalable. It stores objects in tiers depending on how often the data is accessed. Blob storage supports widely used frameworks like Java, .NET, Python and Node.js, and it is the only cloud storage service in Microsoft Azure that offers a premium SSD-based object storage tier for low-latency
networks. AWS provides a secure cloud platform where every individual can manage and deploy their applications. Compared to an on-premises environment, AWS security offers a high level of data protection at a lower
cost. For security purposes, IAM is the relevant service. AWS Identity and Access Management is a cloud service that controls and manages access to AWS resources securely. It has no upfront cost and lets you create and maintain permissions for a specific set of users on your AWS resources. In Microsoft Azure, Azure Active Directory, or in
short Azure AD, is a web service that provides MFA (multi-factor authentication) to users in
order to protect their data from cyber attacks. Next is Amazon RDS, a web service designed to simplify the setup,
operation and scaling of a relational database. It is simple to set up, use and scale when required by users. It
provides inexpensive and resizable capacity and automates several tasks such as hardware provisioning, database
setup, patching and backups. On the other side, Azure SQL Database automatically updates and backs up data so that you can focus on application development. Not just that, it also manages database services and responds to client requirements such as scalability, data backup and high availability of the database. Similarly, these cloud providers have plenty of services that help users build
and deploy their applications effectively. Moving on to the next topic, let us focus on integrating AWS
and Azure with open-source tools. Now we will explain how to build a pipeline using Jenkins along with one of the Amazon Web Services offerings, EC2. In this integration, we have a sample Python application with a Dockerfile; the Dockerfile uses a Python base image, and the pipeline builds a Docker image from it. When code is pushed to the repository, the new image is used to start a new service on an ECS cluster. Now let's look at how open-source tools integrate with Azure. In your Azure cloud application, DevOps tools like Docker, Maven, and Jenkins can be integrated. If you want to integrate a Jenkins pipeline, go to the project, create a subscription followed by Jenkins, specify the required filters, and then choose the endpoint through which Azure DevOps communicates with Jenkins. So that was all about Amazon AWS and Microsoft Azure. Both AWS and Azure are honored to be on top of all
other cloud providers. Both provide all necessary features like scalability, high availability and security. But the
moment I hear about choosing the best one, I have to say that it all depends on the individual business needs. Both cloud providers hold immense value in the current cloud service market. Now, viewers, let's leap into the world of cloud computing. Whether you're a tech enthusiast, a student, or a professional
looking to upskill the cloud technology offers endless opportunities for innovation and growth. Did you know that
the global cloud computing market is projected to reach a staggering $832.1 billion by 2025? We all know that
projects are essential for your career in cloud because they allow you to apply theoretical knowledge to real world
scenarios, gaining practical skills and experience. Through your projects, you can showcase your skills and depth of
your knowledge. Projects are often the key to impress your employers. In this video, we are going to unveil the top
cloud projects tailor made for beginners, empowering you to kickstart your journey into the clouds with
confidence. The first project in our list is website hosting. Here you will build a website and host your website on
the cloud. The steps are sign up for a cloud hosting service like AWS, Google Cloud or
Azure. Upload your website files to the cloud storage provided by the hosting service. Then configure DNS settings to
point your domain name to the cloud server. After that, set up SSLTS certificates for secure HTTPS
connections. At last, monitor website performance and scalability using cloud-based analytics tools. Next, we
have expense tracker. In this project, you have to develop an app to track expenses and
manage finances using cloud storage for data synchronization. Now, let's see the steps. Choose a cloud database service
like Firebase or AWS Dynamo DB for storing expense data. Now design and develop the expense tracker app
interface and functionality. Implement user authentication and authorization for
secure access. Then set up realtime data synchronization between devices using
cloud database SDKs. after which test the app thoroughly across platforms to ensure data
integrity and usability. The third one is virtual classroom platform. Here you create an
online learning platform with live lectures and interactive features and host it on the cloud. The steps to do
this include select a cloud-based video conferencing platform like Zoom or Microsoft Teams for hosting live
lectures. Integrate additional features like chat, file sharing, and quizzes using cloud-based APIs or SDKs. Set up
user authentication and access controls to ensure a secure learning environment. After which you will have
to customize your platform to match the branding and requirements of your educational institution. Train educators
and students on using the virtual classroom tools effectively. Next, we have document collaboration platform
with Microsoft. Here you can build a document management and collaboration platform using Microsoft SharePoint on
the cloud. Let's see the steps. Set up a SharePoint site on Microsoft 365 or SharePoint online. Create document
libraries for storing and organizing files with metadata and content types. Configure access controls and
permissions to manage document security and sharing. Customize a platform with workflows, forms, and templates to
streamline document management processes. Train users on how to use SharePoint for document collaboration
and version control. The next is health tracker app with Google Fit API. In this project, you will develop a health
tracking app that syncs data from fitness devices using the Google Fit API. The steps are obtain API
credentials and set up authentication for accessing the Google Fit API. Design the app interface and
functionality for tracking user health metrics like activity, sleep, and heart rate. Integrate the Google Fit API into
the app to retrieve and display user health data. Implement features for goal setting, progress tracking, and
personalized recommendations based on health data. Test the app on different devices and ensure compliance with
privacy and security regulations. The next is image recognition with Google Cloud
Vision. In this project, you have to build an application that identifies objects and text in images using Google
Cloud Vision API. Steps are set up a Google Cloud Platform project and enable the cloud vision API. Obtain API
credentials and configure authentication for accessing the API. Develop the application interface for uploading
images and displaying recognition results. Integrate the cloud vision API into the application to analyze and
process image data. Enhance the application with additional features like text translation or object
detection based on recognition results. Seventh on the list is chat application with Firebase Fire Store. Create a
real-time chat application using Firebase Fire Store for data storage and synchronization. The steps are set up a
Firebase project and enable Fire Store as a database service. After that, design the chat application interface
for sending and receiving messages in real time. Then implement the user authentication and authorization to
secure access to chat rooms. After which integrate Fire Store SDK into the application to store and sync chat data
across devices. Now test the application thoroughly to ensure reliability and real-time synchronization of messages.
Next we have continuous integration or continuous deployment pipeline with GitHub. Here you have to set up a CI/CD
pipeline for automating software development processes using GitHub actions. For that create a GitHub
repository for your project and enable GitHub actions. Define workflows in YL format to automate tasks like testing,
building, and deploying. Configure triggers to initiate workflows based on events such as code pushes or pull
requests. Now integrate thirdparty tools or services for additional tasks like code quality analysis or deployment to
cloud platforms. After which monitor workflow runs and troubleshoot any issues that arise during the CI/CD
process. Next in our list is IoT data analytics with IoT hub and stream analytics. Here you have to collect and
analyze data from IoT devices using Azure IoT hub and stream analytics. Now the steps for this project include
firstly set up an Azure account and create an IoT hub instance for managing IoT devices. Connect IoT devices to the
IoT hub and configure message routing for data injection. Now create an Azure stream analytics job to process
streaming data from IoT devices. Define queries and transformations to analyze and aggregate IoT data in real time and
then visualize the analyze data using Azure services like PowerBI or Azure data explorer for actionable insights.
Last project on the list we have budget friendly travel planner. In this project you are required to develop a travel
planning application that helps users discover budget friendly destinations and manage trip expenses. Now the steps
include firstly design the user interface of the travel planner application allowing users to search for
destinations and plan itaries. Integrate APIs like Google maps, open weather map and sky scanner to provide travel
information such as flight prices, weather forecasts and accommodation options. Now implement features for
budget tracking, expense management, and itinary customization based on user preferences. Test the application across
different devices and platforms to ensure usability and performance. And at last, deploy the travel planner
application to a cloud hosting platform for accessibility and scalability and continue to iterate based on user
feedback and usage metrics. So there you have it, top 10 cloud projects for beginners that can help you create a
portfolio and land a job. We have explored a diverse range of projects that offer valuable insights into
various aspects of cloud computing. Welcome everyone. In today's rapidly evolving digital landscape, the
importance of mastering cloud computing cannot be overstated. To thrive in modern IT environments, professionals
must possess a robust set of crucial skills. These include managing and maintaining infrastructures that are
increasingly complex and distributed, ensuring robust security measures to protect sensitive data against emerging
threats, optimizing cost to make the most of IT investments, and mastering effective deployment strategies that
ensure scalability and reliability across various platforms. At SimplyLearn, we understand these skills
form the backbone of successful cloud computing practices. With over a decade of leadership in cloud solutions, our
company has been at the forefront of providing top tier education and services to a wide range of clients. We
are committed to making highlevel cloud skills accessible to everyone from aspiring IT professionals to seasoned
experts. That's why our courses are designed not just to inform but to transform your approach to cloud
technology. Whether you are just starting out or looking to deepen your expertise, we provide the tools and
training you need to navigate and excel with a cloud domain. With our guidance, you will not only keep pace with current
technologies, but also stay ahead of the curve in this dynamic field. So, let's start with the courses. And number one
on our list is the post-graduate program in cloud computing. A comprehensive course designed in collaboration with
Caltech CTME. Let's explore what makes this program a standout choice. This postgraduate program is your gateway to
becoming a cloud expert. It's structured to take you through a deep dive into the world of cloud computing, covering
crucial areas from basic concepts to advanced applications. And by joining this program, you're setting yourself up
for success in a field that's at the forefront of technological advancement. It's designed to arm you with the skills
needed to design, implement, and manage cloud solutions that are innovative and efficient. Moreover, graduates of this
program find themselves well prepared for competitive roles worldwide such as cloud architects, cloud project managers
or cloud service developers working for top tech companies across the globe. Moreover, you will gain mastery or
essential skills like managing cloud infrastructures, cloud security, data compliance and disaster recovery
planning. Completing this program earns you a post-graduate certificate from Keltech CTME recognized globally and
respected in the tech industry. And this course offers hands-on experience with leading tools such as AWS, Microsoft
Azure, Google Cloud, Docker, and Kubernetes. You will tackle real world projects that challenge you to apply
your learning to build scalable secure cloud architectures and manage extensive cloud deployments. If you're ready to
start your journey in cloud computing with a top rated program, visit the description box or pin comment to
enroll. So, next on our list at number two is the cloud solutions architect masters program. So let's delve into why
this course is crucial for those aspiring to become cloud architects. This mast's program is designed to turn
you into a solutions architect capable of designing and implementing robust and scalable cloud solutions. This course
ensures you gain comprehensive knowledge and skills. So choosing this program means selecting a path to mastery in
cloud architecture. It's perfect for IT professionals who want to excel in creating high performing cloud solutions
that meet critical business needs. And this program opens doors to roles like cloud solutions architect and enterprise
architect among others. Graduates often step into significant positions into multinational corporations and tech
firms around the world where they strategize and oversee cloud infrastructure deployments. And
throughout this program you will master skills in cloud deployment, migration and infrastructure management. You will
also learn about securing cloud environments and optimizing them for cost and performance. Completing this
course will provide you with a holistic view of cloud architecture backed by master certification recognized in the
industry. You'll get hands-on experience with essential tools used by cloud architects including AWS, Microsoft
Azure and Google Cloud Platforms. The course also includes real life projects that challenge you to solve problems and
design solutions that are not only efficient but also scalable and secure. If designing talk to your cloud
solutions is your goal, then the cloud solutions architect masters program is your stepping stone, visit the
description box or pin comment to check out the course link. Moving on to the third course in our series, we have the
AWS cloud architect certification training. This program is essential for those looking to specialize in Amazon
Web Services, the leading cloud services platform. This certification course is tailored to develop your skills in
designing and managing scalable, reliable applications on AWS. It's ideal for solutions architects, programmers,
and anyone interested in building robust cloud solutions using AWS technology. Enrolling in this course will elevate
your technical understanding and capabilities, preparing you to lead cloud initiatives using AWS powerful
platform. You will gain in-depth knowledge of AWS architectural principles and services. Learn to design
and deploy scalable systems and understand elasticity and scalability concepts. This course culminates in
earning on AWS certification adding significant value to your professional credentials. Practical hands-on labs and
project work will let you work directly with AWS technologies ensuring you can apply what you learn immediately in any
cloud environment. So are you ready to become an AWS cloud architect? Sign up for this course today to start your
specialized training with the link mentioned in the description box and pin comment. Next up at number four in our
series is the Azure cloud architect certification training. This course is designed for those who aim to excel in
designing and implementing Microsoft Azure solutions. Azure is one of the leading cloud platforms and this
training equips you to master its complex systems and services preparing you to tackle real world challenges with
confidence. With this program, you will transform into a skilled cloud architect capable of managing Azure's extensive
features and services. It's an ideal path for IT professionals looking to specialize in Azure for advancing their
careers. The course covers a range of key areas including Azure administration as your networking security and identity
management. You'll also prepare for the Microsoft certified Azure Architect Technologies exam, a highly respected
credential in the industry. Moreover, hands-on labs and projects throughout the course ensure you gain practical
experience with Azure. This training includes simulations and real life scenarios to provide you with the skills
needed to succeed in any Azure environment. So, are you ready to harness the power of Microsoft Azure?
Enroll in our Azure cloud certification training today with a link in the description box and pin comment.
Rounding out our top five cloud computing courses. At number five, we have the Azure DevOps Solution Expert
Masters Program. This advanced training is tailored for those who wish to blend cloud technology with DevOps practices
using the Microsoft Azure platform. This mast's program is designed to empower IT professionals with the skills to
implement DevOps practices effectively in Azure environments. It's perfect for those aiming to specialize in building,
testing, and maintaining cloud solutions that are efficient and scalable by integrating DevOps with cloud
innovations. This course ensures your adapt at speeding up to IT operations and enhancing collaboration across
teams. It's an essential skill set for increasing business agility and IT efficiency. The program covers a
comprehensive range of topics including continuous integration and delivery CI/CD infrastructure as code and
monitoring and feedback mechanisms. Graduates are well prepared to lead DevOps initiatives and handle complex
cloud infrastructures. Moreover, you'll work with popular tools like Jenkins, Docker, Enible and Kubernetes alongside
Azure specific technologies. Real world projects are integrated throughout the course to provide hands-on experience
and insights into actual DevOps challenges. If you are ready to advance your career by mastering Azure and
DevOps, enroll in our Azure DevOps solution expert masters program today. Check out the link in the pin comment
and description box. So, thank you for joining us as we explore these top five cloud computing courses. Each program is
designed to not only meet but exceed the demands of today's digital landscape, preparing you for a future where cloud
technologies ubiquitous. You can check out pin comment and description box for respective course links. On to the top
10 cloud computing projects. We have divided this video into three parts. The first part is for beginners. Moving on
to the first project for beginners. We have a cloud-based file storage. In this project, you will create a simple
cloud-based file storage system similar to a Dropbox or a Google Drive. Users can upload, download, and manage files
stored in the cloud. The part to create this project will start by choosing a cloud storage service. Select a cloud
storage service provider like AWSS S3, Google Cloud Storage, or Azure Blob Storage. The next step is creating an
account. Sign up for an account with the chosen cloud provider if you don't already have one. The next step is to
set up a bucket or a container. Create a storage bucket or container to store your files. The next step is to develop
a web interface. Create a web- based interface for users to interact with your storage system. You can use HTML,
CSS, and JavaScript for this. The next step is to implement file upload and download. Add functionality for users to
upload and download files to or from their storage bucket using the cloud providers SDK or API. The next step is
user authentication. Implement user authentication to secure access to files. You can use services like
Firebase authentication or build one of your own. The next step is testing and deployment. Test your file storage
system locally and once satisfied deploy it to the cloud. Make it publicly accessible if needed. The key learning
points of this project is you will learn the basics of cloud storage services, file uploading and downloading and user
authentication. Moving on to project number two, we have a website hosting service. This project involves setting
up an essential website on a cloud platform like AWS, Azure or Google Cloud. You will deploy a static or
dynamic website and make it accessible to users worldwide. Let's move on to the path to create this project. The first
step is to select a cloud hosting service. Choose a cloud hosting platform like Amazon Web Services, Azure or
Google Cloud. The next step is to design your website. Create the content and design for your website. You can use
HTML, CSS and JavaScript for static websites or web frameworks for dynamic ones. Then register a domain name.
purchase a domain name from a domain registar, for example, GoDaddy, Namechep, etc. The next step is to
configure DNS. Configure the DNS setting to point to your cloud hosting servers IP address. The next step is to set up a
virtual server. Create a virtual server instance on your chosen cloud platform, for example, AWS EC2, Azure VM, and
Google Compute Engine. The next step is to install web server software. Install and configure web server software like
Apache on your virtual server. And the last step is to upload website files. Upload your website files to the server.
And then you are done with the project. The key learning points of this project. You will learn how to configure a web
server. You will deploy web applications and manage domain names. And finally, you will ensure high availability. The
next project we have for beginners is a cloud-based calculator. Create a simple web-based calculator application and
deploy it on a cloud platform like Heroku. Users can perform basic calculations using this calculator.
Let's talk about the path to create this project. You'll start by selecting a cloud platform. Choose a cloud platform
like Heroku, AWS or Azure for deployment. Then develop the calculator. Create a simple web-based calculator.
You can use HTML, CSS and JavaScript for this. Then set up a development environment. Install necessary
development tools and libraries on your local machine. Then move ahead with version control. Use version control
systems like git to manage your project. Then create a cloud account. Sign up for an account with your chosen cloud
platform. Then comes the deployment part. Deploy your calculator application to the cloud platform of your choice.
For Heroku, this might involve pushing your code to a git repository. And then comes the testing part. ensure that the
calculator functions correctly on the cloud server. Let's talk about the key learning points of this project. You'll
gain experience in setting up cloud server environments, deploying web applications, and understanding server
client interactions. Moving on to intermediate level projects. These projects are for people with decent
knowledge of cloud computing basics. The first project we have in this category is an IoT data analytics project. Build
a system that collects data from IoT devices, sensors, smart devices, etc. and stores it in cloud databases, for
example, AWS Dynamo DB or Azure Cosmos database. Implement analytics to gain insights from the collected data. Let's
talk about the path to create this project. The first step is IoT device setup. Set up IoT devices or sensors to
collect data. Then choose a cloud database. Select a cloud database service, for example, AWS Dynamo
database, Azure Cosmos database for data storage. Then comes the data injection part. Write code to send data from IoT
devices to the cloud database. Then comes data analytics. Implement data analytics tools or scripts to analyze
the collected data. Then comes data visualization. Create visualizations to present insights from the data using
tools like AWS Quicksite or data visualizations libraries. And then the last part is monitoring and scaling.
Implement monitoring and scaling mechanisms to handle increased data volumes. So let's talk about the key
learning points of this project. You will explore IoT data injection, cloud database management, data analytics, and
of course data visualization. Moving on to the next project, we have a video streaming service. So you will create a
video streaming service where users can upload and stream videos from the cloud. Implement features like video encoding,
content delivery, and user authentication. Let's talk about the path to create this project. You will
start by selecting cloud services. Choose cloud services for video storage, for example, AWS S3. For encoding, you
can choose AWS elastic transcoder. And for content delivery, you can choose AWS CloudFront. The next step is to design
the application. Plan the architecture of your video streaming platform, including user interfaces for uploading
and streaming. The next step is user authentication. Implement user authentication using services like AWS
Cognto. Then comes the video uploading part. Allow users to upload videos to the cloud storage. And then comes the
video encoding. Implement video encoding to create different quality versions of uploaded videos. Then comes the content
delivery. Use a content delivery network or CDN to distribute videos efficiently to the users. And the last step is
testing and scalability. test video streaming and ensure it scales to handle concurrent users and varying network
conditions. Let's talk about the key learning points for this project. You will learn about video processing,
content delivery networks and secure user access control. Moving on to the next project, we have cloud-based
e-commerce website. Build an e-commerce platform hosted on cloud. Develop features like product listing, shopping
carts, and secure payment processing. Let's talk about the path to create this project. You will start by selecting
cloud platform. Choose a cloud platform for example AWS or Azure to host your e-commerce application. Then comes the
e-commerce framework. Choose an e-commerce framework for example Woo Commerce or Shopify or build a custom
solution. Then comes the product listings part. Add product listings and descriptions with images. Then comes the
shopping cart. Implement a shopping cart functionality for users to add products. Then comes the secure payment
processing. Set up secure payment processing using payment gateways like Stripe or PayPal. Then comes the user
authentication part. Implement user authentication for customer accounts. And finally, the testing and security.
Thoroughly test your e-commerce website, ensuring it's secure and capable of handling transactions. What are the key
points of this project? You'll gain experience in building scalable web applications, handling online
transactions securely, and managing customer data. talking about advanced level cloud computing projects. For
advanced level cloud computing projects, you should master cloud infrastructure and cloud services. The first advanced
level project we have for today is machine learning with cloud. Develop a machine learning model and deploy it on
a cloud server for scalable processing. This project involves working with large data sets and real-time data analysis.
The first step is machine learning model. Develop a machine learning model using libraries like TensorFlow or
circuit learn. Then comes the data preparation part. Prepare and clean the data for training and testing. Then
choose a cloud machine learning service. Select a cloud machine learning service for example AWS SageMaker, Google AI
platform and etc. Then comes the model deployment part. Deploy a machine learning model on the cloud platform.
And finally the scalability. Ensure the model can handle real-time predictions and scale as needed. Let's talk about
the key learning points of this project. You'll delve into machine learning, cloud-based data processing, and model
deployment. The next project we have in this list is the cloud-based virtual desktop. Create a system that allows
users to access virtualized desktop environments hosted on the cloud. This project focuses on remote desktop
solutions. Talking about the path to create this project, you should start by selecting cloud service. Choose a cloud
service for virtual desktop. For example, Amazon Workspace, Windows virtual desktop or Azure. Then comes the
configuration part. Configure virtual desktop instances with desired specifications. User management. Create
and manage user accounts for accessing virtual desktops. Security. Implement security measures for data protection
during remote desktop access. Then comes the access control part. Set up access control policies to restrict user
permissions. And lastly, testing. test the remote desktop access from different devices and locations. Talking about the
key learning points of this project, you'll learn about virtualization, remote desktop technologies, and user
management in a cloud-based context. Moving on to the third advanced level cloud project, we have a cloud gaming
service. Build a cloud-based gaming service similar to Google Stadia. This project involves GPU virtualization, low
latency streaming, and multiplayer game server management. Talking about the path to create this project, you should
start by selecting cloud infrastructure. Choose a cloud infrastructure. You can either choose AWS or Azure for hosting
game servers. Then comes the game server setup. Set up game servers for hosting games. Then comes the low latency
streaming. Implement low latency streaming technologies for game play. Then comes the multiplayer support. Add
multiplayer support for users to play together. Then comes the user management. create user accounts and
manage players profiles. And finally comes load balancing. Implement load balancing to handle increased player
loads. Talking about the key learning points of this project, you will explore cloud-based gaming infrastructure,
server management, and real-time multiplayer networking. Moving on to the last cloud computing project, we have a
healthcare data management system. Develop a cloud-based system for storing and managing healthcare related data
such as patient records. Ensure compliance with healthcare industry standards. For example,
HIPAA. Talking about the path to create this project, you should start by ensuring compliance with healthcare
industry standards. HIPPA compliance. Understand and adhere to HIPPA compliance regulations. Cloud platform
selection. Choose a HIPAA compliant cloud platform. For example, AWS healthcare APIs and Azure for
healthcare. Then comes the data storage part. Create a secure cloud-based data storage system for healthcare records.
Then comes the access control. Implement strict access controls and encryption for data security. Then comes the audit
trials. Maintain audit trials for data access and modifications. And then at last comes the user training. Ensure
staff are trained on HIPA compliance and data handling protocols. Talking about the key learning points of this project,
you will gain expertise in cloud security, data privacy, and compliance while addressing real world healthcare
data challenges. These projects provide a progressive learning path, allowing individuals to start with basic cloud
concepts and gradually advance to more complex and impactful cloud computing applications. Each project offers
valuable experience and skills that can be applied to real life scenarios and career development in the cloud
computing field. I'm here to walk you through some of the AWS interview questions which we find are important
and our hope is that you would use this material in your interview preparation and be able to crack that cloud
interview and step into your dream cloud job. By the way, I'm an cloud technical architect trainer and an interview
panelist for cloud network and DevOps. So as you progress in watching you're going to see that these questions are
practical scenario-based questions that tests the depth of the knowledge of a person in a particular AWS product or in
a particular AWS architecture. So why wait? Let's move on. All right. So in an interview you would find yourself with a
question that might ask you define and explain the three basic types of cloud services and the AWS products that are
built based on them. See here it's a very straightforward question just explain three basic types of cloud
service and when we talk about basic type of cloud service it's compute obviously that's a very basic service
storage obviously uh because you need to store your data somewhere and networking that actually connects couple of other
services to your application these basic will not include monitoring these basic will not include analytics because they
are considered as optional. They are considered as advanced services. You could choose a non-cloud service or a
product for monitoring of and for analytics. So they're not considered as basic. So when we talk about basics,
they are compute, storage and networking. And the second part of the question says explain some of the AWS
products that are built based on them. Of course compute EC2 is a major one. That's that's the major share of the
compute resource. And then we have platform as a service which is elastic beantock. And then function as a service
which is lambda autoscaling and light sale are also part of compute services. So the compute domain it really helps us
to run any application and the compute service helps us in managing the scaling and deployment of an application. Again
lambda is a compute service. So the comput service also helps in running event initiated stateless applications.
The next one was storage. A lot of emphasis is on storage these days because if there's one thing that grows
in a network on a daily basis that's storage. Every new day we have new data to store process manage. So a storage is
again a basic and an important cloud service. And the products that are built based on the storage services are S3
object storage, Glacier for archiving, EBS elastic block storage as a drive attachment for the EC2 instances and EFS
file share for the EC2 instances. So the storage domain helps in the following aspects. It holds all the information
and we can also archive old data using storage which would be glacier and any object files and any requirement for
block storage can be met through elastic block store and S3 which is again an object storage. Talking about uh
networks it it's just not important to answer the question with the name of the services and the name of the product.
It'll also be good if you could go in depth and explain how they can be used. All right. So that actually proves you
to be a person knowledgeable enough in that particular service or product. So talking about networking domain VPC
networking can't imagine networking without VPC in in a cloud environment especially in AWS cloud environment. And
then we have route 53 for domain resolution or uh for DNS. And then we have cloudfront which is an edge caching
service that helps customers get or customers to read their application with low latency. So networking domain helps
with some of the following use cases. It controls and manages the connectivity of the AWS services within our account. And
we can also pick an IP address range. If you're a network engineer or if you are somebody who works in networks or are
planning to work in network, you will soon realize the importance of choosing your own IP address for easy
remembering. So having an option to have your own IP address in the cloud, own range of IP address in the cloud, it
really helps really really helps in cloud networking. The other question that get asked would be the difference
between the availability zone and the region. Actually the question generally gets asked sort of to test how well you
can actually differentiate and also correlate the availability zone and the region relationship right so a region is
a separate geographic area like the US west one I mean which represents north California or the AP south which
represents Mumbai. So regions are a separate geographic area. On the contrary, availability zone resides
inside the region. You shouldn't stop there. You should go further and explain about availability zones and
availability zones are isolated from each other and some of the services will replicate themselves within the
availability zone. So availability zone does replication within them but regions they don't generally do replication
between them. The other question you could be asked is uh what is autoscaling? What do we achieve by
autoscaling? So in short, autoscaling it helps us to automatically provision and launch new instances whenever there is
an demand. It not only helps us meeting the increasing demand, it also helps in reducing the resource usage when there
is low demand. So autoscaling also allows us to decrease the resources or resource capacity as per the need of
that particular arc. Now this helps business in not worrying about putting more effort in managing or continuously
monitoring the server to see if they have the needed resource or not because autoscaling is going to handle it for
us. So business does not need to worry about it. And autoscaling is one big reason why people would want to go and
pick a cloud service especially an AWS service. The ability to increase and shrink based on the need of that arc.
That's how powerful is autoscaling. The other question you could get asked is what's your targeting in CloudFront? Now
we know that CloudFront is caching and it caches content globally in the Amazon caching service globalwide. The whole
point is to provide users worldwide access to the data from a very nearest server possible. That's the whole point
in using or going for CloudFront. Then what do you mean by geo targeting? Geo targeting is showing customer and
specific content. Based on language, we can customize the content. Based on what's popular in that place, we can
actually customize the content. The URL is the same, but we could actually change the content a little bit. Not the
whole content, otherwise it would be dynamic, but we can change the content a little bit, a specific file or a picture
or a particular link in a website and show customized content to users who will be in different parts of the globe.
So, how does it happen? CloudFront will detect the country where the viewers are located and it'll forward the country
code to the origin server and once the origin server gets the specialized or a specific country code it will change the
content and it'll send to the caching server and it get cached there forever and the user gets to view a content
which is personalized for them for the country they are in. The other question you could get asked is the steps
involved in using cloud formation or creating a cloud formation or backing up an environment within cloud formation
template. We all know that if there is a template we can simply run it and it provisions the environment but there is
a lot more going into it. So the first step in moving towards infrastructure as a code is to create the cloud formation
template which as of now supports JSON and YAML file format. So first create the cloud formation template and then
save the code in an S3 bucket. S3 bucket serves as the repository for our code and then from the cloud form call the
file in the S3 bucket and create a stack. And now cloud formation uses the file, reads the file, understands
services that are being called, understands the order, understands how they are connected with each other.
Cloud formation is actually an intelligent service. It understands the relation based on the code. It would
understand the relationship between the different services and it would set an order for itself and then would
provision the services one after the other. Let's say a service has a dependency and the dependent service the
other service which this service let's say service A and B service B is dependent on service A let's say cloud
formation is an intelligent service it would provision the resource A first and then would provision resource B what
happens if we inverse the order if we inverse the order resource B first gets provisioned and because it does not have
dependency chances that the cloud formation's default behavior is that if something is not provisioned properly,
something is not healthy, it would roll back. Chances that the uh environment provisioning will roll back. So to avoid
that, cloud formation first provisions all the services that has or that's dependent on that's dependent by another
service. So it provisions those service first and then provisions the services that has dependencies. And if you're
being hired for uh a DevOps or you know if the interviewer wanted to test your skill on systems side this definitely
would be a question in his list. How do you upgrade or downgrade a system with near zero downtime? Now everybody's
moving towards zero downtime or near zero downtime. All of them want their application to be highly available. So
the question would be how do you actually upgrade or downgrade a system with near zero downtime? Now we all know
that I can upgrade an EC2 instance to a better EC2 instance by changing the uh instance type stopping and starting but
stopping and starting is going to cause a downtime. Right? So that's you should be answering or you shouldn't be
thinking in those terms because that's a wrong answer. Specifically the interviewer wants to know how do you
upgrade a system with zero downtime. So upgrading system with zero downtime it includes launching another system
parallelly with the bigger uh EC2 instance type with a bigger capacity and install all that's needed. If you are
going to use an AMI of the old machine well and good you don't have to go through installing all the updates and
installing all the application from the AMI. Once you have launched it in a bigger instance, locally test the
application to see if it is working. Don't put it on production yet. Test the application to see if it is working. And
if the application works, we can actually swap if your server is behind and behind uh Route 53. Let's say all
that you could do is go to route 53, update uh the uh information with the new IP address, new IP address of the
new server, and that's going to send traffic to the new server. Now, so the cut over is handled. Or if you're using
static IP, you can actually remove the static IP from the old machine and assign it to the new machine. That's one
way of doing it. Or if you are using elastic nick card, you can actually remove the nick card from the old
machine and attach the nick card to the new machine. So that way we would get near zero downtime. If you're hired for
an architect level, you should be worrying about cost as well along with the technology. And this question would
test how well you manage cost. So what are the tools and techniques we can use in AWS to identify and correct identify
and know that we are paying the correct amount for the resources that we are using or how do you get a visibility of
your AWS resources running? One way is to check the billing. There's a place where you can check the top services
that were utilized. It could be free and it could be paid service as well. Top services that can be utilized. It's
actually in the dashboard of the cost management console. So that table here shows the top five most used services.
So looking at it, you can get it. All right. So I'm using lot of storage. I'm using a lot of EC2. Why is storage high?
You can go ahead and try to justify that and you will find if you are storing things that shouldn't be storing and
then clean it up. Why is compute capacity so high? Why is data transfer so high? So if you start thinking in
those levels, you'll be able to dig in and clean up unnecessary and be able to save your bill. And there are cost
explorer services available which will help you to view your usage pattern or view your spending for the past 13
months or so. And then it'll also forecast for the next 3 months. Now how much will you be using if your pattern
is like this? So that will actually help and will give you a visibility on how much you have spent, how much you will
be spending if the trend continues. Budgets are another excellent way to control cost. You can actually set up
budget. All right, this is how much I am willing to spend for this application for this team or for this month for this
particular resource. So you can actually put a budget mark and anytime it exceeds anytime it's nearing you would get an
alarm saying that well we're about to reach the allocated budget amount stuff like that. That way you can go back and
know and you know that how much the bill is going to be for that month or you can take steps to control bill amount for
that particular month. So AWS budget is another very good tool that you could use. Cost allocation tags helps in
identifying which team or which resource has spent more in that particular month. Instead of looking at the bill as one
list with no specifications into it and looking at it as an expenditure list, you can actually break it down and tag
the expenditure to the teams with cost allocation tags. The dev team has spent so much, the uh production team has
spent so much. the training team has spent more than the dev and the production team. Why is that? Now you'll
be able to you know think in those levels only if you have cost allocation tags. Now cost allocation tags are
nothing but the tags that you would put when you create a resource. So for production services you would put as a
production tag. You would create a production tag and you would associate that resources to it. And at a later
point when you actually pull up your bill, that's going to show a detailed list of this is the owner, this is the
group, and this is how much they have used in the last month. And you can move forward with your investigation and
encourage or stop users from using more services with the cost allocation tax. The other famous question is are there
any other tools or is there any other way of accessing AWS resource other than the console? console is GUI, right? So,
in other words, other than GUI, how would you use the AWS resource and how familiar are you with those tools and
technologies? The other tools that are available that we can leverage and access the AWS resource are of course
putty. You can configure Putty to access the AWS resources like log into an EC2 instance. An EC2 instance does not
always have to be logged in through the console. You could use putty to log to an EC2 instance and like the jump box
like the proxy machine and like the gateway machine and from there you can actually access the rest of the
resources. So this is an alternative to the console and of course we have the AWS CLI in any of the Linux machines or
Windows machines we can install. So that's 2 3 and 4. We can install AWS CLI for Linux, Windows also for Mac. So we
can install them and from there from your local machine we can access run AWS commands and access provision monitor
the AWS resources. The other ones are we can access the uh AWS resource programmatically using AWS SDK and
Eclipse. So these are bunch of options we have to use the AWS resource other than the console. If you're interviewed
in a company or by a company that focuses more on security and want to use AWS native services for their security
then you would come across this question. What services can be used to create a centralized logging solution?
The basic services we could use are Cloudatch logs, store them in S3 and then use elastic search to visualize
them and use Kinesis to move the data from S3 to elastic search. Right? So log management, it actually helps
organizations to track the relationship between operational and security changes and the events that got triggered based
on those logs. Instead of logging into an instance or instead of logging into the environment and checking the
resources physically, I can come to a fair conclusion by just looking at the logs. Every time there's a change, the
system will scream and it gets tracked in the cloudatch and then cloudatch pushes it to S3. Kinesis pushes the data
from S3 to elastic search and uh I can do a timebased filter and I would get an a fair understanding of what was going
on in the environment for the past 1 hour or whatever the time window that I wanted to look at. So it helps in
getting a good understanding of the infrastructure as a whole. All the logs are getting saved in one place. So all
the infrastructure logs are getting saved in one place. So it's easy for me to look at it in an infrastructure
perspective. So we know the services that can be used and here are some of the services and how they actually
connect to each other. It could be logs that belongs to one account. It could be logs that belongs to multiple accounts.
It doesn't matter. You know those three services are going to work fairly good and they're going to inject or they're
going to like suck logs from the other accounts put it in one place and help us to monitor. So as you see you have cloud
watch here that actually tracks the metrics. You can also use cloud trail if you want to log API calls as well push
them in an S3 bucket. So there are different types of log flow logs are getting captured in an instance.
Application logs are getting captured from the same VPC from a different VPC from the same account. Them are analyzed
using elastic search using the Kibbana client. So step one is to deploy the ECS cluster. Step two is to restrict access
to the ECS cluster because it's valid data. You don't want anybody to put their hands and access their data. So
restrict access to the ECS dashboard. And we could use Lambda also to push the uh data from Cloudatch to the uh elastic
search domain. And then Kibbana is actually the graphical tool that helps us to visualize the logs. Instead of
looking at log as just statements or a bunch of characters, a bunch of files, Kibbana helps us to analyze the logs in
a graphical or a chart or a bar diagram format. Again, in an interview, the interview is more concerned about
testing your knowledge on AWS security products, especially on the logging, monitoring, event management or incident
management. Then you could have a question like this. What are the native AWS security logging capabilities? Now
most of the services have their own logging in them like have their own logging like S3. S3 has its own login
and cloudfront has its own logging, RDS has its own logging, VPC has its own logging. In additional there are account
level loginins like cloud trail and AWS config services. So there are variety of logging options available in the AWS
like cloud trail config cloud front red shift logging RDS logging VPC flow logs s3 object logging s3 access logging
stuff like that so we're going to look at uh two service in specific cloud trail now this cloud trail the very
first product in that picture we just saw the cloud trail provides an very high level history of the API calls for
all the account and with that we can actually perform a very good security analysis a security analysis of our
account and these logs are actually delivered to you can configure it they can be delivered to S3 for longtime
archival and based on a particular event it can also send an email notification to us saying hey just got this error
thought I'll let you know stuff like that the other one is config service now config service helps us to understand
the configuration changes that happened in our environment and we can also set up notifications based on the
configuration changes. So it records the cumulative changes that are made in a short period of time. So if you want to
go through the lifetime of a particular resource, what are the things that happened? What are the things it went
through? They can be looked at using AWS config. All right. The other question you could get asked is if uh you know
your role includes taking care of cloud security as well then the other question you could get asked is the native
services that Amazon provides to mitigate DOS which is denial of service. Now not all companies would go with
Amazon native services. But there are some companies which uh want to stick with Amazon native services just to save
them from the headache of managing the other softwares or bringing in another tool a third party tool into managing
DOS. They simply want to stick with Amazon proprietary Amazon native services. And a lot of companies are
using Amazon service to prevent DOS denial of service. Now denial of service is if you already know what denial of
service is well and good. If you do not know then let's know it now. Denial of service is a user trying to or
maliciously making attempt to access a website or an application. The user would actually create multiple sessions
and he would occupy all the sessions and he would not let legitimate users access the servers. So he's in turn denying the
service for the user. A quick uh picture review of uh what denial of service is. Now look at it. These users instead of
making one connection, they are making multiple connections. And there are cheap software programs available that
would actually trigger connections from different computers in the internet with different MAC addresses. So everything
kind of looks legimate for the uh server and it would accept those connections and it would keep the sessions open. the
actual users won't be able to use them. So that's denying the service for the actual users. Denial of service. All
right? And distributed denial of service is uh generating attacks from multiple places you know from a distributed
environment. So that's distributed denial of service. So the tools the native tools that helps us to prevent
the denial of service attacks in AWS is cloud shield and web access firewall AWS VAF. Now they are the major ones. They
are designed to mitigate a denial of service. If your website is often bothered by denial of service, then we
should be using AWS shield or AWS WF. And there are a couple of other tools that also when I say that also does
denial of service is not their primary job, but you could use them for denial of service. Route 53's purpose is to
provide DNS. CloudFront is to provide caching. Elastic load balancer. LB's work is to provide load balancing. VPC
is to create an secure virtual private environment but they also support mitigating denial of service but not to
the extent you would get in AWS shield and AWS WF. So AWS shield and WF are the primary ones but the rest can also be
used to mitigate distributed denial of service. The other tricky question is uh this actually will test your familiarity
with the region and the services available in the region. So when you're trying to provision a service in a
particular region, you're not seeing the service in that region. How do we go about fixing it or how do we go about
using the service in the cloud? It's a tricky question and if you have not gone through such situation, you can totally
blow it away. You really need to have a good understanding on regions, the services available in those regions and
what if a particular service is not available, how to go about doing it. The answer is not all services are available
in all regions. Anytime Amazon announces a new service, they don't immediately publish them on all regions. They start
small and as in when the traffic increases, as in when it becomes more likable to the customers, they actually
move the service to different regions. So as you see in this picture within America, North Virginia has more
services compared to Ohio or compared to North California. So within North America itself, North Virginia is the
preferred one. So similarly there are preferred regions within Europe, Middle East and Africa and preferred regions
within Asia Pacific. So anytime we don't see a service in a particular region, chances that the service is not
available in that region yet, we got to check the documentation and find the nearest region that offers that service
and start using the service from that region. Now you might think well if I'm looking for a service in Asia let's say
in Mumbai and if it is not available why not simply switch to North Virginia and start using it. You could but you know
that's going to add more latency to your application. So that's why we need to check for application which is check for
region which is very near to the place where you want to serve your customers and find nearest region instead of
always going back to North Virginia and deploying your application in North Virginia. Again there's a place there's
a link in AWS.com that you can go and look for services available in different region and that's exactly what you're
seeing here and if your service is not available in a particular region switch to the other region that provides your
service the nearest other region that provides that service and start using service from there. With the uh coming
With the rise of cloud, a lot of companies have wound down their in-house monitoring teams. Instead, they want to use the monitoring the cloud provides. Many organizations, at least new startups and companies just setting up a monitoring environment, don't want the hassle of traditional NOC monitoring; they would rather leverage the monitoring available in AWS, because it covers a lot: not just availability but failures and errors, and it can also trigger emails and other notifications. So how do you actually set up monitoring for website metrics in real time in AWS? Any time you have a question about monitoring, CloudWatch should come to mind, because CloudWatch is meant for monitoring: it collects metrics and provides a graphical representation of what's going on in a particular environment at a particular point in time. CloudWatch helps us monitor applications, and with it we can watch state changes, and not only state changes but also Auto Scaling lifecycle events: any time servers are added, or the number of servers is reduced because of lower usage, very informative messages can be received through CloudWatch. CloudWatch also supports scheduled events. If you want to schedule anything, CloudWatch has events that trigger an action on a schedule, time-based rather than incident-based. Something happening and an action firing in response is incident-based; with scheduled events you can simply trigger things on a timer. CloudWatch also integrates very well with a lot of other services: with SNS (Simple Notification Service) for notifying the user or the administrator, and with Lambda for triggering an action. Any time you're designing an auto-healing environment, CloudWatch can monitor and send an email if we integrate it with SNS, or it can trigger a Lambda function based on what's happening, which in turn keeps running until the environment comes back to normal. So CloudWatch integrates well with many other AWS services. CloudWatch uses three statuses: green when everything is going well, yellow when the service is degraded, and red when the service is not available. Green is good, so we don't have to do anything. But any time there's a yellow, the setup in the picture calls a Lambda function to debug the application and fix it. And any time there's a red alert, it immediately notifies the owner of the application that the service is down, along with the report and the metrics it has collected about the service.
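To make the SNS side of this concrete, here is a minimal sketch of a CloudWatch alarm that notifies an assumed, pre-existing SNS topic when an instance's CPU stays high; the topic ARN, instance ID, and threshold are all placeholder values:

```python
# Minimal sketch: a CloudWatch alarm that publishes to an SNS topic when
# average CPU on one instance stays above 80% for two 5-minute periods.
# The topic ARN and instance ID below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```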
If the job role requires you to manage servers as well — there are roles purely on the systems side, and roles that are development plus systems, where you're responsible for both the application and the server — you might be tested with some basic questions, such as the different types of virtualization in AWS and the differences between them. The three major types are HVM (hardware virtual machine), PV (paravirtualization), and PV on HVM (paravirtualization on hardware virtual machine). Describing them is essentially describing the differences between them. HVM is fully virtualized hardware: the whole hardware is virtualized, all the virtual machines act separately from each other, and these VMs boot by executing the master boot record in the root block device. With paravirtualization, PV-GRUB is the special boot loader that boots PV AMIs. And PV on HVM is essentially a marriage between HVM and PV: it lets the operating system take advantage of the storage and network I/O enhancements available through the host.
Another good question is to name some of the services that are not region-specific. You've been taught that all services live within a region, and some within an availability zone: for example, EC2 instances and EBS volumes are tied to an availability zone, while S3 and DynamoDB are region-scoped. VPC spans both: subnets are availability-zone-specific while the VPC itself is region-specific, and so on. You may have learned it in that framing, but there are tricky questions that test how well you understand region-scoped versus global services, because some services are not region-specific at all. IAM is one: we can't have a separate IAM for every availability zone or every region, because that would mean users need one username and password for one region and a different one whenever they switch regions. That's more work and poor design; authentication has to be global. So IAM is a global service, not region-specific. Route 53, likewise, is not a region-specific service; we can't have a separate Route 53 per region. It's a global service: it's one application that users access from every part of the world, so we can't have a different DNS name for each region. If your application is a global application, the web application firewall (AWS WAF) works well with CloudFront, and neither is region-bound. WAF is a global service, and CloudFront is global too: even though you can cache content on a per-continent or per-country basis, it's still considered a global service and is not bound to any region. So when you activate CloudFront, or a web application firewall in front of it, you're activating it independently of any region or availability zone. A quick recap: IAM users, groups, roles, and accounts are global and can be used globally; Route 53 is served from edge locations and is global as well; AWS WAF, the service that protects our web applications from common web exploits, is a global service; and CloudFront, the global content delivery network (CDN), is offered at edge locations and is likewise a global, non-region-specific service.
This is another good question. If the project you're interviewing for secures its environment using NAT, by either of the two methods — NAT gateways or NAT instances — you can expect this question: what are the differences between a NAT gateway and a NAT instance? They both solve the same problem; they're not two different services trying to achieve different things. At a high level, both provide NAT for the resources behind them, but the differences show up in how they're run. A NAT gateway is a managed service from Amazon, whereas a NAT instance is managed by us, and that drives the maintenance and availability story: availability of a NAT gateway is very high, while availability of a NAT instance is lower because it runs on an EC2 instance that could fail, and if it fails we have to relaunch it ourselves; if something happens to a NAT gateway, Amazon takes care of reprovisioning it. On bandwidth, traffic through a NAT gateway can burst up to 75 Gbps, but for a NAT instance it depends on the instance type we launch; a t2.micro barely gets any bandwidth. Performance follows from that: because of the high availability and the bigger pipe, the performance of a NAT gateway is very high, while the performance of a NAT instance is average and depends on the size of the instance we pick. Billing for a NAT gateway is based on the number of gateways provisioned and the duration of use, while billing for a NAT instance is based on the number of instances, their type, and the duration of use. On security, a NAT gateway cannot have security groups assigned; it comes with its security baked in, whereas with a NAT instance the security is customizable: because it's a server managed by us, I can always go and change what is allowed and what is not. And finally, the size and capacity of a NAT gateway is uniform — it's a fixed product — whereas a NAT instance can be small or big, so the size and the load it can carry varies.
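For context on the managed option, here is a rough sketch of provisioning a NAT gateway and pointing a private route table at it with boto3; the subnet, Elastic IP allocation, and route table IDs are placeholders:

```python
# Minimal sketch: create a NAT gateway in a public subnet and route a
# private subnet's internet-bound traffic through it. All IDs are
# placeholders; the Elastic IP allocation must already exist.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb2222c",        # public subnet
    AllocationId="eipalloc-0dddeeeefff000111",  # pre-allocated Elastic IP
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available before adding routes.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

ec2.create_route(
    RouteTableId="rtb-0333444455556666a",       # private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```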
The other question you could get asked is: what is the difference between stopping and terminating an EC2 instance? You'll only be able to answer this well if you have worked in environments where instances get stopped and where instances get terminated; if you have only used a lab setup, chances are you might lose your way answering it, because it can look like both are the same. They are not. When you stop an instance, it performs a normal shutdown and simply moves to the stopped state; the attached EBS volumes remain, so it can be started again later with its data intact. But when you terminate an instance, it moves to the terminated state, the EBS volumes attached to it that are marked delete-on-termination (which includes the root volume by default) are deleted and removed, and you will never be able to recover them. That's the big difference between stopping and terminating. If you're planning to use the instance again, along with the data in it, you should only stop it; terminate an instance only if you want to get rid of it forever.
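As a quick illustration, the two operations are separate API calls; a minimal sketch with a placeholder instance ID:

```python
# Minimal sketch: stopping keeps the instance and its EBS volumes so it
# can be started again; terminating is permanent and deletes any volumes
# marked delete-on-termination. The instance ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

# Recoverable: normal shutdown, data on EBS preserved.
ec2.stop_instances(InstanceIds=[instance_id])

# Later, bring it back with the same volumes attached.
ec2.start_instances(InstanceIds=[instance_id])

# Irreversible: only do this when the instance is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```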
If you are being interviewed for an architect or junior architect position, a cloud consultant position, or even an engineering position, this is a very common question: what are the different types of EC2 instances based on cost, or based on how we pay for them? They all provide compute capacity; the types are on-demand instances, spot instances, and reserved instances. They can look the same — they all provide compute, they all provide the same kinds of hardware — but if you're trying to optimize cost in your environment, you have to be careful about which one you pick. We might think: I'll go with on-demand because I pay by the hour, which feels cheap; I can use it whenever I want and get rid of it by terminating it when I don't. That's true, but if the requirement is to run the service for one year or three years, you'll be wasting a lot of money paying on an hourly basis with on-demand. Instead you should go for reserved instances, where you reserve the capacity for the full one or three years and save a large amount. So on-demand is cheap to start with and fine if you only plan to run something for a short while, but if you plan to run it for a long time, reserved instances are what's cost-efficient. Spot instances are cheaper than on-demand, and they have their own use cases, so let's take these one by one. On-demand instances are purchased at a fixed rate per hour. They suit short-term, irregular workloads and testing and development; we shouldn't rely on on-demand for long-running production. Spot instances let users purchase EC2 capacity at a reduced price (and if we end up with unused reserved capacity, that can be sold on the Reserved Instance Marketplace). The way we buy spot is by putting in a price: this is how much I'm willing to pay. Whenever the spot price comes down to meet the price we've set, we're assigned an instance, and whenever the price shoots up past it, the instance is taken away from us. With on-demand, once we've bought the instance for that hour it stays with us, but with spot it varies with the price: meet the price and you get the instance; fail to meet it and it goes to somebody else. Spot availability is based on supply and demand in the market, and there is no guarantee you will get spot capacity at all times. That's a caveat you should be aware of before proposing to someone that you can save money with spot; it's not always going to be available. If you want spot instances to be reliably available, you need to watch the spot price history carefully: how much was it last month, how much is it this month, how much should I bid. Look at that history before you promise savings from spot. Reserved instances, on the other hand, provide cost savings when we commit for one or three years. There are three types of reserved instances — light, medium, and heavy utilization — based on the amount we pay, and the cost benefit also depends on whether you pay all upfront, partial upfront, or no upfront with the rest split into monthly payments. There are many purchase options, but overall, if you're going to run an application for the next one to three years, you should not be using on-demand; you should be using reserved instances, and that's what gives you the cost benefit.
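To make the "watch the price history" advice concrete, here is a minimal sketch of pulling recent spot prices with boto3; the instance type and platform are just example values:

```python
# Minimal sketch: inspect recent spot price history before proposing
# spot instances for a workload. Instance type and platform are examples.
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(days=7),
    MaxResults=20,
)

for price in response["SpotPriceHistory"]:
    print(price["AvailabilityZone"], price["Timestamp"], price["SpotPrice"])
```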
In an AWS interview you might sometimes be asked how you interact with the AWS environment: are you using the CLI or the console? Depending on your answer, the panelist scores you: this person is CLI-oriented, this person is console-oriented, or this person has used the AWS environment through the SDK. This question tests whether you are a CLI person or a console person, and it goes like this: how do you set up SSH agent forwarding so that you do not have to copy the key every time you log in? If you have used PuTTY, you know that to log in to an EC2 instance you have to enter the IP and port, and along with that you (or we) have to map the private key in PuTTY, and this has to be done every time. That's what we would have done in our lab environments, but in a production environment, mapping the same key again and again on every login is a hassle and is considered a blocker. So you might want to cache it, or add it permanently to your PuTTY session, so you can log in immediately and start working. In the place where you would normally map the private key, there is an option that binds your key to the SSH session: we can enable SSH agent forwarding (with PuTTY, by loading the key into its agent and allowing agent forwarding), so that the next time we log in we don't have to go through mapping the key again.
This question — what are the Solaris and AIX operating systems, and are they available with AWS? — generally gets asked to test how familiar you are with the available AMIs, with EC2, and with the EC2 hardware on offer. The first thought that comes to mind is that everything is available with AWS: I've seen Windows, Ubuntu, Red Hat, and Amazon Linux AMIs, and if I don't see my operating system there, I can always go to the Marketplace and look, and if it's not in the Marketplace, I can look in the community AMIs. There are a lot of AMIs and a lot of operating systems available, so surely I'll find Solaris and AIX. But that's not the case. Solaris and AIX are not available with AWS, because Solaris's architecture is not currently supported on the public cloud, and the same goes for AIX: it runs on POWER CPUs, not on Intel, and as of now Amazon does not provide POWER machines. This shouldn't be confused with HPC (high-performance computing); these are different hardware and different CPUs altogether that the cloud provider simply does not offer yet.
Another question you could get asked, in organizations that want to automate their infrastructure using Amazon-native services, is: how do you recover, or auto-recover, an EC2 instance when it fails? We tend to treat EC2 instances as immutable, meaning we don't spend time fixing OS-level problems: when an instance crashes — it goes into a kernel panic, or fails for any of various reasons — we don't worry about repairing it, because relaunching the instance fixes it. But what if it happens at 2:00 in the morning, or over a weekend when nobody is in the office watching those instances? You'd want to automate that, and not only for weekends and midnights; it's good general practice to automate it. So you could face the question: how do you automatically recover an EC2 instance once it fails? The answer is that we can recover the instance using CloudWatch. As you can see, an alarm threshold is set in CloudWatch, and once the threshold is met — there's an error, a failure, or the instance stops responding for a certain period (say the status check fails, or it hasn't responded to pings for two minutes, so it won't accept connections) — the alarm fires and the EC2 instance is automatically recovered. Now look at the "take this action" section under the alarm's actions. There are several options, such as "recover this instance", which relaunches the instance on healthy hardware; that's how we would recover it. The other two options are beyond the scope of the question, but they're worth knowing. One is "stop this instance", which is very useful when you want to stop instances with low utilization: somebody was working on an instance, forgot to shut it down, and won't use it again until the next morning; in between there could be twelve hours where the system sits idle, nobody uses it, and you're paying for it. You can identify such instances and stop them when CPU utilization is low, meaning nobody is using them. The other is "terminate this instance". Say you hand a system to somebody temporarily and don't need it back: you could instruct them to terminate it when they're done, but they could forget and the instance could run forever; or you could watch it yourself after the agreed time and terminate it; or, best of all, you can automate the termination. You assign a system to somebody and turn on a CloudWatch action that terminates the instance when CPU has been low for, say, two hours, or thirty minutes, meaning they've already left. So that's possible.
And if you're getting hired for a systems-side architect role, or even on the SysOps side, you could face this question: what are the common types of AMI designs, and how do they differ? There are a lot of AMI designs; the question is about the common ones and the differences between them. The common ones are the fully baked AMI, the JeOS (just enough operating system) AMI, and the hybrid AMI. Let's look at the differences. The fully baked AMI is just what the name says: fully baked, ready to use, and the simplest to deploy. It can be a bit expensive and a bit cumbersome, because a lot of work, planning, and thought has to go in before you can use the AMI; but once it's built, it's ready to go. You hand the AMI to somebody and it's ready to use, and if you want to reuse it, it's already prepared. The JeOS AMI, again as the name says (and as you can see in the picture), bakes in only part of the stack: the operating system and its bootstraps are packed properly, while security, monitoring, logging, and the rest are configured at deployment time. Not much up-front thought goes in here; the only focus is choosing the operating system and the OS-specific agents or bootstraps that go into it. The advantage is flexibility: you can choose to install additional software at deployment time, although that requires additional expertise from the person using the AMI, which is an overhead. Still, it is flexible; configurations can be changed at deployment. The hybrid AMI falls between the fully baked AMI and the just-enough-OS option: it has some features of each. As you can see, security, monitoring, and logging are baked into the AMI, while the runtime environments are installed at deployment time. This is where strict company policies go into the AMI — you must log this, you must monitor this, these are the ports that are generally open on all systems — so those sit baked into the AMI, and at deployment you still have the flexibility of choosing the runtime and the application that sit on the EC2 instance.
Another very common interview question: how can you recover, or log in to, an EC2 instance for which you have lost the key? We know that if the key is lost, we can't recover it. Some organizations integrate their EC2 instances with Active Directory — that's different: you can reset the password in AD and log in with the new password. But the specific, tricky question here is that you are using a key to log in and you've lost it. Generally companies keep a backup of the key, so you could restore from the backup, but the question says the key is lost with literally no backups at all. So how can we log in, given that we can't log in to the instance without the key? The approach is to make the instance use another key and log in with that one. Once a key is lost, it's lost forever; we won't be able to recover it, and raising a ticket with Amazon won't help — it's beyond their scope. It's only the key that's the problem: there's still valid data on the instance that you need to recover, so we focus on the key alone, change it, and that lets us log in. The step-by-step procedure: first, verify that the EC2Config service is running on the instance (if you want, you can install EC2Config beforehand, or enable it through the console with a couple of clicks). Then detach the root volume from the instance — of course, this requires a stop. Attach the root volume to another instance as a temporary volume (it can be a temporary instance launched only to fix this issue), log in to that instance, and modify the configuration file on that volume so it uses the new key. Then move the root volume back to its original position and restart the instance. Now the instance has the new key, you also hold the new key, and you can log in. That's how we go ahead and fix it.
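The API side of that volume shuffle can be sketched as below, assuming a Linux instance where the fix is adding a new public key to authorized_keys on the mounted volume (that edit itself is a manual step on the rescue instance); all IDs and device names are placeholders:

```python
# Minimal sketch of the volume-shuffle part for a Linux instance.
# All IDs and device names are placeholders. Editing authorized_keys
# (or the relevant agent/config settings) on the mounted volume is a
# manual step done while logged in to the rescue instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

locked_out = "i-0aaa1111bbbb2222c"   # instance whose key was lost
rescue     = "i-0ddd3333eeee4444f"   # temporary helper instance
root_vol   = "vol-0123456789abcdef0" # root volume of the locked-out instance

# 1. Stop the locked-out instance and detach its root volume.
ec2.stop_instances(InstanceIds=[locked_out])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[locked_out])
ec2.detach_volume(VolumeId=root_vol)
ec2.get_waiter("volume_available").wait(VolumeIds=[root_vol])

# 2. Attach it to the rescue instance as a secondary device, then log in
#    there, mount it, and add the new public key to authorized_keys.
ec2.attach_volume(VolumeId=root_vol, InstanceId=rescue, Device="/dev/sdf")

# 3. After editing, detach and move the volume back as the root device.
ec2.detach_volume(VolumeId=root_vol)
ec2.get_waiter("volume_available").wait(VolumeIds=[root_vol])
ec2.attach_volume(VolumeId=root_vol, InstanceId=locked_out, Device="/dev/xvda")
ec2.start_instances(InstanceIds=[locked_out])
```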
Now let's move on to some product-specific questions, starting with S3. A common perception is that S3 and EBS can be used interchangeably, and the interviewer wants to test your knowledge of both. It's true that EBS snapshots are stored via S3, but the two can't be used interchangeably, so you might face the question: what are some key differences between AWS S3 and EBS? The differences: S3 is an object store, meaning you can't install anything in it; you can store files, but it's not a file system you can install software on. EBS, on the other hand, presents a block device with a file system: you can install applications on it and run them. Talking about access and performance, S3 is fast, but EBS is much faster when accessed from an instance, because to reach S3 from the instance you have to go outside your VPC — S3 is an external service that doesn't sit inside your VPC — whereas an EBS volume is effectively local to the instance in the same availability zone, so access is faster. On redundancy, data in S3 is replicated across data centers, meaning across availability zones, while EBS is replicated within a data center, i.e., within one availability zone; so redundancy is higher in S3 than in EBS. And on access control, S3 can be made private or public, meaning objects can be made reachable from anywhere on the internet, whereas an EBS volume can only be accessed by the EC2 instance it's attached to: just that one instance can use it, while S3 is directly accessible.
The other question related to S3 security is: how do you give a user access to one specific bucket — the user doesn't have access to S3 at all, but needs to be granted access to a certain bucket? How do we do it? The same pattern applies to servers as well. In some cases a person is new to the team and you don't want them touching the production servers yet: they're in the production group, so by default he or she is granted access, but you specifically want to deny access to the production servers until they've matured enough to understand the process and the dos and don'ts before putting their hands on production. How do we go about it? First, we categorize our instances — these are critical instances, these are normal instances — and we do that by putting tags on them: tag them as highly critical, medium critical, or not critical at all but still production, and so on. Then you pick the users who should or should not be given access to a certain server, and you allow or deny access based on a specific tag. In other words, you use the tags: in the previous step we tagged the critical servers, and now we define that this user is not allowed to use resources carrying that tag. That's how you move forward — you allow or deny based on the tags you've applied — so in this case he or she will not be allowed onto servers tagged as critical. And the same idea applies to buckets as well; a sketch of the policy side follows below.
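As a rough illustration of the tag-based approach for EC2, here is a minimal inline policy that denies a hypothetical user the instance-level actions on anything tagged Criticality=high; the user name, policy name, and tag key/value are all assumptions made for the example:

```python
# Minimal sketch: deny a specific user instance-level EC2 actions on any
# instance tagged Criticality=high. User name, policy name, and the tag
# key/value are example choices, not fixed conventions.
import json
import boto3

iam = boto3.client("iam")

deny_critical = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Criticality": "high"}
            },
        }
    ],
}

iam.put_user_policy(
    UserName="new-team-member",
    PolicyName="deny-critical-instances",
    PolicyDocument=json.dumps(deny_critical),
)
```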
If an organization uses S3 heavily for its data storage, because of the cost and durability benefits it provides, you might get asked this question. Organizations replicate data from one region to another for additional durability and redundancy, and not only for that: they also do it for DR (disaster recovery) purposes, so that if a whole region goes down the data is still available elsewhere and can be used. Some organizations store data in different regions for compliance reasons, or to provide low-latency access to users local to that region. So when companies do replication, how do you make sure the replication is consistent — that it's not failing, that the data definitely gets transferred, and that there are logs for the replication? This matters for companies that use S3 heavily and rely fully on replication to run their business. The way we can do it is to set up a replication monitor: a set of tools used together to make sure cross-region replication is happening properly. This is how it works. On the left-hand side we have region one with the source bucket, and on the right-hand side region two with the destination bucket. An object is put in the source bucket and should be copied to the region-two bucket; the problem is that sometimes this fails and the two fall out of sync. So you connect these services together and create a cross-region replication monitor that watches your environment: CloudWatch on one end makes sure data is moving and nothing is failing, CloudWatch on the other end does the same, logs are generated through CloudTrail and written to DynamoDB, and if something fails you get notified by SMS or email through SNS. That's how we can leverage these tools and set up a cross-region replication monitor that watches your data replication.
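The replication rule itself, before any monitoring is layered on top, can be configured roughly along these lines; bucket names and the replication role ARN are placeholders, and versioning must already be enabled on both buckets:

```python
# Minimal sketch: enable cross-region replication from a source bucket to
# a destination bucket. Bucket names and the IAM role ARN are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Versioning must be enabled on the source bucket (and, with a client in
# its own region, on the destination bucket) before replication works.
s3.put_bucket_versioning(
    Bucket="my-source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
            }
        ],
    },
)
```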
A common issue companies face in VPC: we all know we can use Route 53 to resolve a name to an IP address externally from the internet, but by default, servers inside the VPC won't reach other servers using our custom DNS names; it doesn't do that out of the box. That's a problem, and there are some additional things that you, as the administrator, the architect, or whoever uses it, have to do, and that's what we're going to discuss. The question could be: the VPC is not resolving a server through DNS — you can reach it by IP but not by DNS name — what could the issue be and how do you fix it? You'll only be able to answer this if you have done it already. It's a quick and simple step: by default the VPC does not allow it — that's the default behaviour — and we have to enable DNS hostname resolution first (this is for custom DNS, not the default DNS that comes with the VPC). Once we enable DNS hostnames, names actually resolve. Say I want to connect to server1.simplylearn.com: by default that's not allowed, but if I enable this option, then I will be able to connect to server1.simplylearn.com.
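The two VPC attributes involved can be flipped with boto3 — one attribute per call — as in this minimal sketch, where the VPC ID is a placeholder:

```python
# Minimal sketch: turn on DNS resolution and DNS hostnames for a VPC.
# modify_vpc_attribute accepts only one attribute per call, hence two calls.
# The VPC ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"

ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```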
If a company has VPCs in different regions, with a head office in a central place and the rest as branch offices connecting to the head office — for access, for saving data, for reaching certain files, and so on — they essentially mimic a hub-and-spoke topology: the VPC sits in a centrally accessible region, and the branch offices in other locations connect to that central VPC. The question is: how do you connect multiple sites to a VPC and make them communicate with each other? By default that doesn't happen; we know VPCs have to be peered with each other before their resources are reachable. Look at the picture: we have customer networks, or branch offices, in different places, and they each connect to a VPC. So what we've achieved is that those remote offices connect to the VPC and talk to it, but they can't talk to each other. The requirement, though, is that they should be able to talk to each other without having direct connections between them, which means their traffic has to come into the VPC and then reach the other customer network in Los Angeles or New York. That's possible with some architecting in the cloud, using VPN CloudHub. Look at the dotted lines: they're what allow the corporate networks to talk to each other through the VPC. Again, by default it doesn't happen; CloudHub is the architecture we should use to make it happen. And what's the advantage? The central office, the headquarters data center behind the VPC, retains control over who talks to whom and what traffic can be routed to the other offices; that centralized control sits with the VPC.
The other question you could get asked is to name and explain some security products and features available in VPC. The VPC itself is a security boundary; it provides security to the application. But how do you secure the VPC itself, and what does it give you to secure what runs inside it? Access into a VPC is restricted through network access control lists — that's one security feature. A VPC has security groups that protect instances from unwanted inbound and outbound traffic, and network ACLs that protect subnets from unwanted inbound and outbound access. And we can capture flow logs in a VPC, which record the incoming and outgoing traffic for later analysis: what the traffic pattern is, how it behaves, and so on. So those are some security products and features available in VPC. Now, how do you monitor a VPC? The VPC is a very important concept and a very important service: almost everything sits in a VPC — most services do, except Lambda, S3, DynamoDB, and a couple of others — for security reasons. So how do you monitor your VPC and gain visibility into it? We can gain visibility using VPC flow logs; that's the basic service. As you can see, it captures what's allowed and what's not — which IPs were allowed, which were rejected — and we can gather that and use it for analysis. The other tools are CloudWatch and CloudWatch Logs, which cover the data-transfer side. In other words, flow logs tell you who was allowed in and who wasn't, while CloudWatch gives you information about data transfer: how much data is moving. We can pick out unusual transfers — a sudden spike in the graph, or something happening at midnight on a regular basis that you weren't expecting, is suspicious; it could be legitimate backups, or it could be malicious activity — and that's what you learn by looking at CloudWatch Logs and the CloudWatch dashboard.
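A minimal sketch of enabling flow logs for a VPC into CloudWatch Logs; the VPC ID, log group name, and IAM role ARN are placeholders:

```python
# Minimal sketch: publish VPC flow logs to a CloudWatch Logs group.
# The VPC ID, log group name, and delivery role ARN are placeholders;
# the role must allow the flow logs service to write to the log group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",                      # ACCEPT, REJECT, or ALL
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```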
Now let's talk about multiple-choice questions. When going for an interview, you might sometimes find that the company conducts an online test and, based on the score, puts you in front of a panelist and takes it forward from there. So we thought we'd include multiple-choice questions to help you handle that situation better if you come across it. When you find yourself in such a situation, the key to clearing it is to understand the question properly — read between the lines, as they say. There can be a big paragraph of three or ten lines, and you really have to understand what the question is about before trying to answer it. That's thumb rule number one. The second rule is to compare and contrast the services mentioned, or compare and contrast the answers: you can usually weed out one or two options easily and be left with only two to decide between. That saves time and adds precision. So: one, read between the lines; two, compare and contrast the services so you can weed out the wrong ones. Let's try answering this question. Suppose you are a game designer and you want to develop a game with single-digit-millisecond latency. Which of the following database services would you choose? The key points: it must be a database, it talks about millisecond latency, and it's a game — possibly a mobile game — that you're designing. Let's go through the options. RDS is a database for sure — is it good for game design? We'll come back to that. Neptune is Amazon's graph database service, so that's out of the equation. Snowball is storage, really a data-transport appliance, so that's out as well. The tie is between RDS and DynamoDB. RDS is a platform-as-a-service database offering cost-efficient, resizable capacity, but it's a SQL database with fairly rigid tables; it's good for banking and similar applications but not really suited to gaming. So the remaining option, DynamoDB, is the right answer: DynamoDB is a flexible NoSQL, key-value database service that provides single-digit-millisecond latency at any scale. So the right answer is DynamoDB.
Let's look at the next question. If you need to perform real-time monitoring of AWS services and get actionable insights, which service would you use? Let's go through the options against "real-time monitoring". Firewall Manager, as the name says, is not really a monitor; it's a manager that manages multiple firewalls. AWS GuardDuty is a threat-detection service: it does continuously monitor our environment, but it monitors for threats, and only threats. CloudWatch is a service that tracks metrics, monitors the environment, gives us system-wide visibility, and also stores logs — so at this point it looks like the right answer, though we have one more option: EBS. EBS, spelled out, is Elastic Block Store — block storage — so that's easily out. Of the four, one is a manager and one finds threats; the only overlap between GuardDuty and CloudWatch is the word "monitoring", so you could easily find yourself slipping toward GuardDuty, but remember that GuardDuty only gives security insight, not AWS service insight. CloudWatch is the service that gives a system-wide, account-wide view: it has numerous metrics and gives very good insight into how a service is performing, be it CPU, RAM, network utilization, or connection failures. CloudWatch is the service that lets us perform real-time monitoring and get actionable insights on the services.
All right, let's talk about this 33rd question. As a web developer, you are developing an app especially for the mobile platform — note the explicit mention of mobile, which already filters out a lot of services. Which of the following lets you add user sign-up, sign-in, and access control to your web and mobile app quickly and easily? Reading between the lines, this is all about sign-up and sign-in on a mobile platform. We have four options: AWS Shield, Amazon Macie, Amazon Inspector, and Amazon Cognito. Let's weed out the ones that aren't relevant. AWS Shield is a service that provides DDoS mitigation — denial-of-service protection — a security feature. Amazon Macie is again a security service, one that uses machine learning to automatically discover and classify data; it's about protecting data and doesn't come close to signing users in on a mobile platform. Amazon Inspector does have something to do with apps, so at first glance it looks relevant: it helps improve the security and compliance of the applications we deploy in the cloud. The last one, Cognito, is a service that lets administrators control access to web and mobile apps, and it handles sign-up and sign-in for mobile and web apps. That very much looks like the answer, and it is: Cognito is the service that gives web and mobile apps sign-up and sign-in, and gives the administrator access control over the users who will be using the mobile and web app.
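For a feel of what that looks like from code, here is a minimal sign-up call against a hypothetical Cognito user pool app client; the client ID, username, password, and email are placeholders, and the sketch assumes an app client without a client secret:

```python
# Minimal sketch: register a user against a Cognito user pool app client.
# The client ID, username, password, and email are placeholders; an app
# client with a secret would also need a SecretHash.
import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

cognito.sign_up(
    ClientId="example-app-client-id",
    Username="player_one",
    Password="CorrectHorse!42",
    UserAttributes=[{"Name": "email", "Value": "player_one@example.com"}],
)
```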
All right, how about this question? You are a machine learning engineer looking for a solution that will discover sensitive information that your enterprise stores in AWS and then use NLP to classify that data and provide business-related insights. Which of the following services would you choose? One of the listed services will meet the requirement: a service with machine learning behind it that discovers information in your storage and classifies the data by sensitivity. Which service is that? Firewall Manager, as the name says, is a manager, and AWS IAM, spelled out, is Identity and Access Management — nothing to do with identifying and managing sensitive data — so the first two are out of the equation. Then there's Amazon Macie; we already had a quick description of Macie as a security service that uses machine learning, which looks promising: it discovers and classifies sensitive information, and it doesn't stop there — it goes further and protects that sensitive data. Macie looks right, but we still have one more option: CloudHSM. CloudHSM is also a security service, so it looks plausible too, but it's for generating encryption keys and safeguarding data. So it's only half a match — a security service that helps protect data — whereas Macie is right on the spot: a machine learning service that classifies data and also protects it. The answer to this question is Amazon Macie. Hopefully you're getting a feel for how this works: first apply the thumb rule, identify what's actually being asked, read between the lines, then try to find the service that meets the requirement. You find it by first ruling out the wrong ones: recall everything you've learned about each service, see how well it matches the hints you've picked up, weed out the ones that don't, and you'll end up with just two to decide between, at which point it becomes easy. Pick the answer, submit it, and move on to the next question in your interview.
All right, so how about this one? You are a system administrator in a company that runs most of its infrastructure on AWS. You are required to track your users and keep an eye on how they are authenticated, and you wish to create and manage AWS users and use permissions to allow and deny their access to AWS resources. That's the problem statement: first give users permissions, then track their usage. Let's see which service achieves it. IAM is a service where, by looking at the permissions, we can determine whether a user or group has access to a service or not, so it helps us keep track of who can and cannot use a given service — it looks like a fit, but we have three other options. AWS Firewall Manager, as the name says, is a firewall manager: it helps us manage multiple firewalls, simple as that. Shield is a service used to protect against denial-of-service and distributed denial-of-service attacks. And API Gateway is a service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs — it's entirely on the API side, with very little to do with users and how you authenticate them. You can tell that from the name alone: API Gateway is about APIs, whereas AWS IAM, spelled out, is Identity and Access Management, which pretty much matches the problem statement. AWS Identity and Access Management is the right answer.
All right, let's look at this one. If you want to allocate various private and public IP addresses in order to make them communicate with the internet and with other instances, which service would you use? So the service in question deals with public and private IP addresses and helps us allow and deny connections to the internet and to other instances. Let's pick the service that achieves that. Route 53 is a DNS service; it's not used to allow or deny traffic, so it doesn't fit. A VPC does use public and private IP addresses, and the security controls in a VPC — security groups, network ACLs, and the routing table — are what allow or deny a connection to a particular IP address or service inside or outside the VPC. So it looks like the answer, but let's check the remaining options in case something matches even more closely. API Gateway, we know, is a managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs; it's entirely about APIs, not IP addressing. CloudFront, we know, is a content delivery network providing a global distribution of servers where content — video, bulk media, or anything else — can be cached locally so users can access and download it quickly. So after looking at all four, VPC looks like the right answer, and in fact it is: a VPC gives you public and private IP addressing, and it can allow or deny connections based on its security groups, access control lists, and routing tables. The right answer is VPC.
All right, how about this one? This platform-as-a-service database offering provides us with cost-efficient and resizable capacity while automating time-consuming administrative tasks. The question is quite clear: we're looking for a DB service, one that automates some of the time-consuming tasks and is resizable at the same time. Let's consider Amazon Relational Database Service: it's a database, which matches; it can be resized as and when needed; and it automates a lot of the time-consuming administrative work — looks like a fit so far. Next, ElastiCache: does it match? ElastiCache is a caching service, an in-memory data store that helps achieve high throughput and low latency; it's not a full-blown database, and it doesn't come with Amazon-provided automation of administrative tasks in that sense. Yes, we can resize the capacity as needed, but it's not a database, so it's out of the equation. A VPC isn't resizable in this sense — once designed, its addressing is fixed — so that's out, and Amazon Glacier is storage, not a database, so that's out as well. The tie is really between Amazon RDS and Amazon ElastiCache, because both serve database workloads, but ElastiCache only assists a database rather than being one, so the answer is Amazon Relational Database Service: it's the platform-as-a-service option, the one that can be resized, and the one that automates the time-consuming administrative tasks.
All right, let's talk about this one. Which of the following is a means of accessing human researchers or consultants to help solve a problem on a contractual or temporary basis? Read the question again: it's essentially about assigning tasks to, or hiring, human experts for a temporary job. Let's look for that kind of service among the four listed. Amazon Elastic MapReduce is a framework service that makes it easy and cost-effective to analyze large amounts of data, but it has nothing to do with accessing human researchers. Next, Amazon Mechanical Turk: it's a web service that provides a human workforce — that's its definition. For example, automation is good, but not everything can be automated: for something to qualify for automation it has to be a repetitive task; a one-time task isn't worth the time and money you'd spend automating it — you could just do it manually. And anything that requires judgment, or is a special case, needs a human: automation handles repetitive, precise work against scenarios that have been anticipated, but a new scenario that needs appropriate handling requires human effort. So we could hire researchers and consultants to help solve a problem using Amazon Mechanical Turk. The other two options are out of the equation: DevPay is a payment system from Amazon, and multi-factor authentication, as the name says, is an authentication mechanism. So the right answer is Amazon Mechanical Turk.
All right, this sounds interesting; let's look at this one. This service is used to make it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Which AWS service is it? So it's a service to deploy, manage, and scale containerized applications — it deals with containers — and it should work with Kubernetes, the container orchestration system. The first option, Amazon Elastic Container Service, looks like the one: the name itself carries the relation we're looking for, and ECS is a highly scalable, high-performance container orchestration service. (Strictly speaking, managed Kubernetes on AWS is Amazon EKS, the Elastic Kubernetes Service, but among the options listed here, ECS is the intended answer.) Let's look at the others. AWS Batch is a service that enables IT professionals to schedule and execute batch processing — the name says as much. Elastic Beanstalk also helps us deploy, manage, and scale, but it does so with EC2 instances rather than containerized workloads, so it's out of the equation. Would Lightsail be a better fit than Elastic Container Service? Lightsail is a virtual private server offering: it comes with predefined compute, storage, and networking capacity — it's a server, not a container service — so at this point it's out of the equation as well. So the answer is Amazon Elastic Container Service: it's the option that lets us easily deploy, manage, and scale container workloads and orchestrate the containers.
All right, how about this one? This service lets us run code without provisioning or managing servers. No servers, just run code. Select the correct service from the options below. The first option is Amazon EC2 Auto Scaling: EC2 is Elastic Compute Cloud, which is a server, and Auto Scaling is a service that helps us scale those servers, so that's out of the equation. AWS Lambda is an event-driven serverless computing platform; Lambda runs code in response to the events it receives and automatically manages the compute resources required for that code. As long as we have uploaded correct code and set up the events that map to it, it runs seamlessly. So it looks like it could be the answer, because Lambda runs code and we don't have to manage servers; it manages them by itself. But we can't conclude yet, since there are two more services to talk about. AWS Batch is a service that enables IT professionals to run batch jobs, as we know. And Amazon Inspector is a service that helps us identify security issues and align our application with compliance requirements; that's not what the question asks. The requirement in the question was to run code without provisioning or managing servers, so without any more room for confusion, the right answer is AWS Lambda.
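To make that concrete, here is a minimal sketch of what "running code without managing servers" looks like in practice. It assumes a hypothetical function wired to an S3 "object created" event; the bucket and key handling is illustrative, not something from the transcript.

```python
# Minimal AWS Lambda handler sketch (Python runtime), assuming the function
# is subscribed to S3 "object created" events. Names are illustrative.
import json

def lambda_handler(event, context):
    # Lambda passes the triggering event in; there are no servers to provision or patch.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    # The return value is reported back to the invoking service.
    return {"statusCode": 200, "body": json.dumps("processed")}
```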
I'm very excited that you're watching this video, and I'm equally glad that we were able to provide a second part of AWS interview questions. All right, let's get started. In an environment with a lot of infrastructure automation, you'll be asked this question: how can you add an existing instance to an Auto Scaling group? This comes up when you have taken an instance out of the Auto Scaling group to troubleshoot, to fix a problem, or to look at logs, or when you have suspended Auto Scaling; you need to re-add that instance to the Auto Scaling group before it takes part in scaling again and before the group counts it as one of its own. It's not an automatic procedure: when you remove an instance, it doesn't get re-added by itself. I've worked with clients whose developers were managing their own environment, and they had trouble adding an instance back to the Auto Scaling group; irrespective of what they tried, the instance was not getting added, and the fixes they applied were not being reflected in the Auto Scaling group. So, like I said, it's not a single-click procedure; there are a few steps to follow. The first is in the EC2 console: under Actions for the specific instance, there's an option called Attach to Auto Scaling Group. If you have multiple Auto Scaling groups in your account or in the region you're working in, you'll be presented with all of them; say you have five Auto Scaling groups for five different applications, you'll be shown those five groups. You then select the appropriate Auto Scaling group and attach the instance to it. While adding it to the Auto Scaling group, you can also change the instance type if needed; sometimes there's a requirement to move to a better instance type or family, and you can do that at this point. After that, you have completely added the instance back to the Auto Scaling group. So, as you can see, adding an instance back to an Auto Scaling group is a multi-step process, not a single click.
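For environments that script this instead of clicking through the console, here is a minimal sketch of the same attach step using boto3. The instance ID and group name are placeholders, and it assumes the group's maximum size can accommodate one more instance.

```python
# Attach an existing EC2 instance to an Auto Scaling group (boto3 sketch).
# "i-0123456789abcdef0" and "web-asg" are placeholder values.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.attach_instances(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="web-asg",
)

# After attaching, the group treats the instance like any other member,
# so confirm it shows up in the group's instance list.
response = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["web-asg"]
)
for group in response["AutoScalingGroups"]:
    print([i["InstanceId"] for i in group["Instances"]])
```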
In an environment that deals with migrating instances, applications, or VMs into the cloud, or if the project you're going to work on involves a lot of migrations, you could be asked this question: what factors will you consider while migrating to Amazon Web Services? The first one is cost. Is it worth moving the instance to the cloud, given all the additional bells and whistles available there? Is the application going to use them? Is moving to the cloud beneficial to the application, and to the users who will be using it, in the first place? That's one factor to think about, and it includes the cost of the infrastructure and the ability to match demand and supply transparently. Is the application in high demand? Would it be a big loss if it became unavailable for some time? These things need to be considered before we move an application to the cloud. Then, does the application need to be provisioned immediately; is there urgency? If the application needs to go online and hit the market immediately, that's a reason to move to the cloud, because on premises, procuring infrastructure, bandwidth, switch ports, servers, software, and the related licenses takes time, at least a couple of weeks, before you can bring up a server and launch an application on it. If the application cannot wait, since waiting means lost productivity, we would want to launch instances immediately and put the application on top of them; in that case it's a candidate for moving to the cloud. Likewise, if the product you're launching requires updated hardware all the time, that's not really possible on premises, where we often end up dealing with legacy infrastructure; cloud providers, on the other hand, are constantly upgrading their hardware, because that's how they stay competitive. If your application would benefit from that constant upgrading, with the hardware, software versions, and licensing kept as current as possible, it's a candidate for the cloud. The same goes for applications that cannot tolerate risk: if the application is very sensitive to failures and closely tied to the company's revenue, and you don't want to take a chance on seeing it fail and the revenue drop, it's a candidate for moving to the cloud. Finally, there's business agility. By moving to the cloud, at least half of the responsibility is taken on by the provider, in this case Amazon: if hardware fails, Amazon makes sure it's fixed immediately, and there are notifications we can set up so we're aware the moment something breaks and can jump in and fix it. So the responsibility is now shared between Amazon and us, and if you want that benefit for your application, your organization, or the product you're launching, then it needs to be moved to the cloud so you can get that benefit.
The other question you could get asked is: what are RTO and RPO in AWS? They are essentially disaster-recovery terms; when you're planning for disaster recovery, you cannot avoid talking about RTO and RPO. What's the RTO and RPO in your environment, or how do you define them, are common follow-up questions. RTO is the recovery time objective: the maximum time the company is willing to wait for recovery to finish when a disaster strikes. RTO looks forward: how much time will it take to fix things and bring everything back to normal? RPO, on the other hand, is the recovery point objective: the maximum amount of data loss the company is willing to accept, measured in time. RPO always relates to backups and how frequently they are taken, because when an outage happens you can always go back to the latest backup, and whatever was written after that backup is what you stand to lose. So RPO is the acceptable amount of data loss: if the company wants an RPO of 1 hour, you should be planning on taking backups every hour; if the RPO is 12 hours, you should be taking backups every 12 hours. That's how RPO and RTO drive disaster-recovery planning.
The fourth question you could get asked is: if you'd like to transfer a huge amount of data, which is the best option among Snowball, Snowball Edge, and Snowmobile? Again, this gets asked if the company deals with a lot of data transfer or data migration into the cloud, and I'm talking about huge amounts of data, data in petabytes; the Snowball family deals with petabyte-sized data migrations. There are three options available as of now. AWS Snowball is a data transport solution for moving high volumes of data into and out of a specified AWS region. AWS Snowball Edge adds computing functions on top of that: Snowball is simple storage and movement of data, while Snowball Edge has compute capability attached to it. Snowmobile, on the other hand, is an exabyte-scale migration service that allows us to transfer up to 100 petabytes, which is about 100,000 terabytes. So depending on the size of the data we want to transfer from our data center to the cloud, we can use any of these three services.
Let's talk about some CloudFormation questions. This is a classic one: how is AWS CloudFormation different from AWS Elastic Beanstalk? On the surface they can look the same; with both of them you don't go through the console or the CLI provisioning resources one by one, they provision resources for you. But underneath they are different services and they aid different needs, and knowing that will help you answer this question a lot better. Here's the difference you would explain to the interviewer or panelist. CloudFormation helps you describe and provision all the infrastructure resources in your cloud environment. Elastic Beanstalk, on the other hand, provides a simple environment to which we can deploy and run applications. CloudFormation gives us infrastructure; Elastic Beanstalk gives us a small, contained environment in which to run our application. CloudFormation supports the infrastructure needs of many different types of applications, such as enterprise applications, legacy applications, and any new, modern application you want to run in the cloud. Elastic Beanstalk is, in contrast, a combination of developer tools that help manage the lifecycle of a single application. So, in short, CloudFormation manages the infrastructure as a whole, and Elastic Beanstalk manages and runs a single application in the cloud. And if the company you're getting hired into uses CloudFormation, or any infrastructure-as-code service, to manage their infrastructure, then you will definitely face this question.
What are the elements of an AWS CloudFormation template? The template is written in JSON or YAML, and it has four or five basic elements: parameters, outputs, data, resources, and the format version of the template. A parameter lets you specify, for example, the type of EC2 instance or the type of RDS instance you want; EC2 and RDS are the umbrella resources, and the parameters within them carry the specific details of that EC2 or RDS service. That's what parameters are in a CloudFormation template. The next element is outputs. For example, if you want to output the name of an S3 bucket that was created, or the name of an EC2 instance, or of any other resources that have been created, then instead of digging through the template or navigating the console to find the resource names, we can have them printed in the Outputs section; you can simply go and look there to see all the resources created through the template. That's what output values do in a CloudFormation template. Then we have resources. The Resources section defines which cloud components will be created through this CloudFormation template: an EC2 instance is a resource, RDS is a resource, an S3 bucket is a resource, an Elastic Load Balancer is a resource, a NAT gateway is a resource, a VPC is a resource. All these components are declared in the Resources section. And then we have the version. The format version identifies the capabilities of the template; we just need to make sure it's the latest version, which is 2010-09-09. You'll find that version number at the top of the CloudFormation template, and it defines the capabilities of the template, so just make sure it's the latest all the time.
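As a rough illustration of those elements, here is a minimal template sketched as a Python dictionary and submitted with boto3; the stack, parameter, resource, and output names are made up for the example and are not from the transcript.

```python
# Minimal CloudFormation template sketch showing the format version,
# Parameters, Resources, and Outputs sections. All names are illustrative.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        # Declared only to show the Parameters section; unused parameters are allowed.
        "InstanceTypeParam": {
            "Type": "String",
            "Default": "t3.micro",
            "Description": "EC2 instance type to launch",
        }
    },
    "Resources": {
        "DemoBucket": {"Type": "AWS::S3::Bucket"}
    },
    "Outputs": {
        "BucketName": {
            "Value": {"Ref": "DemoBucket"},
            "Description": "Name of the bucket created by this stack",
        }
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="demo-stack",            # placeholder stack name
    TemplateBody=json.dumps(template),
    OnFailure="ROLLBACK",              # roll back if any resource fails to create
)
```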
Still talking about CloudFormation, this is another classic question: what happens when one of the resources in a stack cannot be created successfully? If a resource in a stack cannot be created, CloudFormation automatically rolls back and terminates all the resources that were created by that template. So if we had created, say, 10 resources through the template and the 11th resource fails, CloudFormation rolls back and deletes the 10 resources it created previously. This is very useful when CloudFormation cannot go forward, for example because we have hit the Elastic IP limit: the Elastic IP limit per region is five, and if you have already used five IPs and your template is trying to allocate three more, you've hit that soft limit, and until it's raised with Amazon, CloudFormation will not be able to launch the additional resources and IPs, so it cancels and rolls everything back. The same is true for a missing EC2 AMI: if an AMI is referenced in the template but is not actually present, CloudFormation searches for it, and because it's not there, it rolls back and deletes all the resources it created. That's what CloudFormation does: if it sees a failure, it simply rolls back everything it created, and this behavior simplifies system administration and the layered solutions built on top of AWS CloudFormation. At any point we know there are no orphan resources left behind because something didn't work or a CloudFormation run was interrupted; if CloudFormation launches resources and then fails, it comes back and deletes what it created, so we can be sure there are no orphan resources in our account.
Now let's talk about some questions on Elastic Block Store. Again, if the environment deals with a lot of automation, you could be thrown this question: how can you automate EC2 backups using EBS? It's actually a six-step process. To automate EC2 backups, we need to write a script that drives the steps below using the AWS API. First, get the list of instances; then the script should connect to AWS through the API and list the Amazon EBS volumes that are attached locally to each instance; then it needs to list the snapshots of each volume and make sure they are present; then it should apply a retention period to the snapshots, because over time older snapshots become irrelevant: once you have, say, the 10 latest snapshots, anything older than those 10 is no longer needed, because the latest coverage is enough. The fifth step is to create a new snapshot of each volume, and the final step is to delete the old snapshots: any time a new snapshot is created, the oldest one beyond the retention period should go away. So we need to include lines in our script that make sure it deletes the snapshots that are older than the retention period we specify.
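A rough boto3 sketch of those steps might look like the following; the retention period and the decision to prune every self-owned snapshot are assumptions for the example, not values from the transcript.

```python
# Sketch: snapshot every EBS volume attached to running instances,
# then prune snapshots older than a retention period. Values are illustrative.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
RETENTION_DAYS = 7  # assumed retention period

# Steps 1-2: list instances and the EBS volumes attached to them.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

volume_ids = []
for reservation in reservations:
    for instance in reservation["Instances"]:
        for mapping in instance.get("BlockDeviceMappings", []):
            if "Ebs" in mapping:
                volume_ids.append(mapping["Ebs"]["VolumeId"])

# Step 5: create a fresh snapshot of each volume.
for volume_id in volume_ids:
    ec2.create_snapshot(
        VolumeId=volume_id,
        Description=f"automated backup of {volume_id}",
    )

# Steps 3-4 and 6: list our snapshots and delete the ones past the retention period.
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
for snapshot in snapshots:
    if snapshot["StartTime"] < cutoff:
        ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])
```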
Another question you could see in an interview, whether it's a written test, an online interview, a telephone interview, or a face-to-face interview, is: what's the difference between EBS and the instance store? Let's talk about EBS first. EBS is persistent storage; the data in it can be restored at a later point. When we save data in EBS, the data lives beyond the lifetime of the EC2 instance: we can stop the instance and the data is still present in EBS, and we can move an EBS volume from one instance to another and the data will simply be there. So EBS is persistent storage compared with the instance store. The instance store, on the other hand, is temporary storage that is physically attached to the host machine. EBS is external storage, while the instance store is locally attached to the host of the instance. We cannot detach an instance store from one instance and attach it to another, but we can do that with EBS, and that's a big difference. One is persistent, and the other, the instance store, is volatile: data in an instance store is lost if the disk fails or the instance is stopped or terminated. So the instance store is only good for cache or temporary data; if you want to store data permanently, you should be using EBS and not the instance store.
Still talking about storage, this is another classic question: can you take backups of EFS the way you do with EBS, and if so, how? The answer is that we can take EFS-to-EFS backups. EFS does not support snapshots the way EBS does; snapshots are simply not an option for EFS, the Elastic File System. We can only back up from one EFS file system to another, and this backup solution is there to recover from unintended changes to or deletions of the EFS data. It can be automated: data that we store in EFS can be automatically replicated to another EFS, and if the first file system goes down, gets deleted, or is otherwise interrupted, we can recover the data from the other EFS and bring the application back to a consistent state. Achieving this is not a one-step configuration; there is a series of steps involved before we have an EFS-to-EFS backup. The first is to sign in to the AWS Management Console and launch the EFS-to-EFS backup solution. From there we use the region selector in the console navigation bar to select the region we want to work in, and then make sure we have selected the right template; the templates cover the kind of backup you want, for example granular or incremental backups. Then we give the solution a name for the kind of backup we have created, review all the configuration, and click save. From that point onwards the data is copied, and any additional data that we put in is copied and replicated as well. Now you have an EFS-to-EFS backup.
This is another classic question in companies that deal with a lot of data management. There are easy options for creating snapshots, but deleting snapshots is not always a one-click or single-step configuration, so you might face a question like: how do you auto-delete old snapshots? The procedure goes like this. As a best practice, we take snapshots of EBS volumes, and all snapshots get stored in S3, as we know by now. We can then use AWS Ops Automator to handle snapshots automatically. The Ops Automator solution allows us to create, copy, and delete EBS snapshots, and there are CloudFormation templates available for it. The template scans the environment and takes snapshots, it can copy snapshots from one region to another if you want, say for a DR environment, and, based on the retention period we configure, it deletes the snapshots that are older than that retention period. So managing the snapshot lifecycle is made a lot easier by this Ops Automator CloudFormation template.
Moving on to questions about Elastic Load Balancing, this could again come up in an interview: what are the different types of load balancers in AWS, what are their use cases, and how do they differ? As of now there are three types of load balancers available in AWS. The first is the Application Load Balancer. Just like the name says, it works at the application layer and deals with HTTP and HTTPS requests, and it supports path-based routing: for example, simplilearn.com/some-page and simplilearn.com/another-page can be directed to different targets based on the path after the slash in the URL. It also supports port-based routing, for example :8080, :8081, or :8090, so it can make routing decisions based on the port as well. That's the Application Load Balancer. Then we have the Network Load Balancer, which makes routing decisions at the transport layer. It's faster because it has much less to work with: it operates at a lower OSI layer, so it has far less information to inspect than at the application layer, and it can handle millions of requests per second. After the load balancer receives a connection, it selects a target group for the default rule using a flow-hash routing algorithm; it does simple routing rather than path-based or port-based routing, and because of that it's faster. And then we have the Classic Load Balancer, which is being phased out as we speak; Amazon is discouraging people from using it, but there are still companies on it, typically the early adopters who moved to AWS when the Classic Load Balancer was the first, and only, load balancer available. It supports the HTTP, HTTPS, TCP, and SSL protocols, and it has a fixed relationship between the load balancer port and the container port. Initially we only had the Classic Load Balancer, and at some point Amazon decided that instead of having one load balancer address every type of traffic, there would be two load balancers derived from the classic one: one specifically addressing application requirements and one specifically addressing network requirements, called the Application Load Balancer and the Network Load Balancer. That's how we ended up with the different load balancers. Still on load balancers, another classic question is: what are the uses of the various load balancers in AWS Elastic Load Balancing? We just covered the three types. The Application Load Balancer is used when we need flexible application management and TLS termination; the Network Load Balancer when we require extreme performance and load balancing on static IPs for the application; and the Classic Load Balancer is the older option for people still running their environment on the EC2-Classic network, which is what existed before VPC was created. So those are the three types of load balancers and their use cases.
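For the path-based routing the Application Load Balancer supports, a rule can be added to a listener roughly along these lines; the listener ARN, target group ARN, and path pattern are placeholders, so treat this as a sketch rather than a full setup.

```python
# Sketch: add a path-based routing rule to an existing ALB listener.
# The ARNs and the /api/* pattern are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/demo/...",
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "Values": ["/api/*"]},
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api/...",
        }
    ],
)
```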
Let's talk about some of the security-related questions you would face in the interview. When talking about security and firewalls in AWS, we cannot avoid discussing WAF, the web application firewall, and you will definitely find yourself being asked how you can use AWS WAF to monitor your AWS applications. WAF, or web application firewall, protects our web applications from common web exploits, and it helps us control which traffic from which source should be allowed to reach the application and which should be blocked. With WAF we can also create custom rules that block common attack patterns: a banking application sees certain types of attacks, while a simple content management or data storage application sees a different type, so based on the application we can identify a pattern and create rules that block that attack. WAF can be used in three modes: allow all requests, block all requests, or count the requests that match a new policy, so it's also a monitoring and management tool that counts all the requests matching a particular policy we have created. Some of the characteristics we can match on in AWS WAF are the origin IPs and the strings that appear in the request: we can allow or block based on the origin IP, allow or block based on strings that appear in the request, and allow, block, or count based on the origin country or the length of the request. We can also block and count on the presence of malicious scripts in a connection, allow or block on particular request headers, and count or block on the presence of malicious SQL code in a connection that wants to reach our application.
Still talking about security: what are the different AWS IAM categories we can control? Using AWS IAM, we can do the following. One, create and manage IAM users, and once the user base gets bigger, create and manage them in groups. We can also use IAM to manage security credentials, such as setting the complexity of passwords, adding additional authentication like MFA, rotating passwords, and resetting passwords. And finally, we can create policies that grant access to AWS services and resources. Another question you will see is: what policies can you set for your users' passwords? Some of the policies we can set are the minimum length and the complexity of the password, for example requiring at least one number or one special character. We can require specific character types, including uppercase letters, lowercase letters, numbers, and non-alphanumeric characters, so it becomes very hard for somebody else to guess the password and try to break in. So we can set the length of the password, we can set its complexity, and we can set automatic expiration, so that after a certain time the user is forced to create a new password and the password doesn't become stale, old, and easy to guess. We can also require that users contact the admin when their password is about to expire. That way you keep a hold on how users set their passwords, whether the passwords have good complexity and meet company standards; there are quite a few things we can control and set for users when they are setting or resetting their passwords.
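Those account-level password rules can be set through the console or, as a rough sketch, through the IAM API; the specific numbers below (length, expiry, reuse) are assumptions for the example.

```python
# Sketch: set an account-wide IAM password policy. The values are illustrative.
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=12,          # assumed minimum length
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    MaxPasswordAge=90,                 # force a new password every 90 days
    PasswordReusePrevention=5,         # remember the last 5 passwords
)
```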
Another question that could be posed in an interview, to gauge your understanding of IAM, is: what's the difference between an IAM role and an IAM user? Let's start with the IAM user, the simpler of the two, and then go to the more involved one. An IAM user has permanent, long-term credentials and is used to interact directly with AWS services. An IAM role, on the other hand, is an IAM entity that defines a set of permissions for making AWS service requests; a role provides temporary credentials, and trusted entities such as IAM users, applications, or AWS services assume the role. When an IAM user is given a permission, it stays with that user, but with roles we can grant permissions to applications, to users in the same account or in a different account, to a corporate identity, and to services like EC2, S3, RDS, VPC, and a lot more. A role is broad in where it can be applied, whereas an IAM user is much more constrained: the permissions apply only to that one IAM user.
Let's talk about managed policies in AWS. There are two types: customer managed and AWS managed. Managed policies are IAM resources that express permissions using the IAM policy language. We can create these policies, edit them, and manage them separately from the IAM users, groups, and roles they are attached to, at least for customer managed policies, and we can update a policy in one place and have the permissions automatically extend to all attached entities. So I can have three or four services point to a particular policy, and if I edit that policy, the change is reflected on all of them: anything I allow is allowed for all of them, and anything I deny is denied for all of them. Imagine what it would be like without managed policies; we would have to go and specifically allow or deny on those different entities four or five times, depending on how many there are. So, like I said, there are two types of managed policies: the ones managed by us, which are customer managed policies, and the ones managed by AWS, which are AWS managed policies. Another question: can you give an example of an IAM policy and a policy summary? This is really there to test how well-versed you are with the AWS console. For the IAM policy, consider a policy that grants access to add, update, and delete objects from a specific folder, in this case a folder called examplefolder inside a bucket called examplebucket. That is an IAM policy; a rough sketch of such a policy follows below.
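Here is roughly what such a policy could look like, written as a Python dictionary so it can be passed to the IAM APIs; the bucket and folder names echo the example above, and the exact action list is an assumption.

```python
# Sketch of an IAM policy granting add/update/delete on one S3 folder.
# Bucket and folder names match the illustrative example in the text.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowObjectChangesInExampleFolder",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::examplebucket/examplefolder/*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ExampleFolderAccess",            # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)
```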
A policy summary, on the other hand, is a list of the access levels, resources, and conditions for each service defined in a policy. So the IAM policy above is about one particular resource, a single S3 bucket, while a policy summary covers multiple resources: it would show CloudFormation templates, CloudWatch Logs, EC2, Elastic Beanstalk, and so on, as a summary of the services and resources and the permissions attached to them. That's what a policy summary is all about. Another question could be: what is the use case of IAM, and how does IAM help your business? The two primary jobs of IAM are, first, to help us manage IAM users and their access, providing secure access for multiple users to their appropriate AWS resources, and second, to manage access for federated users, that is, non-IAM users. Through IAM we can provide secure access to resources in our AWS account to employees without creating IAM users for them; they could be authenticated through Active Directory, or with Facebook, Google, or Amazon credentials, or a couple of other third-party identity providers. We trust those identity systems and give those users access to our account based on the trust relationship we have built with them. So, two things: one, manage IAM users and their access in our AWS environment, and two, manage access for federated, non-IAM users. More importantly, IAM is a free service; we're only charged for the use of the resources, not for the IAM users and passwords we create.
All right, let's now talk about some Route 53 questions. One classic question is: what is the difference between latency-based routing and GeoDNS, or geolocation-based routing? Geolocation-based routing takes routing decisions based on the geographic location the request comes from, while latency-based routing uses latency measurements between networks and AWS data centers. Latency-based routing is used when you want to give your customers the lowest latency possible. Geolocation-based routing is used when you want to direct customers to different websites based on the country they are browsing from. You could have two or three different sites behind the same URL; take the Amazon shopping site, for example. When we go to amazon.com from the US, it directs us to the US page, where the products, the currency, the flag, and a couple of the advertisements that show up are specific to the US, and when we go to amazon.com from India, we get directed to the Indian site, where again the currency, the products, and the advertisements are all different. So if, depending on the country customers are browsing from, you want to send them to two or three different websites, you would use geolocation-based routing. Another use case for geolocation routing is compliance: if you are required to handle all DNS requests, or all requests from a country, within that country, you would use geolocation routing, so you don't direct the customer to a server in another country but to one that is local to them. And like I said, for latency-based routing the whole goal is to achieve minimum end-user latency.
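As a rough sketch of how the two record types differ, here is how a latency record and a geolocation record might be created with boto3; the hosted zone ID, domain names, and IP addresses are placeholders.

```python
# Sketch: one latency-based record and one geolocation-based record.
# Zone ID, domains, and IPs are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",
    ChangeBatch={
        "Changes": [
            {   # latency-based: answer with the lowest-latency region
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1-endpoint",
                    "Region": "us-east-1",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "192.0.2.10"}],
                },
            },
            {   # geolocation-based: answer based on the caller's country
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "shop.example.com",
                    "Type": "A",
                    "SetIdentifier": "india-visitors",
                    "GeoLocation": {"CountryCode": "IN"},
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "198.51.100.20"}],
                },
            },
        ]
    },
)
```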
If you are hired for an architect role that involves working a lot with DNS, you could be asked this question: what is the difference between a domain and a hosted zone? A domain is a collection of data describing a self-contained administrative and technical unit on the internet; for example, simplilearn.com is a domain. A hosted zone, on the other hand, is a container that holds information about how you want to route traffic on the internet for a specific domain. For example, lms.simplilearn.com lives in the hosted zone, whereas simplilearn.com is the domain. In other words, in a hosted zone you would typically see the domain name plus a prefix: lms is a prefix, ftp is a prefix, mail.simplilearn.com carries a prefix. That's how you see prefixes in hosted zones.
Another classic Route 53 question is: how does Amazon Route 53 provide high availability and low latency? It does so through a globally distributed DNS service. Amazon is a global provider with DNS servers around the world, so a customer querying from any part of the world reaches a DNS server that is close to them, and that's how Route 53 provides low latency. This is not true of all DNS providers; there are providers that are local to one country or one continent, and they generally can't offer low latency everywhere: it's low latency for local users, but anybody browsing from a different country or continent sees high latency. That's not the case with Amazon. Route 53 is a globally distributed DNS service with servers in optimal locations around the globe, and because it does not run on just one server but on many, it provides both high availability and low latency.
If the environment you're going to work in takes a lot of configuration and environment backups, you can expect questions on AWS Config. A classic one is: how does AWS Config work along with AWS CloudTrail? AWS CloudTrail records user API activity on the account, any access made to the cloud environment: every API call, the time it was made, the type of call, and the response, whether it failed or succeeded, all get recorded in CloudTrail. It's essentially a log of the activity in your cloud environment. AWS Config, on the other hand, captures point-in-time configuration details of your resources: at a given point, which resources were present in my environment, and what was their configuration? With that information you can always answer the question, what did my AWS resource look like at a given point in time? That's what AWS Config answers. With CloudTrail, on the other hand, you can easily answer the question, who made an API call to modify this resource? With CloudTrail we can detect whether a security group was incorrectly configured and who made that change. Say there was a downtime and you want to identify who made the change in the environment; you can simply look at CloudTrail and find out, and if you want to see how the environment looked before the change, you can always look at AWS Config. Can AWS Config aggregate data across different AWS accounts? Yes, it can. This question is really there to test whether you have actually used AWS Config. Some services are very local: some are availability-zone specific, some are region specific, and some are global. And even though some services are regional, you can still add some configuration and collect data across regions. For example, S3 is a regional service, but you can still collect logs from all regions into an S3 bucket in one particular region, and CloudWatch is a regional service, but with some added permissions you can monitor CloudWatch logs that belong to other regions; they're not global by default, but you can make them behave that way. Similarly, AWS Config is a region-based service, but you can still make it act globally: we can aggregate data across different regions and different accounts in AWS Config, deliver the updates from those accounts to one S3 bucket, and access everything from there. AWS Config also integrates seamlessly with SNS topics, so any time there is a change or new data gets collected, you can notify yourself or a group of people about the new log, configuration item, or edit that happened in the environment.
Let's look at some database questions. Databases should generally be running on reserved instances, and whether you know that fact or not, the interviewer wants to find out by asking this question: how are reserved instances different from on-demand DB instances? Reserved instances and on-demand instances are exactly the same in function; they only differ in how they are billed. Reserved instances are purchased with a one-year or three-year reservation, and in return we get a much lower per-hour price because we are committing, and often paying, up front. It's generally said that a reserved instance can be around 75 percent cheaper than an on-demand instance, and Amazon gives you that benefit because you're committing for a year and sometimes paying in advance for the whole year. On-demand instances, on the other hand, are simply billed at an hourly price.
Talking about scaling, how well do you understand the different types of scaling? The interviewer might ask: which type of scaling would you recommend for RDS, and why? There are two types of scaling, as you'd know by now: vertical and horizontal. With vertical scaling we can scale up the master database with a couple of clicks. Vertical scaling means keeping the same node and making it bigger and bigger: if it was previously running on a t2.micro, we now run it on an m3.2xlarge; if it previously had one virtual CPU and 1 GB of RAM, it now has eight virtual CPUs and 30 GB of RAM. That's vertical scaling. Horizontal scaling, on the other hand, is adding more nodes: previously the workload ran on one VM, and now it runs on two, three, or ten VMs. The primary database can only be scaled vertically, and there are 18 different instance types we can resize an RDS instance to; this is true for RDS MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server. Horizontal scaling is done with replicas. These are read-only replicas; we're not touching the master or primary database. With Amazon Aurora I can add up to 15 read replicas, and with RDS MySQL, PostgreSQL, and MariaDB I can add up to five read replicas, and when we add replicas we are scaling horizontally, adding more read-only nodes. So how do you decide between vertical and horizontal scaling? If you're looking to increase storage and processing capacity, you'll have to do vertical scaling; if you're looking to increase the performance of a read-heavy database, you should be implementing horizontal scaling in your environment.
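A rough boto3 sketch of the two directions might look like this; the instance identifiers and classes are placeholders, not recommendations.

```python
# Sketch: vertical scaling (bigger instance class) vs. horizontal scaling
# (adding a read replica) for RDS. Identifiers and classes are placeholders.
import boto3

rds = boto3.client("rds")

# Vertical scaling: same instance, larger class.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    DBInstanceClass="db.m5.xlarge",
    ApplyImmediately=True,  # otherwise the change waits for the maintenance window
)

# Horizontal scaling: add a read-only replica for read-heavy workloads.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db",
    DBInstanceClass="db.m5.large",
)
```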
Still talking about databases, this is another good question you can expect in an interview: what is a maintenance window in Amazon RDS, and will your DB instance be available during the maintenance event? This is really there to test how well you have understood the SLA and the failover mechanism of Amazon RDS. The RDS maintenance window lets you decide when DB instance modifications, database engine upgrades, or software patching should occur: you get to decide whether it happens at midnight, in the afternoon, early in the morning, or in the evening. Automatic scheduling by Amazon is done only for patches related to security and durability; in those cases Amazon may schedule the update itself. By default the maintenance window is 30 minutes, and the important point is that the DB instance can remain available during the event, because with a primary and a secondary in place, Amazon shifts connections to the secondary, performs the upgrade, and then switches back to the primary.
Another classic question would be: what are the consistency models in DynamoDB? DynamoDB offers eventually consistent reads; this consistency model maximizes your read throughput, and the good part is that all copies of the data usually reach consistency within a second. The catch is that if you write and then try to read immediately, there's a chance you'll still read the old data; that's eventual consistency. There is another model called strongly consistent reads, where there's a delay while the data is written, making sure it is recorded in all places, but it guarantees one thing: once you have done a write and then do a read, you will see the updated data, not the old data. That's a strongly consistent read. Still on databases, and on the NoSQL side, which in AWS means DynamoDB, you could be asked: what kind of query functionality does DynamoDB support? DynamoDB supports GET and PUT operations, and it provides flexible querying by letting you query on non-primary-key attributes using global secondary indexes and local secondary indexes. A primary key can be either a single-attribute partition key or a composite partition-sort key. In other words, DynamoDB indexes a composite partition-sort key as a partition key element and a sort key element, and by holding the partition key element constant when running a query, we can search across the sort key element to retrieve the other items in the table. A composite partition-sort key could be, for example, a combination of a user ID as the partition key and a timestamp as the sort key; that's what the composite partition-sort key is made of.
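A small boto3 sketch of those ideas, with a strongly consistent read and a query that holds the partition key constant; the table name, key names, and values are placeholders.

```python
# Sketch: strongly consistent GetItem and a Query over a composite key.
# Table and attribute names ("Orders", "UserId", "Timestamp") are placeholders.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")

# Strongly consistent read: always returns the latest committed value.
item = table.get_item(
    Key={"UserId": "user-123", "Timestamp": "2024-01-01T00:00:00Z"},
    ConsistentRead=True,
)

# Query: hold the partition key constant and range over the sort key.
recent = table.query(
    KeyConditionExpression=Key("UserId").eq("user-123")
    & Key("Timestamp").begins_with("2024-01"),
)
print(item.get("Item"), len(recent["Items"]))
```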
Let's look at some of the multiple-choice questions. Sometimes companies run a written test or an MCQ-style online test before they call you for the first or second interview round, and these are some classic questions that come up in those tests. Let's look at this question: as a developer using this pay-per-use service, you can send, store, and receive messages between software components; which of the following is being referred to here? The options are AWS Step Functions, Amazon MQ, Amazon Simple Queue Service, and Amazon Simple Notification Service. Let's read the question again: as a developer using this pay-per-use service, you can send, store, and retrieve messages between two software components, so it sounds like a queue. The right answer is Amazon Simple Queue Service. SQS is the service used to decouple an environment: it breaks tight coupling and introduces decoupling by inserting a queue between the two software components.
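To illustrate what "send, store, and receive messages" means in code, here is a minimal SQS sketch; the queue name is a placeholder.

```python
# Sketch: send a message to an SQS queue and receive it from the other side.
# "orders-queue" is a placeholder queue name.
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

# Producer side: send and store the message in the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody="order-42 created")

# Consumer side: poll the queue, process, then delete the message.
messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5)
for message in messages.get("Messages", []):
    print("received:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```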
Let's look at this other question: you would like to host a real-time audio and video conferencing application on AWS, and this service provides you with a secure and easy-to-use application; what is this service? The options are Amazon Chime, Amazon WorkSpaces, Amazon MQ, and Amazon AppStream. You might be tempted by Amazon AppStream because of the real-time video angle, but that service has a different purpose. It's actually Amazon Chime that lets you chat, collaborate, and run audio and video conferences, all backed by AWS security features. So the answer is Amazon Chime. Let's look at this question: as your company's AWS solutions architect, you are in charge of designing thousands of similar individual jobs; which of the following services best serves your requirement? The options are AWS EC2 Auto Scaling, AWS Snowball, AWS Fargate, and AWS Batch. Reading the question again, running thousands of similar individual jobs sounds like a batch workload, but let's look at the other options as well: AWS Snowball is a storage transport service, EC2 Auto Scaling introduces scalability and elasticity into the environment, and AWS Fargate is for running containers. AWS Batch is the one being referred to here; it runs thousands of similar individual jobs, so AWS Batch is the right answer.
Let's look at the next one: you are a machine learning engineer and you're looking for a service that helps you build and train machine learning models in AWS; which of the following are we referring to? The options are Amazon SageMaker, AWS DeepLens, Amazon Comprehend, and AWS Device Farm. The answer is SageMaker: it gives every developer and data scientist the ability to build, train, and deploy machine learning models quickly, which is exactly what the question describes. To get familiar with the products, I would recommend simply going through the product descriptions; there's a page on the AWS site that explains all the products quickly, neatly, and simply, and that really helps you become familiar with what each product is all about and what it is capable of, whether it's a database service, a machine learning service, a monitoring service, or a developer service. Get that information before you attend an interview, and it should really help you face such questions with great confidence. So the answer is Amazon SageMaker, because that's the service that provides developers and data scientists the ability to build, train, and deploy machine learning models as quickly as possible.
All right, let's look at this one. Let's say you're working on your company's IT team, and you are designated to adjust the capacity of AWS resources based on the incoming application and network traffic. How do you do it? In other words, which service helps us adjust the capacity of AWS resources based on incoming traffic? The options are Amazon VPC, AWS IAM, Amazon Inspector, and Elastic Load Balancing. Amazon VPC is a networking service; AWS IAM handles identities, credentials, and permissions; Amazon Inspector is a service that performs security assessments of our environment; and Elastic Load Balancing is the service that helps with scalability and, indirectly, with increasing the availability of the application: by monitoring how many requests come in through the load balancer, we can adjust the environment running behind it. So the answer is Elastic Load Balancing. Let's look at this question: this cross-platform video game development engine, which supports PC, Xbox, PlayStation, iOS, and Android platforms, allows developers to build and host their games on Amazon's servers. The options are Amazon GameLift, AWS Greengrass, Amazon Lumberyard, and Amazon Sumerian. The answer is Amazon Lumberyard. Lumberyard is a free AAA game engine deeply integrated with AWS and Twitch, with full source, and it provides a growing set of tools that help you create the highest-quality games and connect them to the vast compute and storage of the cloud. So Amazon Lumberyard is the service being referred to.
Let's look at this question: you are the project manager of your company's cloud architect team, and you are required to visualize, understand, and manage your AWS costs and usage over time. Which of the following services is the best fit? The options are AWS Budgets, AWS Cost Explorer, Amazon WorkMail, and Amazon Connect, and the answer is Cost Explorer. Cost Explorer is a tool in the AWS console that helps you visualize, understand, and even manage your AWS costs over time: who's spending more, who's spending less, what the trend is, and what the projected cost for the coming month looks like; all of that can be visualized in AWS Cost Explorer. Let's look at this question: you are the chief cloud architect at your company; how can you automatically monitor and adjust compute resources to ensure maximum performance and efficiency of all scalable resources? The options are AWS CloudFormation, Amazon Aurora, AWS Auto Scaling, and Amazon API Gateway. Reading the question again, automatically monitoring and adjusting compute resources for maximum performance and efficiency of all scalable resources, this is an easy one: the answer is AWS Auto Scaling, a basic service covered in any solutions architect course. Auto Scaling is the service that monitors and adjusts compute resources to maintain performance and efficiency, and it does that by automatically scaling the environment to handle the incoming load.
that is used to set up and manage databases such as MySQL, My DB, and Postgress SQL. Which service are we
referring to? Amazon Aurora, Amazon Elastic Cache, AWS RDS, AWS Database Migration Service. Amazon Aurora is uh
Amazon's flavor of u the RDS service. And Elastic Cache is um is the caching service provided by Amazon. They are not
full-fledged database. And database migration service just like the name says it helps to migrate uh the database
from on premises to the cloud and from one uh database flavor to another database flavor. Amazon RDS is the
service is the console is the service is the umbrella service that helps us to set up manage databases like MySQL MARB
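To show what "set up and manage databases" looks like in practice, here is a minimal, hypothetical boto3 sketch that provisions a small MySQL instance through RDS; the identifier and credentials are placeholders, and in real deployments you would store the password in a secrets manager.

```python
# Hypothetical sketch: create a small MySQL instance through Amazon RDS.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql-db",       # placeholder identifier
    Engine="mysql",                             # could also be "mariadb" or "postgres"
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                        # GiB
    MasterUsername="admin",
    MasterUserPassword="ChangeMe-NotARealSecret1",  # placeholder; keep real secrets out of code
)

# RDS provisions asynchronously; wait until the instance is available.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="demo-mysql-db")
print("Database instance is ready")
```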
Let's look at this last question. Part of your marketing work requires you to push messages to Google, Facebook, Windows, and Apple through APIs or the AWS Management Console. Which service will you use? The options are AWS CloudTrail, AWS Config, Amazon Chime, and Amazon Simple Notification Service. All right, it says part of your marketing work requires you to push messages; it's dealing with pushing messages to Google, Facebook, Windows, and Apple through APIs or the AWS Management Console. The answer is the Simple Notification Service. SNS is a message-pushing service: where SQS is pull-based (consumers poll the queue), SNS pushes messages out to subscribers. Here the question describes a push system that sends messages to Google, Facebook, Windows, and Apple through APIs, and that is the Simple Notification Service (SNS).
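As a small illustration of SNS's push model, here is a minimal, hypothetical boto3 sketch that publishes a message to an SNS topic; the topic ARN is a placeholder, and mobile push to platforms such as Apple or Google would additionally require registering platform applications and endpoints.

```python
# Hypothetical sketch: publish a message to an SNS topic; every subscriber
# (email, SMS, SQS queue, Lambda, or mobile push endpoint) receives it.
import boto3

sns = boto3.client("sns", region_name="us-east-1")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:marketing-updates",  # placeholder ARN
    Subject="New campaign launched",
    Message="Our spring promotion is now live.",
)
```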
The course introduces key cloud computing concepts such as scalable internet-based services, cloud deployment models (public, private, hybrid), service models including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), and core cloud architecture principles. Understanding these basics helps build a strong foundation for working with AWS and Azure platforms.
Beginners should focus on AWS services like EC2 for virtual computing, S3 for scalable object storage, Lambda for serverless computing, Elastic Beanstalk for easy application deployment, Route 53 for DNS management, and Identity and Access Management (IAM) for security. Mastering these services provides essential skills required for many AWS projects and real-world applications.
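To make one of these beginner services concrete, here is a minimal, hypothetical boto3 sketch that creates an S3 bucket and uploads a file; the bucket name and file name are placeholders, and bucket names must be globally unique.

```python
# Hypothetical sketch: create an S3 bucket, upload a local file, and list the bucket's contents.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "my-demo-bucket-2024-unique-name"   # placeholder; must be globally unique
s3.create_bucket(Bucket=bucket)

# Upload a local file, then list what the bucket contains.
s3.upload_file("report.csv", bucket, "reports/report.csv")

for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"], "bytes")
```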
Azure offers extensive enterprise support through a global network of 42+ data centers, services like Virtual Machines, Azure Functions for serverless workloads, advanced networking options including CDN and ExpressRoute, comprehensive storage solutions, databases, and AI/ML capabilities. Its robust security features such as identity management, the Security Center, and Key Vault ensure data protection tailored to corporate environments.
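On the Azure side, here is a small, hypothetical sketch using the Azure SDK for Python that reads a secret from Key Vault; the vault URL and secret name are placeholders, and DefaultAzureCredential assumes you are already signed in, for example through the Azure CLI.

```python
# Hypothetical sketch: read a secret from Azure Key Vault with the Python SDK.
# Requires: pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()          # picks up CLI or managed-identity login
client = SecretClient(
    vault_url="https://my-demo-vault.vault.azure.net/",  # placeholder vault URL
    credential=credential,
)

secret = client.get_secret("database-password")          # placeholder secret name
print("Retrieved secret named:", secret.name)
```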
Key security practices include implementing strong encryption methods, enforcing multi-factor authentication (MFA), continuous monitoring for threats, securing APIs, managing patches promptly, conducting regular employee security training, segmenting networks to limit exposure, and performing continuous audits. These steps help prevent data breaches, denial of service attacks, and insider threats in cloud deployments.
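As one small example of encryption by default, here is a minimal, hypothetical boto3 sketch that enables default server-side encryption on an S3 bucket; the bucket name is a placeholder.

```python
# Hypothetical sketch: enforce default server-side encryption on an S3 bucket,
# so every new object is encrypted at rest even if the uploader forgets to ask.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.put_bucket_encryption(
    Bucket="my-demo-bucket-2024-unique-name",   # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```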
Advanced AWS services include SageMaker, which enables building and deploying machine learning models with ease; CloudFront, a global Content Delivery Network ensuring fast, secure content delivery; Auto Scaling for automatically adjusting infrastructure capacity based on demand; and Redshift, a managed data warehouse solution for fast analytics. Utilizing these services helps optimize performance, scalability, and cost-efficiency in complex cloud applications.
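To illustrate one of these services in code, here is a minimal, hypothetical boto3 sketch that invalidates a cached path on a CloudFront distribution after a deployment; the distribution ID and path are placeholders.

```python
# Hypothetical sketch: invalidate a cached object on a CloudFront distribution
# so viewers receive freshly deployed content instead of a stale cached copy.
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",            # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/index.html"]},
        "CallerReference": str(time.time()),    # must be unique per request
    },
)
```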
The course recommends projects such as website hosting, building expense tracking applications, IoT analytics platforms, video streaming services, and deploying machine learning models. Engaging in these projects allows learners to apply theoretical knowledge, understand real-world challenges, and demonstrate skills beneficial for cloud-related job roles.
Effective preparation involves comprehensive study of core services, hands-on practice through projects, and reviewing practical scenario-based interview questions and answers. The course also suggests targeted resources like exam guides for Azure DevOps and provides a structured learning path through certifications such as AWS Solutions Architect and Microsoft Azure Developer Associate, enabling confident exam success.