Introduction to Cloud Computing
Cloud computing delivers scalable services such as storage, computing power, and networking over the internet. This course provides a thorough introduction covering key topics in AWS and Azure, including cloud security, deployment models, service models (IaaS, PaaS, SaaS), and cloud architecture principles. For a deeper foundational understanding, consider exploring Understanding Cloud Computing: A Comprehensive Guide to AWS and S3.
AWS Overview
- History and Growth: Launched in 2002 with rapid expansion; offers over 100 cloud services.
- Key Services: EC2 (compute), S3 (storage), Lambda (serverless computing), Elastic Beanstalk (application deployment), Route 53 (DNS), and more. Learn more about these core services in Top AWS Services Explained for Beginners: EC2, S3, IAM & More.
- Security: Robust data protection including IAM, encryption, and compliance.
- Usage: Adopted by companies like Netflix, Airbnb, and Adobe for scalability and performance.
Azure Overview
- Launch and Market Adoption: Launched in 2010, supports 80% of Fortune 500 companies.
- Data Centers: Extensive global presence with 42+ data centers.
- Services: Virtual Machines, Azure Functions (serverless), networking (CDN, ExpressRoute), storage solutions, databases, AI & ML services.
- Security & Management: Comprehensive identity management, security center, key vault, and monitoring tools. For an in-depth Azure developer perspective, see Complete Microsoft Azure Developer Associate (AZ-204) Study Guide.
Cloud Security Essentials
- Emphasizes encryption, access control with MFA, monitoring, secure API practices.
- Addresses threats like data breaches, denial of service attacks, insider threats.
- Best practices include patch management, employee training, network segmentation, and continuous auditing.
Core AWS Services in Detail
- S3: Object storage with lifecycle management, bucket policies, versioning, encryption, cross-region replication, and transfer acceleration (a short boto3 sketch follows this list).
- IAM: User, groups, roles, and policies management with MFA and federated access.
- ECS: Container management service orchestrating Docker containers with Fargate and EC2 modes.
- Elastic Beanstalk: Platform to deploy and manage applications easily with auto-scaling and load balancing.
- Route 53: Scalable DNS service supporting routing policies like failover, geolocation, latency-based, and weighted routing.
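For readers who want to see these storage controls in practice, here is a minimal boto3 sketch (Python) that enables versioning and a lifecycle rule on a bucket. The bucket name is a placeholder, and configured AWS credentials are assumed.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder; use your own bucket name

# Turn on versioning so overwritten or deleted objects stay recoverable.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Lifecycle rule: move objects under logs/ to Glacier after 30 days,
# then expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```

Versioning plus a lifecycle rule like this is a common pattern: recent objects stay recoverable, and older ones move to cheaper storage automatically.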
AWS Advanced Services
- SageMaker: Managed machine learning platform enabling building, training, tuning, and deploying ML models with integrated tools.
- CloudFront: Content Delivery Network providing secure, low-latency content delivery globally.
- Auto Scaling: Automatic infrastructure scaling based on application demand, integrating with load balancers.
- Redshift: Fully managed data warehouse with fast, scalable analytics.
Learning and Career Advancement
- Comprehensive courses like Post-Graduate Cloud Computing Program, Cloud Solutions Architect Masters, and AWS/Azure certifications.
- Hands-on projects spanning website hosting, expense tracking apps, IoT analytics, video streaming services, and machine learning deployments.
- Preparation for interviews via practical scenario questions and answers across core AWS and Azure topics. For targeted exam readiness, check out the Ultimate Guide to Azure DevOps Certification Course: Pass the Exam with Confidence.
Key Takeaways
- Cloud computing offers flexible, scalable, and cost-efficient solutions vital for modern business innovation.
- A solid grasp of AWS and Azure services, architectures, and security fundamentals is essential.
- Hands-on projects and certifications are crucial for proving expertise and advancing careers.
- Continual learning and adaptation to new tools and services enable long-term success in cloud careers.
This summary equips learners and professionals with foundational knowledge and practical insights to excel in cloud computing roles, focusing on the most widely used cloud platforms and services.
Hello and welcome to this cloud computing full course by Simplilearn. Did you know that cloud engineers
are among the most in-demand professionals today? As more companies shift online, cloud computing is rapidly
expanding, making skilled cloud professionals highly valuable. With excellent salaries and career growth
opportunities, it's a great time to enter this field. But what exactly is cloud computing? In simple terms, it is
the delivery of services like storage, computing power, and networking over the internet, known as the cloud. This
technology allows businesses to scale quickly and efficiently. So in this course, we will walk you through
essential topics such as an introduction to cloud computing and the steps to become a cloud engineer. We will also cover cloud
security, compare AWS and Azure and explore every topic you need to know in order to master cloud computing and AWS.
We will also discuss project ideas and certifications to boost your cloud computing career. So before we begin, if
you are interested in getting certified in cloud, check out Simplilearn's cloud architect certification program. Build
expertise in AWS, Microsoft Azure, and GCP with our cloud architect certification course. Plus, we have
included an exam voucher for any one Azure course, so you can get certified hassle-free. Gain access to official AWS-authorized
self-learning content and master the ins and outs of cloud architecture principles. So don't forget
to check out the course link in the description box below and the pinned comment. So without any further ado,
let's get started. Imagine you're the owner of a small software development firm and you want to scale your business
up. However, a small team size, the unpredictability of demand and limited resources are roadblocks for this
expansion. That's when you hear about cloud computing. But before investing money into it, you decide to draw up the
differences between on-premises and cloud-based computing to make a better decision. When it comes to scalability,
you pay more for an on-premises setup and get fewer options, too. Once you've scaled up, it is difficult to scale down
and often leads to heavy losses in terms of infrastructure and maintenance costs. Cloud computing on the other hand allows
you to pay only for how much you use with much easier and faster provisions for scaling up or down. Next, let's talk
about server storage. On-premises systems need a lot of space for their servers, not to mention the power and
maintenance hassles that come with them. On the other hand, cloud computing solutions are offered by cloud service
providers who manage and maintain the servers, saving you both money and space. Then we have data security.
On-premises systems offer less data security, owing to a complicated combination of physical and traditional
IT security measures. Whereas cloud computing systems offer much better security and let you avoid having to
constantly monitor and manage security protocols. In the event that a data loss does occur, the chances of data recovery
with on-premises setups are very small. In contrast, cloud computing systems have robust disaster recovery measures in
place to ensure faster and easier data recovery. Finally, we have maintenance. On-premises systems also require
additional teams for hardware and software maintenance, loading up the costs by a considerable degree. Cloud
computing systems on the other hand are maintained by the cloud service providers, reducing your costs and
resource allocation substantially. So now thinking that cloud computing is a better option, you decide to take a
closer look at what exactly cloud computing is. Cloud computing refers to the delivery of on-demand computing
services over the internet on a pay-as-you-go basis. In simpler words, rather than managing files and services on a
local storage device, you'll be doing the same over the internet in a cost-efficient manner. Cloud computing
has two types of models, deployment model and service model. There are three types of deployment models: public,
private, and hybrid cloud. Imagine you're traveling to work. You've got three options to choose from. One, you
have buses, which represent public clouds. In this case, the cloud infrastructure is available to the
public over the internet. These are owned by cloud service providers. Two, then you have the option of using your
own car. This represents the private cloud. With the private cloud, the cloud infrastructure is exclusively operated
by a single organization. This can be managed by the organization or a third party. And finally, you have the option
to hail a cab. This represents the hybrid cloud. A hybrid cloud is a combination of the functionalities of
both public and private clouds. Next, let's have a look at the service models. There are three major service models
available: IaaS, PaaS, and SaaS. Compared to on-premises models, where you'll need to manage and maintain every component
including applications, data, virtualization, and middleware, cloud computing service models are
hassle-free. IaaS refers to infrastructure as a service. It is a cloud service model where users get access to basic
computing infrastructure. They are commonly used by IT administrators. If your organization requires resources
like storage or virtual machines, IaaS is the model for you. You only have to manage the data, runtime, middleware,
applications, and the OS, while the rest is handled by the cloud providers. Next, we have PaaS. PaaS, or platform as a
service provides cloud platforms and runtime environments for developing, testing, and managing applications. This
service model enables users to deploy applications without the need to acquire, manage, and maintain the
related architecture. If your organization is in need of a platform for creating software applications, PaaS
is the model for you. PaaS only requires you to handle the applications and the data. The rest of the components, like
runtime, middleware, operating systems, servers, storage, and others are handled by the cloud service providers. And
finally, we have SaaS. SaaS, or software as a service, involves cloud services for hosting and managing your software
applications. Software and hardware requirements are satisfied by the vendors. So you don't have to manage any
of those aspects of the solution. If you'd rather not worry about the hassles of owning any IT equipment, the SaaS
model would be the one to go with. With SaaS, the cloud service provider handles all components of the solution required
by the organization. Time for a quiz now. In which of the following service models are you, as the
business, responsible for the applications, data, and operating system? One, IaaS; two, PaaS; three, SaaS; four, IaaS and PaaS. Let us
know your answer in the comment section below for a chance to win an Amazon voucher. Meet Rob. He runs an online
shopping portal. The portal started with a modest number of users, but has recently been seeing a surge in the
number of visitors. On Black Friday and other holidays, the portal saw so many visitors that the servers were unable to
handle the traffic and crashed. Is there a way to improve performance without having to invest in a new server?
wondered Rob. A way to upscale or downscale capacity depending on the number of users visiting the website at
any given point? Well, there is: Amazon Web Services, one of the leaders in the cloud computing market. Before we see
how AWS can solve Rob's problem, let's have a look at how AWS reached the position it is at now. AWS was first
introduced in 2002 as a means to provide tools and services to developers to incorporate features of Amazon.com to
their website. In 2006, its first cloud services offering was introduced. In 2016, AWS surpassed its 10 billion
revenue target. And now AWS offers more than 100 cloud services that span a wide range of domains. Thanks to this, the
AWS cloud service platform is now used by more than 45% of the global market. Now let's talk about what AWS is. AWS, or
Amazon Web Services, is a secure cloud computing platform that provides computing power, database, networking,
content storage, and much more. The platform also works with a pay as you go pricing model, which means you only pay
for how much of the services offered by AWS you use. Some of the other advantages of AWS are: security. AWS
provides a secure and durable platform that offers an end-to-end privacy and security experience. You can benefit
from the infrastructure management practices born from Amazon's years of experience. Flexible. It allows users to
select the OS, language, database, and other services. Easy to use. Users can host applications quickly and securely.
Scalable. Depending on user requirements, applications can be scaled up or down. AWS provides a wide range of
services across various domains. What if Rob wanted to create an application for his online portal? AWS provides compute
services that can support the app development process from start to finish. From developing, deploying,
running to scaling the application up or down based on the requirements. The popular services include EC2, AWS
Lambda, Amazon Lightsail, and Elastic Beanstalk. For storing website data, Rob could use AWS storage services that
would enable him to store, access, govern, and analyze data to ensure that costs are reduced, agility is improved,
and innovation accelerated. Popular services within this domain include Amazon S3, EBS, S3 Glacier, and
Elastic File System. Rob can also store the user data in a database with AWS services, which he can then optimize and
manage. Popular services in this domain include Amazon RDS, DynamoDB, and Redshift. If Rob's business took off and
he wanted to separate his cloud infrastructure or scale up his work requests and much more, he would be able
to do so with the networking services provided by AWS. Some of the popular networking services include Amazon VPC,
Amazon Route 53, and Elastic Load Balancing. Other domains that AWS provides services in are analytics,
blockchain, containers, machine learning, internet of things and so on. And there you go. That's AWS for you in
a nutshell. Now, before we're done, let's have a look at a quiz. Which of these services are incorrectly matched?
One, two, three, or four? We'll be pinning the question in the comment section. Comment below with your
answer and stand a chance to win an Amazon voucher. Several companies around the world have found great success with
AWS. Companies like Netflix, Twitch, LinkedIn, Facebook, and BBC have taken advantage of the services offered by AWS
to improve their business efficiency. And thanks to their widespread usage, AWS professionals are in high demand.
They're highly paid and can earn more than $127,000 per annum. Once you're AWS
certified, you could be one of them, too. Hello everyone, welcome back to the channel. Today, I want to take you on a
journey that could transform your career, much like how cloud computing has transformed some of the world's most
innovative companies. Imagine Netflix, once a DVD rental service, transforming into a streaming giant capable of
delivering high-definition content to millions of users simultaneously. Or consider Airbnb, which has used cloud
computing to manage listings and bookings for millions of properties around the globe, providing a seamless
experience for hosts and travelers alike. Both Netflix and Airbnb utilized cloud technologies to efficiently scale their
businesses, manage large volumes of data and ensure high availability and performance. So by transitioning from
traditional, costly, and inflexible on-premises infrastructure to scalable cloud environments, they significantly
reduced costs, accelerated innovation, and improved user experience in real time. Now, you might think that working on such
impactful projects requires years of experience and advanced degrees. But here's the good news, guys: with the
right approach, you can start a career in cloud engineering in just 3 months, even if you are starting from scratch.
In this video, I will outline a clear, actionable plan that uses entirely free online resources to get you there. We
will cover the essential skills you need to learn, the certifications that can help validate your knowledge, and
practical projects that will make your resume stand out. So, if you're ready to dive into the world of cloud
computing and perhaps one day contribute to the next big thing in tech, stay tuned, guys. So, let's get started. And
the number one point you should start with is starting your cloud journey. So transitioning into cloud engineering may
seem daunting especially if you are new to this field. The first step is understanding why this is a valuable
career move. The cloud industry is booming with a projected market value of $800 billion by 2025 and the potential
to grow even further. This growth means a constant demand for skilled professionals making it an excellent
time to enter the field. Now that we understand the industry's potential, the next question is where should you start?
So you should choose a cloud provider. Choosing a cloud provider is a critical decision, as it shapes your
learning path and future job opportunities. The three major players are AWS, Azure, and Google Cloud
Platform (GCP). Starting with AWS: AWS, that is, Amazon Web Services, is often recommended for beginners because it has
the largest market share and a wide range of services, which translates into more job opportunities. Now coming to
Azure: that is another strong option, especially if you're targeting jobs at enterprises that use Microsoft
technologies. Now coming to GCP, that is, Google Cloud Platform: it is gaining popularity and offers excellent features,
especially in data analytics and machine learning. For beginners, AWS is a popular choice due to its widespread use and
extensive documentation. However, it's important to research the demand in your local job market and consider your own
interest when making a decision. And with the cloud provider chosen, the next step is to build a strong foundation in
the fundamental technologies that underpin cloud computing. So now before diving into cloud specific services,
it's essential to understand the foundational technologies that cloud computing relies on. These include
number one comes networking. So understanding how data moves across networks is crucial for setting up and
managing cloud infrastructure. Then comes operating systems. Familiarity with operating systems particularly
Linux is essential, as most cloud environments run on Linux servers. Then comes virtualization. So this is the
process of creating virtual instances of physical hardware. That's a core concept in cloud computing. And then comes
databases. So knowledge of databases, both relational and non-relational, is critical for managing data in the cloud.
So with these foundational skills in place, you are now ready to explore cloud-specific learning paths. So let's
start with certifications. So certifications can validate your knowledge and make you stand out in the
job market. For AWS, starting with the AWS cloud practitioner certification is advisable. This certification provides a
broad overview of cloud concepts and AWS services. It covers key areas such as compute services, storage options,
security measures, networking capabilities, and billing and pricing structures. Now coming back, while
certifications are valuable, they need to be complemented with practical hands-on experience to truly demonstrate
your skills. Here comes building projects or hands-on practice. So building projects is the most effective
way to apply what you have learned and to demonstrate your abilities to potential employers. So here are a few
beginner-friendly projects to consider. Number one is setting up virtual machines. So start by launching an EC2
instance on AWS. Learn about the different instance types, configurations, and the basics of server management.
Then comes the next project, that is, cloud storage systems. So experiment with services like S3 for object storage
and RDS for relational databases. Document the use cases and differences between these services. Then deploy a
web application. Host a static website using S3 and CloudFront, which will teach you about web hosting, content delivery,
and the basics of DNS management with Route 53. Initially you can use the AWS console for these tasks, but as you
progress, try implementing these projects using infrastructure-as-code tools like Terraform. This approach not only
deepens your understanding but also aligns with industry best practices.
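As a taste of the first project, here is a minimal boto3 sketch that launches a single EC2 instance. The AMI ID and key pair name are placeholders (AMI IDs vary by region), and a configured AWS account is assumed.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one t2.micro (free-tier eligible) instance.
# ami-12345678 and my-key-pair are placeholders; substitute real values.
response = ec2.run_instances(
    ImageId="ami-12345678",
    InstanceType="t2.micro",
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "learning-lab"}],
        }
    ],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```

Running this once from the console first, then reproducing it in code, is a good way to connect what the console wizard does to what the API actually sends.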
In addition to practical projects, having some coding knowledge can greatly enhance your capabilities as a cloud engineer. So now we'll see how you can
learn to code. While not always mandatory, coding skills can significantly enhance your effectiveness
as a cloud engineer. Languages like Python and Bash are particularly useful for scripting and automation. Even a
basic understanding can help with tasks such as writing scripts for server automation, managing cloud services or
resources programmatically, and implementing infrastructure as code. For those new to coding, check out Simplilearn's
videos on YouTube, which offer excellent starting points. Coding skills not only make you more versatile, but
also open up opportunities to specialize in areas like DevOps or cloud-native development.
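To make that concrete, here is a small example of the kind of everyday automation script meant above: it finds running EC2 instances tagged Environment=dev (a hypothetical tagging convention) and stops them, for example at the end of the workday.

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances carrying the (assumed) Environment=dev tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

if ids:
    ec2.stop_instances(InstanceIds=ids)  # stop them to save cost overnight
    print("Stopping:", ids)
else:
    print("No running dev instances found.")
```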
And once you have built your skills and some projects, it's time to start the job-hunting process, that is, building your profile. Creating
a strong online presence is crucial when job hunting. Your LinkedIn profile should clearly reflect your new skills,
certifications, and projects. So here are some tips. Number one is optimize your LinkedIn profile; that is, include a
professional photo, an engaging summary, and detailed descriptions of your projects. Then comes network actively:
Connect with professionals in the field. Join cloud computing groups and participate in discussions. And then
comes apply strategically. Tailor your resume for each job application, highlighting the skills and projects
that align with the job description. Applying for jobs can be a numbers game, so be persistent. It's also helpful to
reach out to recruiters or hiring managers directly to express your interest in the role. As you start to
gain experience in your first cloud role, consider specializing in a niche area to advance your career. And then
comes specializing and continuous learning. So specializing in a particular area of cloud computing can
make you more valuable and increase your earning potential. Possible specializations include DevOps that is
it focus on automation, continuous integration and continuous deployment practices. Then comes serverless
computing work with functions as a service that is FAS and other serverless architectures. And then comes security.
specialize in cloud security to protect data and infrastructure. The cloud industry is dynamic with new tools and
technologies emerging regularly. So continuous learning is key. So stay updated through online courses, webinars
and industry news. Finally, remember that the journey into cloud engineering is continuous and ever evolving. So we
talk about resources. So embarking on a career in cloud engineering is challenging but highly rewarding.
Utilize free resources like YouTube tutorials, community forums and documentation to guide your learning.
Before cloud computing existed, if we needed any IT server or application, let's say a basic web server, it did
not come easy. Now, here is an owner of a business, and I know you would have guessed it already that he's running a
successful business, by looking at the hot and fresh brewed coffee on his desk and lots and lots of paperwork to review
and approve. Now he had a smart, not only smart-looking but a really smart, worker in his office called Mark, and on one
fine day he called Mark and said that he would like to do business online. In other words, he would like to take his
business online and for that he needed his own website as the first thing. And Mark puts all his knowledge together and
comes up with this requirement that his boss would need lots of servers, databases, and software to get his
business online, which means a lot of investment. And Mark also adds that his boss will need to invest in acquiring
technical expertise to manage the hardware and software that they will be purchasing and also to monitor the
infrastructure. And after hearing all this, his boss was close to dropping his plan to go online. But before he made a
decision, he chose to check if there are any alternatives where he doesn't have to spend a lot of money and doesn't have to
spend on acquiring technical expertise. Now that's when Mark opened this discussion with his boss and explained to him
cloud computing, the same thing that I'm going to explain to you in some time now about
what is cloud computing. What is cloud computing? Cloud computing is the use of a network of remote servers hosted on
the internet to store, manage, and process data, rather than having all that locally and using a local server for that.
Cloud computing is also storing our data on the internet from anywhere and accessing our data from anywhere
throughout the internet. And the companies that offer those services are called cloud providers. Cloud computing
is also being able to deploy and manage our applications, services and network throughout the globe and manage them
through the web management or configuration portal. In other words, cloud computing service providers give
us the ability to manage our applications and services through a global network or internet. Example of
such providers are Amazon web service and Microsoft Azure. Now that we have known what cloud computing is, let's
talk about the benefits of cloud computing. Now, I need to tell you, the cloud's benefits are what is driving cloud
adoption like anything in recent days. If I want an IT resource or a service now, with cloud it's available
for me almost instantaneously, and it's ready for production almost at the same time. Now this reduces the go-live date,
and the product and the service hit the market almost instantaneously compared to the legacy environment, and because of
this, companies have started to generate revenue almost the next day, if not the same day. Planning and buying
the right-size hardware has always been a challenge in legacy environments. And if you're not careful when doing this,
we might need to live with hardware that's undersized for the rest of our lives. With cloud, we do not buy any
hardware, but we use the hardware and pay for the time we use it. If that hardware does not fit our requirement,
release it and start using a better configuration and pay only for the time you use that new and better
configuration. In legacy environments, forecasting demand is a full-time job. But with cloud, you can let the
monitoring and automation tools work for you and rapidly scale the resources up and down based on the need of the
hour. Not only that, the resources, services, and data can be accessed from anywhere as long as we are connected to
the internet. And there are even tools and techniques now available which will let you work offline and will sync
whenever the internet is available. Making sure the data is stored in durable storage and in a secure fashion
is the talk of the business, and cloud answers that million-dollar question. With cloud, the data can be stored in
highly durable storage and replicated to multiple regions if you want, and the data that we store is encrypted and
secured in a fashion that's beyond what we can imagine in local data centers. Now let's move into the discussion
about the types of cloud computing. Lately, there are multiple ways to categorize cloud computing because it's
ever-growing; now we have more categories. Out of all these, six sort of stand out: categorizing cloud
based on deployments and categorizing cloud based on services; under deployments, categorizing them based on
how they have been implemented (is it private, is it public, or is it hybrid?), and again categorizing them based
on the service provided (is it infrastructure as a service, or is it platform as a service, or is it software
as a service?). Let's look at them one by one. Let's talk about the different types of cloud based on the deployment
models. First, in public cloud, everything is stored and accessed in and through the internet, and any
internet user with proper permissions can be given access to some of the applications and resources. And in
public cloud, we literally own nothing; be it the hardware or software, everything is managed by the provider.
AWS, Azure, and Google are some examples of public cloud. Private cloud, on the other hand: with private cloud, the
infrastructure is exclusively for a single organization. Organizations can choose to run their own cloud
locally or choose to outsource it to a public cloud provider as a managed service, and when this is done, the
infrastructure will be maintained on a private network. Some examples are VMware Cloud, and some of
the AWS products are very good examples of private cloud. Hybrid cloud has taken things to a whole new level.
With hybrid cloud, we get the benefit of both public and private cloud. Organizations will choose to keep some
of their applications locally, and some of the applications will be present in the cloud. One good example is NASA. It
uses hybrid cloud. It uses private cloud to store sensitive data and uses public cloud to store and share data which are
not sensitive or confidential. Let's now discuss cloud based on service models. The first and broadest
category is infrastructure as a service. Here we would rent the servers, network, and storage, and we'll pay for them
on an hourly basis, but we will have access to the resources we provision, and for some we will have root-level access
as well. EC2 in AWS is a very good example. It's a VM for which we have root-level access to the OS and admin
access to the hardware. The next type of service model would be platform as a service. Now in this model, the providers
will give us a pre-built platform where we can deploy our code and our applications, and they will be up and
running. We only need to manage the code and not the infrastructure. Next, in software as a service, the cloud
providers sell the end product, which is a software or an application, and we directly buy the software on a
subscription basis. It's not the infra or the platform but the end product, the software, a functioning
application, and we pay for the hours we use the software. And here the client just uses the software
and does not maintain any equipment. Amazon and Azure also sell products that are software as a service. This chart sort
of explains the difference between the four models starting from on premises to infrastructure as a service to platform
as a service to software as a service. It is self-explanatory that the resources managed by us are huge in on-premises
(towards your left as you watch), a little less in infrastructure as a service as we move
further towards the right, further reduced in platform as a service, and there's really nothing to manage when it
comes to software as a service, because we buy the software, not any infrastructure component attached to it.
Now let's talk about the life cycle of the cloud computing solution. The very first thing in the life cycle of a
solution or a cloud solution is to get a proper understanding of the requirement. I didn't say get the requirement but
said get a proper understanding of the requirement. It is very vital because only then we will be able to properly
pick the right service offered by the provider. After getting a sound understanding, the next thing would be to define the
hardware; meaning, choose the compute service that will provide the right support, where you can resize the compute
capacity in the cloud to run application programs. Getting a sound understanding of the requirement helps in picking the
right hardware. One size does not fit all. There are different services and hardware for the different needs you might
have, like EC2 if you're looking for IaaS, Lambda if you're looking for serverless computing, and ECS, which
provides containerized services. So there is a lot of hardware available. Pick the right hardware that suits your
requirement. The third thing is to define the storage. Choose the appropriate storage service where you
can back up your data and a separate storage service where you can archive your data locally within the cloud or
from the internet, and choose the appropriate storage. There is one separately for backup, called S3, and
there is one separately for archival, that's Glacier. So knowing the difference between them
really helps in picking the right service for the right kind of need. Define the network. Define the network
that securely delivers data, video and applications. Define and identify the network services properly. For example,
VPC for networking, Route 53 for DNS, and Direct Connect for a private point-to-point line from your office to the AWS data center.
Set up the right security services: IAM for authentication and authorization, and KMS for data encryption at rest. There is a
variety of security products available; we've got to pick the right one that suits our need. And there is a
variety of deployment, automation, and monitoring tools that you can pick from. For example, CloudWatch is for
monitoring, Auto Scaling is for being elastic, and CloudFormation is for deployment. Define the management process and tools: you
can have complete control of your cloud environment if you define the management tools which monitor your AWS resources
and/or the custom applications running on the AWS platform. There is a variety of deployment, automation, and monitoring
tools you can pick from, like CloudWatch for monitoring, Auto Scaling for automation, and CloudFormation for
deployment. So knowing them will help you in defining the life cycle of the cloud computing solution properly.
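As one concrete illustration of the monitoring piece, here is a minimal boto3 sketch that creates a CloudWatch alarm on one EC2 instance's CPU; the instance ID and SNS topic ARN are placeholders. Alarms like this are what typically trigger notifications or Auto Scaling actions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU on one instance stays above 70% for 10 minutes.
# The instance ID and SNS topic ARN below are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # 5-minute datapoints
    EvaluationPeriods=2,      # two consecutive breaches trigger the alarm
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```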
And similarly, there are a lot of tools for the testing process, like CodeStar, CodeBuild, and CodePipeline. These are
tools with which you can build, test, and deploy your code quickly. And finally, once everything is said and done, pick
the analytics service for analyzing and visualizing the data. Using the analytics services, we can start querying the
data instantly and get results. Now, if you want to visually view the happenings in your environment, you can pick Athena
and other tools for analytics, or EMR, which is Elastic MapReduce, and CloudSearch. Thanks, guys. Now we have Samuel
and Rahul to take us through the full course, in which they will explain the basic framework of Amazon Web Services and
explore all of its important services like EC2, Lambda, S3, IAM, and CloudFormation. We'll also talk about Azure
and some of its popular services. Hello everyone. Let me introduce myself as Sam, a multiplatform cloud architect and
trainer. And I'm so glad and equally excited to talk and walk you through this session about what AWS is, and to talk
to you about some services and offerings, and about how companies get benefited by migrating their applications and infra
into AWS. So what's AWS? Let's talk about that. Now, before that, let's talk about how life was without any cloud
provider, and in this case, how life was without AWS. So let's walk back and picture how things were back in 2000,
which is not so long ago, but a lot of changes, a lot of changes for the better, have happened since that time. Now, back in
2000, a request for a new server was not a happy thing at all, because a lot of money, a lot of validations, and a lot of
planning were involved in getting a server online or up and running. And even after we finally got the server,
it's not all said and done. There's a lot of optimization that needs to be done on that server to make it worth it and get
a good return on investment from that server. And even after we have optimized for a good return on investment, the work
is still not done. There will often be a frequent increase and decrease in the capacity, and, you know, even news about
our website getting popular and getting more hits is still a bittersweet experience, because now I need to add
more servers to the environment, which means that it's going to cost me even more. But thanks to the present-day
cloud technology, if the same situation were to happen today, my new server, it's almost ready and it's ready
instantaneously. And with the swift tools and technologies that Amazon is providing in provisioning my server
instantaneously, adding any type of workload on top of it, making my storage and server secure, and
creating durable storage where the data that I store in the cloud never gets lost; with all those features, Amazon has
got our back. So let's talk about what AWS is. There are a lot of definitions for it, but I'm going to put together as
simple and precise a definition as possible. Now let me iron that out. Cloud still runs on hardware. All
right? And there are certain features in that cloud infrastructure that make cloud, cloud, or
that make AWS a cloud provider. Now, we get all the services, all the technologies, all the features, and all
the benefits that we get in our local data center, like security and compute capacity and databases. And
in fact, we get even more cool features, like content caching in various global locations around the
planet. But again, out of all the features, the best part is that we get everything on a pay-as-you-go
model. The less I use, the less I pay. And the more I use, the less I pay per unit. Very attractive, isn't it? Right.
And that's not all. The applications that we provision in AWS are very reliable because they run on a reliable
infrastructure, very scalable because they run on an on-demand infrastructure, and very flexible
because of the design options available for me in the cloud. Let's talk about how all this
happened. AWS was launched in 2002, after the Amazon we know as the online retail store wanted to sell their
remaining or unused infrastructure as a service, as an offering for customers to buy and use. The idea of
selling infrastructure as a service sort of clicked, and AWS launched their first product in
2006, that's like 4 years after the idea launched. In 2012, they held a big customer event to gather
inputs and concerns from customers, and they were very dedicated to making those requests happen. And that habit is still
being followed; it's still being followed as re:Invent by AWS. In 2015, Amazon announced its revenue to be
4.6 billion dollars. And in 2015 through 2016, AWS launched products and services that help migrate customer services into AWS.
Well, there were products even before, but this is when a lot of focus was given to developing migration services, and in the
same year, that's in 2016, Amazon's revenue was 10 billion dollars. And last but not the least, as we speak, Amazon has more
than 100 products and services available for customers to get benefited from. All right, let's talk about the services
that are available in Amazon. Let's start with this product called S3. Now, S3 is a great tool for internet backup,
and it's the cheapest storage option in the object storage category. And not only that, the data that we put
in S3 is retrievable from the internet. S3 is really cool. And we have other products, like migration and data
collection and data transfer products. Here we can not only collect data seamlessly but also monitor or
analyze the data that's being received in real time; there are cool products like AWS data transfer services
available that help achieve that. And then we have products like EC2, Elastic Compute Cloud; that's a
resizable computer, where we can at any time alter the size of the computer based on the need or based on the
forecast. Then we have Simple Notification Service: systems and tools available in Amazon to update us with
notifications through email or through SMS. Now anything, anything can be sent through email or through SMS if you use
that service. It could be alarms, or it could be service notifications if you want, stuff like that. And then we have
some security tools like KMS, Key Management Service, which uses AES 256-bit encryption to encrypt our data at rest.
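To give a feel for KMS in practice, here is a minimal boto3 sketch that encrypts and decrypts a small payload with a KMS key; the key alias is a placeholder, and for large data you would normally use envelope encryption rather than calling encrypt directly.

```python
import boto3

kms = boto3.client("kms")

# alias/my-app-key is a placeholder; create the key in KMS first.
ciphertext = kms.encrypt(
    KeyId="alias/my-app-key",
    Plaintext=b"database password",
)["CiphertextBlob"]

# KMS stores key metadata with the ciphertext, so decrypt of a
# symmetric-key ciphertext needs no KeyId.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)  # b'database password'
```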
Then we have Lambda, a service for which we pay only for the time, in seconds, it takes to execute our code.
And we're not paying for the infrastructure here; it's just the seconds the program is going to take to
execute the code. If it's a short program, we'll be paying for milliseconds. If it's a bit bigger
program, we'll probably be paying for 60 seconds or 120 seconds. But that's a lot cheaper, a lot simpler, and a lot more cost-effective
as against paying for a service on an hourly basis, which a lot of other services are. Well, that's cheap, but
using Lambda is a lot cheaper than that.
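For a sense of how little code is involved, here is a minimal sketch of a Python Lambda handler; the event shape is hypothetical, and AWS bills only for the time this function actually runs.

```python
import json

def lambda_handler(event, context):
    # event is whatever the trigger sends; here we assume a {"name": ...} payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```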
And then we have services like Route 53, a DNS service in the cloud. Now I do not have to maintain a DNS account somewhere else separate from my cloud environment with AWS; I can get both in the same
place. All right, let me talk to you about how AWS makes life easier, or how companies got benefited by using AWS
as their IT provider for their applications or for their infrastructure. Now, Unilever is a company, and they
had a problem, and they picked AWS as the solution to their problem.
This company was sort of spread across 190 countries, and they were relying on a lot of digital marketing for promoting
their products, and their existing environment, their legacy local environment, proved not to support their
changing IT demands, and they could not standardize their old environment. Now they chose to move part of their
applications to AWS because they were not getting what they wanted in their local environment. And since then,
rollouts were easy, provisioning applications became easy, and even provisioning infrastructure became easy,
and they were able to do all that with push-button scaling; needless to talk about backups that are safe and
backups that can be securely accessed from the cloud as needed. Now that company is growing along with AWS
because of their swift speed in rolling out deployments, being able to access secure backups from various
places, and generating reports, in fact useful reports, that help their business. Now, along the same lines,
let me also talk to you about Kellogg's and how they got benefited by using Amazon. Now, Kellogg's had a different
problem; it's one of a kind. Their business model was very dependent on an infra that would help analyze data
really fast, because they were running promotions based on the analyzed data that they get. So, being able
to respond to the analyzed data as soon as possible was critical, or vital, in their environment, and luckily, SAP
running on a HANA environment is what they needed, and they picked that service in the cloud, and that sort
of solved the problem. Now the company does not have to deal with maintaining their legacy infra, maintaining their
heavy compute capacity, and maintaining their database locally. All that has now moved to the cloud, or they are using
cloud as their IT service provider, and now they have a greater and more powerful IT environment that very much
complements their business. Hi there, I'm Samuel, a multiplatform cloud architect, and I'm very excited and
honored to walk you through this learning series about AWS. Let me start the session with this scenario. Let's
imagine how life would have been without Spotify. For those who are hearing about Spotify for the first time: Spotify is
an online music service offering instant access to over 16 million licensed songs. Spotify now uses the AWS
cloud to store the data and share it with their customers. But prior to AWS, they had some issues. Imagine using
Spotify before AWS. Let's talk about that. Back then, users were often getting errors because Spotify could not
keep up with the increased demand for storage every new day. And that led to users getting upset and cancelling
their subscriptions. The problem Spotify was facing at that time was that their users were present globally and were accessing
it from everywhere, and they had different latencies in their applications, and Spotify had a demanding situation
where they needed to frequently catalog the songs released yesterday, today, and in the future. And this was changing
every new day, and the rate of songs coming in was about 20,000 a day, and back then they could not keep up with this
requirement, and needless to say, they were badly looking for a way to solve this problem. And that's when they got
introduced to AWS, and it was a perfect fit and match for their problem. AWS offered dynamically increasing storage,
and that's what they needed. AWS also offered tools and techniques like storage life cycle management and
Trusted Advisor to properly utilize the resources, so we always get the best out of the resources used. AWS addressed
their concerns about easily being able to scale. Yes, you can scale the AWS environment very easily. How easily, one
might ask? It's just a few button clicks. And AWS solved Spotify's problem. Let's talk about how it can
help you with your organization's problem. Let's talk about what AWS is first, and then let's move into how AWS
became so successful and the different types of services that AWS provides and what's the future of cloud and AWS in
specific. Let's talk about that and finally we'll talk about a use case where you will see how easy it is to
create a web application with AWS. All right, let's talk about what AWS is. AWS, or Amazon Web Services, is a secure cloud
service platform. It also has a pay-as-you-go billing model, where there is no upfront or capital cost. We'll talk
about how soon the service will be available; well, the service will be available in a matter of seconds. With
AWS, you can also do identity and access management that is authenticating and authorizing a user or a program on the
fly. And almost all the services are available on demand and most of them are available instantaneously. And as we
speak, Amazon offers 100 plus services and this list is growing every new week. Now that would make you wonder how AWS
became so successful. Of course, it's their customers. Let's talk about the list of well-known companies that have
their IT environments in AWS. Adobe: Adobe uses AWS to provide multi-terabyte operating environments for its
customers. By integrating its system with the AWS cloud, Adobe can focus on deploying and operating its own software
instead of trying to deploy and manage the infrastructure. Airbnb is another company. It's a community
marketplace that allows property owners and travelers to connect with each other for the purpose of renting unique vacation
spaces around the world. And the Airbnb community's user activities are conducted on the website and through
iPhone and Android applications. Airbnb has a huge infrastructure in AWS, and they're using almost all the services in
AWS and are getting benefited from it. Another example would be Autodesk. Autodesk develops software for
engineering, design, and entertainment industries. Using services like Amazon RDS, or Relational Database Service, and
Amazon S3, or Amazon Simple Storage Service, Autodesk can focus on developing its machine learning tools
instead of spending that time on managing the infrastructure. AOL, or America Online, uses AWS, and using AWS,
they have been able to close data centers, decommission about 14,000 in-house and colocated servers,
move mission-critical workloads to the cloud, extend their global reach, and save millions of dollars on energy
resources. Bitdefender is an internet security software firm, and their portfolio of software includes antivirus
and anti-spyware products. Bitdefender uses EC2 and is currently running a few hundred instances that handle about
5 terabytes of data. They also use Elastic Load Balancer to load-balance the connections coming in to those
instances across availability zones, and they provide seamless global delivery of service because of that. The BMW Group:
it uses AWS for its new connected car application that collects sensor data from BMW 7 series cars to give drivers
dynamically updated map information. Canon's office imaging products division benefits from faster deployment times,
lower cost, and global reach by using AWS to deliver cloud-based services such as mobile print. The office imaging
products division uses AWS services such as Amazon S3, Amazon Route 53, Amazon CloudFront, and AWS IAM for their
testing, development, and production services. Comcast is the world's largest cable company and the leading
provider of internet service in the United States. Comcast uses AWS in a hybrid environment. Out of all the other
cloud providers, Comcast chose AWS for its flexibility and scalable hybrid infrastructure. Docker is a company
that's helping redefine the way developers build, ship, and run applications. This company focuses on
making use of containers for this purpose. And in AWS, the service called Amazon EC2 container service is helping
them achieve it. The ESA or European Space Agency. Although much of ESA's work is done by satellites, some of the
programs, data, storage, and computing infrastructure is built on Amazon Web Services. ESA chose AWS because of its
economical pay-as-you-go system as well as its quick startup time. The Guardian newspaper uses AWS, and it uses a wide
range of AWS services, including Amazon Kinesis and Amazon Redshift, that power an analytics dashboard which editors use to
see how stories are trending in real time. The Financial Times, FT, is one of the world's leading business news
organizations, and they used Amazon Redshift to perform their analysis. A funny thing happened: Amazon Redshift
performed so quickly that some analysts thought it was malfunctioning. They were used to running queries overnight, and
they found that the results were indeed correct, just much faster. By using Amazon Redshift, FT is supporting the
same business functions with costs that are 80 percent lower than before. General Electric, GE, is at the moment, as
we speak, migrating more than 9,000 workloads, including 300 disparate ERP systems, to AWS while reducing its data
center footprint from 34 data centers to 4 over the next 3 years. Similarly, Harvard Medical School, HTC, IMDb, McDonald's, NASA,
Kellogg's, and a lot more are using the services Amazon provides and are getting benefited from them. And this huge success
and customer portfolio is just the tip of the iceberg. And if we think about why so many adopt AWS, and if we let AWS answer
that question, this is what AWS would say: people are adopting AWS because of the security and durability of the data,
and the end-to-end privacy and encryption of the data and storage experience. We can also rely on the AWS way of doing things by
using the AWS tools and techniques and suggested best practices built upon the years of experience it has gained.
Flexibility: there is greater flexibility in AWS that allows us to select the OS, language, and database.
Easy to use, with swiftness in deploying: we can host our applications quickly in AWS, be it a new application or migrating an
existing application into AWS. Scalability: the application can be easily scaled up or scaled down
depending on the user requirement. Cost saving: we only pay for the compute power, storage, and other resources we
use, and that too without any long-term commitments. Now let's talk about the different types of services that AWS
provides. The services that we talk about fall into any of the following categories, you see: compute,
storage, database, security, customer engagement, desktop and streaming, machine learning, developer tools, stuff like
that. And if you do not see the service that you're looking for, it's probably because AWS is creating it as we speak.
Now let's look at some of them that are very commonly used. Within compute services, we have Amazon EC2, AWS
Elastic Beanstalk, Amazon Lightsail, and AWS Lambda. Amazon EC2 provides compute capacity in the cloud. Now this
capacity is secure and it is resizable based on the user's requirement. Now look at this. The requirement for the
web traffic keeps changing and behind the scenes in the cloud EC2 can expand its environment to three instances and
during no load, it can shrink its environment to just one resource. Elastic Beanstalk helps us to scale
and deploy web applications, and it works with a number of programming languages. Elastic Beanstalk is also an
easy-to-use service for deploying and scaling web applications and services developed in Java, .NET, PHP,
Node.js, Python, Ruby, and Docker, on familiar servers such as Apache, Passenger, and IIS. We can simply upload
our code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning to load
balancing to auto-scaling to application health monitoring. And Amazon Lightsail is a virtual private server which is easy to
launch and easy to manage. Amazon Lightsail is the easiest way to get started with AWS for developers who just need a
virtual private server. Lightsail includes everything you need to launch your project quickly on a virtual machine,
like SSD-based storage, a virtual machine, tools for data transfer, DNS management, and a static IP, and that too
for a very low and predictable price. AWS Lambda has taken cloud computing services to a whole new level. It allows
us to pay only for the compute time. No need for provisioning and managing servers. And AWS Lambda is a compute
service that lets us run code without provisioning or managing servers. Lambda executes your code only when needed and
scales automatically from a few requests per day to thousands per second. You pay only for the compute time you consume.
There is no charge when your code is not running. Let's look at some storage services that Amazon provides like
Amazon S3, Amazon Glacier, Amazon EBS, and Amazon Elastic File System. Amazon S3 is an object storage that can store
and retrieve data from anywhere. Websites, mobile apps, IoT sensors, and so on can easily use Amazon S3 to store
and retrieve data. It's an object storage built to store and retrieve any amount of data from anywhere. With features
like flexibility in managing data, the durability it provides, and the security that it provides, Amazon Simple
Storage Service, or S3, is storage for the internet.
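As a small illustration of "storage for the internet", here is a hedged boto3 sketch that uploads a file to S3 and hands out a time-limited download link; the bucket name and file are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder bucket name

# Upload a local file as an object.
s3.upload_file("report.pdf", bucket, "reports/report.pdf")

# Generate a presigned URL so anyone with the link can download the object
# for the next hour, without needing AWS credentials of their own.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": "reports/report.pdf"},
    ExpiresIn=3600,
)
print(url)
```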
Next, Glacier. Glacier is a cloud storage service that's used for archiving data and long-term backups. And this Glacier is a secure, durable, and extremely low-cost cloud storage
service for data archiving and long-term backups. Amazon EBS, Amazon Elastic Block Store, provides block storage volumes
for EC2 instances. And this Elastic Block Store is a highly available and reliable storage volume that can
be attached to any running instance that is in the same availability zone. EBS volumes that are attached to EC2
instances are exposed as storage volumes that persist independently from the lifetime of the instance. And Amazon
Elastic File System, or EFS, provides elastic file storage which can be used with AWS cloud services and resources
that are on premises. Amazon Elastic File System is simple, scalable, elastic file storage for use
with Amazon cloud services and for on-premises resources. It's easy to use and offers a simple interface that
allows you to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on
demand without disturbing the application, growing and shrinking automatically as you add and remove
files, so your applications have the storage they need, when they need it. Now let's talk about databases. The two
Now let's talk about databases. The two major database services are Amazon RDS and Amazon Redshift. Amazon RDS really eases the process involved in setting up, operating, and scaling a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administrative tasks such as hardware provisioning, database setup, patching, and backups. It frees us from managing hardware and helps us focus on the application. It is cost-effective and resizable, it is optimized for memory, performance, and input/output operations, and it automates most routine chores like taking backups and monitoring.
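As a sketch of how little there is to manage, here is how a managed MySQL instance might be launched with boto3; the identifier and credentials are placeholders.

    # Launching a managed MySQL instance on Amazon RDS.
    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="example-db",   # hypothetical name
        Engine="mysql",
        DBInstanceClass="db.t3.micro",
        AllocatedStorage=20,                 # GiB
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",     # never hard-code in real code
    )
    # From here, RDS handles provisioning, patching, and backups.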
Amazon Redshift is a data warehousing service that enables users to analyze data using SQL and other business intelligence tools. Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing business intelligence tools. It also allows you to run complex analytic queries against petabytes of structured data using sophisticated query optimization, and most results generally come back in seconds.
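For a feel of the SQL side, here is a hedged sketch using the Redshift Data API from boto3; the cluster, database, user, and table names are all hypothetical.

    # Running a SQL query against a Redshift cluster via the Data API.
    import boto3

    rsd = boto3.client("redshift-data")

    resp = rsd.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="analytics",
        DbUser="awsuser",
        Sql="SELECT product, SUM(amount) FROM sales GROUP BY product;",
    )
    # The statement runs asynchronously; results are fetched later with
    # get_statement_result using this ID.
    print(resp["Id"])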
All right, let's quickly talk about some more services that AWS offers. There are many more services than we can cover, but we will look at a few more that are widely used. AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers. Planning a data center migration can involve thousands of workloads, and they are often deeply interdependent. Server utilization data and dependency mapping are important early steps in the migration process, and AWS Application Discovery Service collects and presents configuration, usage, and behavior data from your servers to help you better understand your workloads.
Route 53 is a networking and content delivery service. It is a highly available and scalable cloud Domain Name System, or DNS, service, and Amazon Route 53 is fully compliant with IPv6 as well. Elastic Load Balancing is also a networking and content delivery service. It automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, and it can handle the varying load of your application traffic within a single Availability Zone or across Availability Zones.
AWS Auto Scaling monitors your application and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes, and it can be applied to web services as well as database services.
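One common way to set this up, sketched here with boto3 (the group name is hypothetical), is a target-tracking policy that holds average CPU near a chosen value.

    # Attaching a target-tracking scaling policy to an Auto Scaling group.
    import boto3

    asg = boto3.client("autoscaling")

    asg.put_scaling_policy(
        AutoScalingGroupName="example-asg",      # hypothetical group
        PolicyName="keep-cpu-near-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,   # add/remove instances to hold ~50% CPU
        },
    )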
AWS Identity and Access Management, or IAM, enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups and use permissions to allow or deny their access to AWS resources. Moreover, it's a free service.
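Here is a small boto3 sketch of the users-groups-permissions model just described; the user and group names are examples, and the policy ARN is one of AWS's managed policies.

    # Creating an IAM user and granting access through a group policy.
    import boto3

    iam = boto3.client("iam")

    iam.create_user(UserName="example-user")
    iam.create_group(GroupName="s3-readers")
    iam.add_user_to_group(GroupName="s3-readers", UserName="example-user")

    # Allow the group read-only S3 access via an AWS managed policy.
    iam.attach_group_policy(
        GroupName="s3-readers",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )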
Now, let's talk about the future of AWS. Well, let me tell you something: cloud is here to stay. Here's what's in store for AWS in the future. As the years pass, we're going to see a variety of cloud applications born, like IoT, artificial intelligence, business intelligence, serverless computing, and so on. Cloud will also expand into other markets like healthcare, banking, space, automated cars, and so on. As I mentioned earlier, greater focus will be given to artificial intelligence, and eventually, because of the flexibility and advantages the cloud provides, we're going to see a lot more companies moving into the cloud.
All right, let's now talk about how easy it is to deploy a web application in the cloud. The scenario here is that our users like a product, and we need a mechanism to receive input from them about their likes and dislikes and give them the appropriate product for their needs. Though the setup and the environment may look complicated, we don't have to worry, because AWS has the tools and technologies to help us achieve it. We're going to use services like Route 53, CloudWatch, EC2, S3, and more, and all of these put together are going to give us a fully functional application that receives that information and meets our need. So back to our original requirement: all I want is to deploy a web application for a product that keeps our users updated about the happenings and new arrivals in the market. To fulfill this requirement, here are the services we would need. EC2 is used for provisioning the computational power needed for this application; EC2 has a vast variety of instance families and types that we can pick from to match the type and intent of the workload. We're also going to use S3 for storage; S3 covers any additional storage requirement for the resources or for the web application. We're going to use CloudWatch for monitoring; CloudWatch monitors the application and the environment and provides triggers for scaling the infrastructure in and out. And we're going to use Route 53 for DNS; Route 53 helps us register the domain name for our web application. With all of these tools and technologies put together, we're going to build an application that caters to our need.
All right. So, I'm going to use Elastic Beanstalk for this project. The name of the application is going to be, as you can see, GSG-signup, and the environment name is GSG-signup-environment-1. Let me also pick a domain name; let me see if this name is available. Yes, it's available, so let me pick that. The application I have is going to run on Node.js, so let me pick that platform and launch. As you can see, Elastic Beanstalk is going to launch an instance, set up the monitoring environment, create a load balancer, and take care of all the security configuration needed for this application.
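The same setup can be scripted instead of clicked through; here is a hedged boto3 sketch mirroring the demo. The solution stack string varies by region and over time, so in practice you would first check what list_available_solution_stacks returns.

    # Creating the Beanstalk application and environment from code.
    import boto3

    eb = boto3.client("elasticbeanstalk")

    eb.create_application(ApplicationName="gsg-signup")
    eb.create_environment(
        ApplicationName="gsg-signup",
        EnvironmentName="gsg-signup-environment-1",
        # Example stack name only; query the current list before using.
        SolutionStackName="64bit Amazon Linux 2 v5.8.0 running Node.js 18",
    )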
All right, look at that. I was able to go to the URL we chose, and it now shows a default page, meaning all the software dependencies are installed and it's just waiting for me to upload the code, or specifically, the page required. So let's do that. Let me upload the code; I already have the code saved here. That's my code, and the deployment is going to take some time. All right, it has done its thing. Now if I go to the same URL, look at that: I'm shown the application's signup page. If I sign up with my name, email, and so on, it's going to receive that information and send an email to the owner saying that somebody has subscribed to the service; that's the default feature of this app. Look at that: an email to the owner saying that somebody has subscribed to the app, along with their email address and other details. Not only that, it also creates an entry in the database, and DynamoDB is the service this application uses to store data. There's my DynamoDB. If I go to Tables, and then to Items, I can see that a user named Samuel with such-and-such an email address has shown interest in the preview of my site or product. So this is how I collect that information.
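Behind the scenes, the app's writes and reads look roughly like this boto3 sketch; the table and attribute names are hypothetical stand-ins for whatever the sample app actually uses.

    # Recording and listing signups in DynamoDB.
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("gsg-signup-users")   # hypothetical table name

    # Store a signup entry.
    table.put_item(Item={"email": "samuel@example.com", "name": "Samuel"})

    # Browse the collected entries, like viewing Items in the console.
    for item in table.scan()["Items"]:
        print(item["name"], item["email"])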
Some more things about the infrastructure itself: it is running behind a load balancer. Look at that, it has created a load balancer, and it has also created an autoscaling group; that's a feature of the Elastic Beanstalk environment we chose. Now let's look at this URL. You see, it's not a fancy URL; it's an Amazon-generated, dynamic URL. So let's put this URL behind our own DNS. Go to Services, go to Route 53, go to Hosted Zones, and there we can find the DNS name. So that's our DNS name. All right, let's create an entry and map that URL to our load balancer, and create. Now, if I go to this URL, it should take me to the application. All right, look at that: I went to my custom URL, and it points to my application. Previously my application had a random URL, and now it has a custom URL.
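The console steps above can also be expressed with boto3; in this sketch the hosted zone ID, domain, and environment URL are hypothetical. A CNAME to the Beanstalk environment URL works for a subdomain; an alias A record pointing at the load balancer is the other common choice.

    # Mapping a custom domain to the Beanstalk environment's URL.
    import boto3

    r53 = boto3.client("route53")

    r53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",   # hypothetical zone ID
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{
                    "Value": "gsg-signup-env.us-east-1.elasticbeanstalk.com"
                }],
            },
        }]},
    )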
So what did we learn? We started the session with what AWS is. We looked at the features, tools, technologies, and products that AWS provides, and at how AWS became so successful. We looked into the benefits and features of AWS in depth, surveyed some of the services AWS provides, and then picked particular services and talked about them, like EC2, Elastic Beanstalk, Lightsail, Lambda, storage, and so on. Then we looked at the future of AWS and what it holds in store for us. Finally, we walked through a lab in which we created an application using Elastic Beanstalk, and all we had to do was a couple of clicks, and boom, an application was available that was connected to the database, to the Simple Notification Service, to CloudWatch, and to storage.
What is Azure? What's this big cloud service provider all about? Azure is a cloud computing platform provided by Microsoft. It's basically an online portal through which you can access and manage resources and services; for example, you can store your data and transform it using the services Microsoft provides. All you need is the internet and the ability to connect to the Azure portal; then you get access to all of the resources and services. In case you want to know more about how it differs from its rival AWS, I suggest you watch the AWS versus Azure video, so you can clearly tell how these two cloud service providers differ from each other. Now, here are some things you need to know about Azure. It was launched on February 1st, 2010, which is significantly later than AWS. It's free to start and has a pay-per-use model, which means, like I said before, you pay for the services you use. One of its most important selling points is that 80% of Fortune 500 companies use Azure services, which means most of the bigger companies of the world trust Azure. And Azure supports a wide variety of programming languages: C#, Node.js, Java, and many more. Another very important selling point of Azure is the number of data centers it has across the world. It's important for a cloud service provider to have many data centers around the world, because it means they can provide their services to a wider audience. Azure has 42 regions, which is more than any other cloud service provider at the moment, and it expects to add 12 more over time, bringing the total number of regions it covers to 54. Now, let's talk about Azure services.
Azure services span 18 categories and more than 200 services, so we clearly can't go through all of them. There are services that cover compute, machine learning, integration, management tools, identity, DevOps, web, and much more. You're going to have a hard time finding a domain that Azure doesn't cover, and if it doesn't cover it now, you can be certain they're working on it as we speak. So, first, let's start with the compute services. First, Virtual Machines. With this service, you can create a virtual machine with a Linux or Windows operating system. It's easily configurable: you can add RAM, decrease RAM, add storage, or remove it, all in a matter of seconds. Now, let's talk about the second service, Cloud Services. With this, you can create an application within the cloud, and all of the work after you deploy it is taken care of by Azure, which includes provisioning the application, load balancing, and ensuring the application stays in good health. Next up, let's talk about Service Fabric. With Service Fabric, the process of developing a microservice is greatly simplified. You might be wondering what exactly a microservice is: a microservice is basically an application that consists of smaller applications coupled together. Next up, Functions. With Functions, you can create applications in any supported programming language you want. Another very important part is that you don't have to worry about any hardware: you don't have to worry about how much RAM or storage you require, because all of that is taken care of by Azure. All you need to do is provide the code, and Azure will execute it; you don't have to worry about anything else.
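For a taste of how little scaffolding a function needs, here is a minimal HTTP-triggered Azure Function in Python; this is a sketch of the model where a function.json file wires the trigger to this handler.

    # A minimal Azure Functions HTTP handler in Python.
    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        # Azure allocates the compute only when a request arrives;
        # there is no RAM or storage sizing to manage.
        name = req.params.get("name", "world")
        return func.HttpResponse(f"Hello, {name}!")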
Now, let's talk about some networking services. First up, we have Azure CDN, the content delivery network. The Azure CDN service is for delivering web content to users: high-bandwidth content that can be delivered to any person across the world. It is a network of servers placed in strategic positions around the world so that customers can obtain the data as fast as possible. Next up, we have ExpressRoute. With this, you can connect your on-premises network to the Microsoft cloud, or to any of the services you want, through a private connection, so the only communication that happens is between your on-premises network and the service you choose. Then you have Virtual Network. With Virtual Network, you can have any of the Azure services communicate with each other in a secure, private manner. Next, we have Azure DNS. Azure DNS is a hosting service that allows you to host your DNS, or Domain Name System, domains in Azure, so you can serve your application's DNS from Azure.
Now for the storage services. First up, we have Disk Storage. With this service, you're given a cost-effective choice of HDD or solid-state drives to go along with your virtual machines, based on your requirements. Then you have Blob Storage, which is optimized to store massive amounts of unstructured data, including text and even binary data. Next, you have File Storage, a managed file storage service that is accessible via the SMB protocol, the Server Message Block protocol. And finally, you have Queue Storage. With Queue Storage, you get durable message queuing for extremely large workloads, and the most important part is that it can be accessed from anywhere in the world.
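Here is a hedged Python sketch touching two of these services, Blob Storage and Queue Storage; the connection string, container, and queue names are placeholders, and the container is assumed to already exist.

    # Uploading a blob and enqueuing a message with the Azure Storage SDKs.
    from azure.storage.blob import BlobServiceClient
    from azure.storage.queue import QueueClient

    conn = "DefaultEndpointsProtocol=...;AccountName=...;AccountKey=..."

    # Blob Storage: unstructured text or binary data.
    blobs = BlobServiceClient.from_connection_string(conn)
    container = blobs.get_container_client("example-container")
    container.upload_blob("hello.txt", b"Hello, Blob Storage")

    # Queue Storage: durable messages, readable from anywhere.
    queue = QueueClient.from_connection_string(conn, "example-queue")
    queue.send_message("process-order-42")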
Now let's talk about how Azure can be used. First, for application development: it could be any application, though mostly web applications. Then you can test the application to see how well it works, and you can host it on the internet. You can create virtual machines; like I mentioned before, with that service you can create virtual machines of any size and amount of RAM you want. You can integrate and sync features, and you can collect and store metrics, for example how the data flows, what its current state is, and how you can improve on it. All of that is possible with these services. You also have virtual hard drives, an extension of the virtual machines that provides a large amount of storage where data can be kept. We'll talk about Azure at great length and breadth, and if you're looking for a video that walks you through all the services in Azure, this could be one of the best you'll find on the internet. Without any further delay, let's get started.
Everybody likes stories, right? So let's get started with a story. In a city not so far away, a CEO had plans to expand his company globally and called one of his IT personnel for an IT opinion. This person had been in the company for a long time and was very seasoned with the company's infrastructure, and he answered the questions nicely with what he foresaw. He said, "I have good news and bad news for us going global," and he started with the good news: "Sir, we're well on our way to becoming one of the world's largest shipping companies." The bad news, however: "Our data centers have almost run out of space, and setting up new ones around the world would be too expensive and very time-consuming." Now, the IT person, let's call him Mike, explained the situation as he saw it. But the CEO had done some homework about how he was going to do it, and he answered, "Don't worry about that, Mike. I've come up with a solution for the problem, and it's called Microsoft Azure." Mike is a hardworking and honest IT professional, but he had not spent time learning the latest technologies, and he asked very honestly, "Oh, how does it solve the problem?" So the CEO begins to explain Azure to Mike. He starts with what cloud computing is, then goes on to talk about Azure, the services offered by Azure, why Azure is better than the other cloud providers, the great companies that use Azure and how they benefited from it, and then winds it all up with the use cases of Azure. He begins his explanation by saying that Microsoft Azure is a cloud service provider and works on the basis of cloud computing. Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud computing platform. It provides a range of cloud services, including compute, analytics, storage, and networking. We can always pick and choose from these services to develop and scale our applications, or even plan on running existing applications in the public cloud. Microsoft Azure is both a platform as a service and an infrastructure as a service.
Let's now fade their conversation out and talk about what cloud computing is, the services offered by Azure, how Azure is leading compared to other cloud service providers, and which companies are using Azure. In simple terms, cloud computing is being able to access compute services, like servers, storage, databases, networking, software, analytics, intelligence, and a lot more, over the internet, which is the cloud, with flexibility in the resources we use: anytime I want a resource, I can use one and it becomes available immediately, and anytime I want to retire a resource, I can simply retire it and stop paying for it. We also typically pay only for the services we use, and this helps greatly with our operating costs, letting us run our infrastructure more efficiently and scale our environment up or down depending on business needs and changes. All the servers, storage, databases, and networking are accessed through a network of remote systems hosted on the internet, typically in the provider's data centers, which in this case means Azure. We don't use any physical or on-premises server here. Well, we still use physical servers and VMs hosted on physical hardware, but they're all in the provider's environment; none of them sit on premises or in our data center. We only access them remotely. It looks and feels the same, except for the fact that they are in a remote location. We access them remotely, do all the work remotely, and when we're done, we can shut them down and stop paying for them.
Some of the use cases of cloud computing are creating applications and services. Another use case is using the cloud for storage alone. If there is one thing that keeps growing in an organization, it's storage. Every new day there is a new storage requirement; it's very dynamic and very hard to predict. If we go out and buy a big storage capacity up front, then until we use that capacity fully, we're wasting money on the empty storage. Instead, I can go for storage that scales dynamically in the cloud: put the data in the cloud and pay only for what you're storing, and if next month you've deleted or flushed out some files, you pay less. So it's very dynamic storage, and a lot of companies are benefiting from storing data in the cloud because of its dynamic nature and the low cost that comes along with it. Also, a lot of providers, like Azure, give data replication for free and promise an SLA on the data we store in the cloud, and they provide data recovery as well. If something goes wrong with the physical disk where our data is stored, Azure automatically makes our data available from the redundant copies it keeps elsewhere, because of the SLA it wants to honor.
The other use case for Azure is hosting websites and running blogs using the compute service. Or take storing music and letting your users stream it: Azure is a good place to store and stream music with the benefit of the CDN, the content delivery network, which allows us to stream video or audio files at great speed. With Azure, our audio or video application works seamlessly, because content is delivered to the client with very low latency, and that improves the customer experience for our application. The Azure compute service is also a good place for delivering software on demand. There is a lot of software we can buy through Azure, everything on a pay-as-you-go service model, so anytime we need some software, we can go and buy it immediately for, say, the next hour or two, use it, and then return it; we're not bound to any yearly licensing cost. Azure's compute services also have analytics available, with which we can analyze and get a good visualization of what's going on in a network, be it logs, performance, or metrics. Instead of manually searching through the heaps and heaps of logs we have saved, the Azure analytics services help us get a good visual of what's going on in the network: where have we dropped, where have we increased, what's the major driver, what are the top ten errors we get in the server or the application, and so on. Those answers can be easily gathered from the Azure analytics services. Now, "cloud" is really just a very cool term for the internet. A good analogy: anytime we draw a diagram and do not know how things are transferred, we simply draw a cloud. For example, when a mail gets sent from a person in one country to a person in another country, a lot happens between the time you hit the send button and the time the other person hits the read button, and the simplest way of putting it in a picture is to draw a cloud, with one person sending the email on one end and the other person reading it on the other end. So the cloud is really a cool term for the internet. That's some basics about cloud computing.
about cloud computing in general, let's talk about Microsoft Azure as a cloud service. Now, Microsoft Azure is a set
of cloud services to build, manage, and deploy applications on a network with the help of Microsoft Azure's
frameworks. Now, Microsoft Azure is a computing service created by Microsoft basically for building, testing,
deploying, and managing applications and services through a global network of Microsoft managed data centers. Now,
Microsoft Azure provides SAS which is software as a service and PAS which is platform as a service and IAS
infrastructure as a service and they support many different programming languages tools and framework and those
tools and framework include both Microsoft specific and third party software. Now let me pick and talk about
a specific service for example management. Azure automation provides a way for us to automate the manual long
running and frequently repeated task that are commonly performed task both in cloud and enterprise environment. It
saves us a lot of time and increases the reliability and it kind of gives a good administrative control and even
schedules the task automatically to be performed on a regular basis. To give you a quick history of Microsoft Azure,
To give you a quick history of Microsoft Azure: it was launched on February 1st, 2010, and it was named an industry leader for infrastructure and platform as a service by Gartner, the world's leading research and advisory company. Microsoft Azure supports a number of programming languages, like C#, Java, and Python. All these cool services we get to use, and we pay only for how much we use. For example, if we use a service for an hour, even the costliest system available, we only pay for that particular hour, and then we're done; there is no more billing on the resource we used. Microsoft Azure has spread itself across more than 50 regions around the world, so it's quite easy for us to pick a region and start provisioning and running our applications, probably from day one, because the infrastructure, tools, and technologies needed to run our application are already available. All we have to do is commit the code in that particular region, or build an application and launch it there, and it becomes live starting day one. And because we have 50-plus regions around the world, we can very carefully design our environment to provide low-latency services to our customers.
In a traditional data center setup, by contrast, customers' requests might have to travel all the way around the globe to reach a data center that lives on the other side of the planet, which adds latency, and it is really not feasible to build a data center near each customer location because of the cost involved. But with Azure it's possible: Azure already has data centers around the world, and all we have to do is pick one and build an environment there; it's available starting day one. On top of that, cost is considerably reduced, because we are using a public cloud instead of physical infrastructure to serve those customers from a nearby location. And the services Azure offers keep increasing. As of now, as we speak, there are 200-plus services offered, spanning different domains, platforms, and technologies available within the Azure portal. We're going to talk about that later in this section, so hold your breath till we get to it; for now, just know that Azure offers more than 200 services. Let's now talk about the different services in Azure.
The wide variety of services available in Azure includes:
- Artificial intelligence and machine learning, plus analytics services that give us a good visual of how the application is performing and what categories of data are stored, and that can read from the logs.
- A variety of compute services: VMs of different sizes with different operating systems, and different containers.
- Different types of databases.
- A lot of developer tools.
- Identity services to manage our users in the Azure cloud; those users can be integrated or federated with external providers such as Google, Facebook, or LinkedIn.
- IoT services, tools, and technologies.
- Management tools to manage users: creating an identity is one thing, and managing it on top of that is a totally different thing, and we have tools and technologies for both.
- Cool services for data migration; data migration is now made simple.
- Tools and technologies for mobile application development.
- Networking services with which I can plan my own network in the cloud.
- Security services, both Azure-provided and third-party, that I can implement on the Azure cloud.
- Lots of storage options in the cloud.
These are just a glimpse of the big list of services available in the Azure cloud.
Let's now talk about specific services; take compute, for example. Whenever we're building a new application or deploying an existing one, the Azure compute service provides the infrastructure we need to run and maintain it. We can easily tap into the capacity the Azure cloud has and scale our compute requirement on demand. We can also containerize our applications. We have the option of choosing Windows or Linux virtual machines, and we can take advantage of the flexible options Azure provides for migrating our VMs to Azure, and a lot more. These compute services also include a full-fledged identity solution, meaning integration with Active Directory in the cloud or on premises. Let's look at some of the services this compute domain provides. One of them is Virtual Machines. Azure Virtual Machines gives us the ability to develop and manage a virtualized environment inside Azure's cloud, within a virtual private network. We will talk about virtual private networks at a later point, but for now, just know that there are a lot of services in the Azure compute domain that we can benefit from.
We can always choose from a very wide range of compute options. For example, we have the option to choose the operating system, and the option to choose whether the system should be on premises, in the cloud, or maintained in both. We can choose whether to bring our own operating system image with some software attached to it, or to buy the operating system from the Azure Marketplace. These are just a few of the options available when we want to buy a compute environment. These compute environments are also easily scalable, meaning we can scale from one VM instance to thousands of virtual machines in a matter of minutes, or simply put, in a couple of button clicks. And all these services are available on a pay-for-what-you-use model, meaning there is no upfront cost: we use the service and then pay for what we have used. There is no long-term commitment when it comes to using virtual machines in the cloud, and most of these services are billed on a pay-per-minute basis. Because of the pay-per-minute billing model, at no point will we be overpaying for any of the services. That's attractive, isn't it?
Now, let's talk about the Batch service. Batch is platform independent: regardless of whether you choose Windows or Linux, it's going to run fairly well, and with Batch we can take advantage of each environment's unique features. In short, the Batch service helps us manage the whole batch environment, and it also helps schedule the jobs. Azure Batch runs large-scale parallel and high-performance computing workloads, and because of that, batch jobs are highly efficient in Azure. When we run batch workloads, Azure Batch creates a pool of compute nodes, installs the applications we want to run, and then schedules jobs onto the individual nodes in those pools. As a customer, there is no need for us to install a cluster, install scheduling software, or manage and scale that infrastructure, because everything is managed by Azure. The Batch service is a platform as a service, and there is no additional charge for using it: the only charges we pay are for the virtual machines the service uses, the storage we consume, and of course the networking services we use along with it. Let's summarize the Batch service: we have a choice of operating system, and it scales by itself. The alternative to Batch would be queues, but with queues we have to pre-provision and pay for infrastructure even when we're not using it, whereas with Batch we only pay for what we use, and the Batch service helps us manage the application and the scheduling as a whole, as if they were just one thing. Next in the compute domain, let's talk about Service Fabric.
Service Fabric is a distributed systems platform that helps us package, deploy, and manage scalable and very reliable microservices and containers. And how does it help? Azure Service Fabric helps developers and administrators avoid complex infrastructure problems so they can focus on implementing workloads and taking care of their application, instead of spending time on infrastructure. So what is Service Fabric? It provides runtime capabilities and lifecycle management for applications composed of microservices, with no infrastructure management at all. With Service Fabric, we can easily scale an application to tens, hundreds, or even thousands of machines, where machines can also mean containers. Next in the compute domain, let's talk about virtual machine scale sets.
A virtual machine scale set lets us create a group of identical, load-balanced VMs. I just want to say it again: it helps us manage a group of identical, load-balanced VMs. The number of VM instances in a scale set can increase or decrease in response to demand, or in response to a schedule that we define. The resources needed on a Monday morning are not the same as those required on a Saturday or Sunday morning, and even within a day, the resources needed at the beginning of business hours are not the same as those needed at noon or after 8 or 9 in the evening. So demand can vary in the environment, and the scale set helps us take care of that varying demand, or of different infrastructure requirements on different schedules, throughout the day, the week, the month, or even the year. Scale sets also allow us to provide high availability to our applications, and they help us centrally manage, configure, and update a large number of VMs as if they were just one thing. Now you might ask: virtual machines are enough, so why would we need a virtual machine scale set? Like I said, a scale set gives us greater redundancy and improved performance for our applications, and those applications can be accessed through a load balancer that distributes requests across the application instances. So in a nutshell, a virtual machine scale set helps us create a large number of identical virtual machines, lets us increase or decrease their number, and lets us centrally manage, configure, and update a big group of VMs, and it's a great fit for big data and container workloads. Next in the compute domain, let's talk about Cloud Services.
Azure Cloud Services is a platform as a service. It is designed for applications that require scalability and reliability and that, on top of that, you want to be very inexpensive to operate; Azure Cloud Services provides all of this. So where does a cloud service run? Well, it runs on a VM, but it's a platform as a service. VMs by themselves are infrastructure as a service; when we run applications on VMs through Cloud Services, it becomes platform as a service. Here is how to think about it. With infrastructure as a service, like VMs, we first create and configure the environment and then run applications on top of it. Look at the responsibility: with VMs, we manage everything end to end, like deploying new patches, picking the versions of the operating system, and making sure they stay intact; it's all managed by us. On the contrary, with platform as a service, it's as if the environment is already ready. All you have to do is deploy your application into it; you don't manage the platform as an administrator, because all the administration, such as deploying new versions of the operating system, is handled by Azure. We deploy the application and we manage the application; that's it. Infrastructure management is handled by Azure.
So what does Cloud Services provide? It provides a platform where we can write application code without having to worry about hardware: simply hand over the code, and Cloud Services takes care of it. Responsibilities like patching, what to do if something crashes, how to update the infrastructure, and how to manage maintenance or downtime in the underlying infrastructure are all handled by Azure. It also provides a good testing environment: we can simply run the code and test it before it's actually released to production. I want to expand a bit on testing applications. Azure Cloud Services gives us a staging environment for testing a new release without affecting the existing release, which reduces customer downtime. We can run the application, test it, and whenever it's ready for production, all we need to do is swap the staging environment into the production environment; the old production environment then becomes the new staging environment, where we can add more to it and swap it back at a later point. So it gives us a swappable environment for testing our applications. Not only that, it gives us health monitoring and alerts: it helps us monitor the health and availability of our application, there is a dashboard that shows the key statistics all in one place, and we can also set up real-time alerts to warn us when service availability, or a certain metric we are concerned about, degrades. Next in the compute domain, let's talk about Functions.
Functions are serverless computing. Many times, when you hear about Azure being serverless, the person talking to you is referring to serverless computing, or Azure Functions, which is a serverless computing service hosted on Microsoft Azure. The main motive of Functions is to accelerate and simplify application development. Functions helps us run code on demand without needing to pre-provision or manage any Azure infrastructure. Azure Functions are scripts, or pieces of code, that run in response to an event you want to handle. In short, we can write just the code we need for the problem at hand without worrying about the whole application, or about the infrastructure that will run the code. And best of all, when we use Functions, we only pay for the time that our code runs. So what does Azure Functions provide? It allows users to build applications using simple serverless functions, in a programming language of our choice; the languages currently supported are C#, F#, Node.js, Java, and PHP. Here we really don't have to worry about provisioning or maintaining servers: if the code requires more resources, Azure Functions provides the additional resources needed. And the best part is that we only pay for the amount of time the functions are running: not for the resources, but for the amount of time the function runs. Next, moving to a new domain, let's talk about containers in Azure.
The container services in Azure allow us to quickly deploy a production-ready Kubernetes or Docker Swarm cluster. Now, what's a container? A container is a standard unit of software that packages code and all its dependencies, so the application runs quickly and reliably from one computing environment to another: from development to staging to production, from one production environment to another, from on premises to the cloud, or from one cloud to another, and vice versa. Imagine we had the option not to worry about the VM and to just focus on the application; well, that's exactly what containers help us achieve. Container instances enable us to focus on applications, without worrying about managing VMs, learning new tools required to manage them, or handling their deployment. Our applications run in containers, and running in containers is what frees us from needing to manage the virtual machines. These containers can be deployed into the cloud using a single command if you're using the command-line interface, or a couple of button clicks if you're using the Azure portal, and containers are kept lightweight but are just as secure as virtual machines. Next, let's talk about container services.
The container service, sometimes called Azure Kubernetes Service, helps us manage containers: a container is one thing, and a service used to manage containers is another thing. Let's expand on this a bit. Azure Container Service, or ACS, provides a way to simplify the creation, configuration, and management of a cluster of virtual machines that are preconfigured to run containerized applications on top of them. Deploying the virtual machines that run the containers might take 15 to 20 minutes, and once they are provisioned, we can manage them simply by opening an SSH tunnel into them. When ACS runs applications, it runs them from Docker images. What does that mean? Docker images make sure the applications the containers run are fully portable; images are portable, and ACS also helps us orchestrate the container environment. Not only that, it helps ensure that the applications we run in containers can be scaled to thousands, or even tens of thousands, of containers. So in a nutshell, moving an existing application into a container and running it using AKS or ACS is really easy; that's what it's all about, making application management and migration easy. Managing a container-based architecture, which as we discussed could mean tens of thousands of containers, is made simple using these container services, and even training a model on a large data set in a complex, resource-intensive environment is something AKS helps simplify. All right, next in the container domain, let's talk about Container Registry.
We touched on registries a little when we spoke about Docker images. A container registry is a single place where we can store our images, which are Docker images; when we use containers, it's Docker images that we use. Azure Container Registry is a central registry that eases container development by simplifying the storage and management of container images. There we can store all kinds of images, whether they are used in Docker Swarm or in Kubernetes; everything can be stored in Container Registry in Azure. Anytime we store a container image, it gives us an option for geo-replication, which means we can efficiently manage a single registry replicated across multiple regions. This geo-replication enables us to manage global deployments, assuming we have an environment that requires a global deployment, as one entity. We would be editing one image, and that image gets replicated to the replication centers we have set up around the globe, so just one edit updates the global images, and those global images provision the global application: one edit, then replication, then provisioning of the applications worldwide. This replication also helps with network latency, because anytime an application needs to deploy, it does not have to rely on a single source reachable only through a high-latency network. Since we have replicas around the world, anytime the application checks back, it pulls from a location very near to it. Geo-replication means we manage the registry as a single entity that is replicated across multiple regions of the globe. As the next thing in our learning, let's talk about Azure databases.
Azure databases come in many flavors, relational among them, and we're going to look at the different flavors Azure offers, including NoSQL and cache types, one at a time. Azure SQL Database is a relational database; in fact, it's a relational database as a service, managed by Azure, so we don't have to do much management on it. It's a relational database as a service based on the Microsoft SQL Server database engine, and it is a high-performance, very reliable, and very secure database. For this high reliability, high performance, and high security, we really don't have to do anything; it comes along with the service and is managed by Azure. There are two things I definitely need to mention about Azure SQL Database: number one, it's an intelligent service, and number two, it's fully managed by Azure. It also has one especially good feature: built-in intelligence that learns app patterns and adapts to maximize performance, reliability, and data protection for the application. That's something not found in many of the other cloud providers I'm aware of, so I thought I'd mention it. It uses built-in intelligence to learn the user's database patterns and help improve performance and protection, and migrating or importing data is very easy with Azure SQL Database, so it can be used immediately for analytics, reporting, and intelligent applications in Azure.
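Connecting to Azure SQL Database looks just like connecting to any SQL Server; here is a sketch with pyodbc, where the server, database, credentials, and driver version are all placeholders that depend on your setup.

    # Querying Azure SQL Database from Python via ODBC.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=example-server.database.windows.net;"   # hypothetical
        "DATABASE=exampledb;UID=appuser;PWD=REPLACE_ME"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 name FROM sys.tables;")
    for row in cursor.fetchall():
        print(row.name)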
Next, let's talk about Azure Cosmos DB. Azure Cosmos DB is a database service for NoSQL workloads, created to provide low latency for applications that scale dynamically and rapidly. Azure Cosmos DB is a globally distributed, multi-model database. It can be provisioned with the click of a button; that's all we have to do to provision Cosmos DB in Azure. It helps with scaling the database: we can elastically and independently scale throughput and storage across the database, in any of the Azure geographic regions. It provides good throughput, good latency, and good availability, and Azure promises a comprehensive SLA that no other database can offer; that's the best part about Cosmos DB. Cosmos DB was built with global distribution and horizontal scale in mind, and we can use all of this by paying only for what we have used. And remember, the difference between Azure Cosmos DB and Azure SQL Database is that Cosmos DB supports NoSQL whereas SQL Database does not. A few other things about Azure Cosmos DB: it allows users to work with key-value, graph, column-family, and document data, and it gives users a number of API options, like SQL, JavaScript, MongoDB, and a few others you might want to check in the documentation at the time of reading. And the best part here is that we get all of this by paying only for the storage and throughput we require, and the storage and throughput can be elastically scaled based on the requirements of the hour.
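Here is a hedged sketch using the Cosmos DB SQL API from Python; the endpoint, key, database, and container names are hypothetical.

    # Creating a container and upserting a JSON document in Cosmos DB.
    from azure.cosmos import CosmosClient, PartitionKey

    client = CosmosClient("https://example.documents.azure.com:443/",
                          credential="REPLACE_WITH_KEY")

    db = client.create_database_if_not_exists("appdata")
    container = db.create_container_if_not_exists(
        id="users", partition_key=PartitionKey(path="/id"))

    # Documents are schemaless JSON; throughput and storage scale
    # independently and elastically.
    container.upsert_item({"id": "1", "name": "Samuel", "likes": "preview"})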
All right, let's talk about Redis Cache. A discussion of Azure databases wouldn't be complete without talking about Redis Cache. Azure Redis Cache is a secure data cache, also sometimes called a messaging broker, that provides high-throughput, low-latency access to data for applications. It is based on the popular open-source caching product Redis. What's the use case? It's typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data stores. Performance is improved by temporarily copying frequently accessed data to fast storage located very close to the application; with Redis Cache, that fast storage is located in memory, instead of the data being loaded from the actual disk in the database itself. Redis Cache can also be used as an in-memory data structure store, as a distributed non-relational database, and as a message broker, so there is a variety of use cases. By using Redis Cache, application performance is improved by taking advantage of the low-latency, high-throughput performance the Redis engine provides. To summarize: with Redis Cache, data is stored in memory instead of on disk to ensure high throughput and low latency when the application needs to read the data, and it provides various levels of scaling without downtime or interference. Redis Cache is backed by the Redis server, and it supports strings, hashes, lists, and various other data structures.
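From an application's point of view, Azure Redis Cache speaks the standard Redis protocol; here is a sketch with the redis-py client, where the host name and key are hypothetical (Azure's Redis endpoints accept TLS connections on port 6380).

    # Caching a hot value in Azure Cache for Redis.
    import redis

    r = redis.StrictRedis(host="example.redis.cache.windows.net",
                          port=6380, password="REPLACE_WITH_KEY", ssl=True)

    # Serve frequent reads from memory instead of the backend database.
    r.set("product:42:name", "Widget", ex=300)   # expires in 5 minutes
    print(r.get("product:42:name"))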
Now let's talk about security and identity services. Identity management, specifically, is the process of authenticating first and then authorizing security principals. Not only that, identity management involves controlling information about those principals. You might ask: what's a principal? Principals are services, applications, users, groups, and a lot more. The special thing about this identity management is that it not only helps authenticate and authorize principals in the cloud; it also helps authenticate and authorize principals and resources on premises, especially when you run a hybrid cloud environment. These identity management services and features give us additional levels of validation: identity management can provide multi-factor authentication, and it can provide conditional access policies that permit or deny based on conditions. It can also monitor suspicious activity, report on it, and help generate alerts for potential security issues, and as a mitigation it can send us an alert so we can get involved and prevent a security incident from happening. So let's talk more about identity management. One of the services under security and identity management is Azure Security Center.
Azure Security Center provides security management and threat protection across workloads in both cloud and hybrid environments. It helps control user access and application behavior to stop any malicious activity that may be present. It helps us find and fix vulnerabilities before they can even be exploited. It integrates well with analytics, giving us the intelligence to identify and detect attacks and prevent them before they can actually happen. And it works seamlessly with hybrid environments, so you don't have to keep one policy for on premises and another for the cloud; it is a unified service for both on premises and the cloud.
The next service in security and identity is Key Vault. Key Vault is a service that helps safeguard cryptographic keys and any other secrets used by cloud applications and services. In other words, Azure Key Vault is a tool for securely storing and accessing the secrets of the environment. A secret is anything you want very tightly controlled access to, like certificates or passwords. If I tell you what Key Vault actually solves, that will explain what Key Vault is. Key Vault is used in secrets management: it helps in securely storing tokens, passwords, and certificates. It helps in key management: it helps in creating and controlling the encryption keys we use to encrypt data. And it helps in certificate management: it helps us easily provision, manage, and deploy public and private SSL/TLS certificates in Azure, and a lot more. In a nutshell, Key Vault gives users the ability to provision new vaults and keys in a matter of minutes, all in a single command or a couple of button clicks, and it helps users centrally manage their keys, secrets, and policies.
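Here is a sketch of reading a secret with the Key Vault SDK for Python; the vault URL and secret name are hypothetical, and DefaultAzureCredential picks up whatever identity is available (a CLI login, a managed identity, and so on).

    # Fetching a secret from Azure Key Vault.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    client = SecretClient(
        vault_url="https://example-vault.vault.azure.net/",  # hypothetical
        credential=DefaultAzureCredential(),
    )

    # The password lives in the vault, not in the application's code.
    secret = client.get_secret("db-password")
    print(secret.name)   # avoid logging secret.value in real code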
Next on the list, let's talk about Azure Active Directory. Azure Active Directory helps us create intelligence-driven access policies to limit resource usage and manage user identities. What does that mean? Azure Active Directory is a cloud-based directory and identity management service. It is actually a combination of core directory services, application access management, and identity protection. And one good thing about it, in fact there are a lot of good things, but especially when you're running hybrid environments you might wonder how Azure Active Directory is going to behave: it is built to work across on-premises and cloud environments, and it also works seamlessly with mobile applications. In a nutshell, Azure Active Directory acts as a central point of identity and access management for our cloud environment, and it provides good security solutions that protect our apps and data against unauthorized access.
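As one concrete touchpoint, a daemon application can authenticate against Azure Active Directory with the MSAL library; in this sketch the tenant ID, client ID, and secret are hypothetical placeholders.

    # Acquiring an Azure AD token for an app with MSAL for Python.
    import msal

    app = msal.ConfidentialClientApplication(
        client_id="00000000-0000-0000-0000-000000000000",  # hypothetical
        authority="https://login.microsoftonline.com/<tenant-id>",
        client_credential="REPLACE_WITH_SECRET",
    )

    result = app.acquire_token_for_client(
        scopes=["https://graph.microsoft.com/.default"])
    print("token acquired" if "access_token" in result
          else result.get("error"))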
Now that we've discussed security and identity, let's talk about the management tools Azure has to offer. Azure provides built-in management and account governance tools that help administrators and developers keep their resources secure and compliant, and again, this applies both on premises and in the cloud. These management tools help us monitor the infrastructure and the applications; they also help in provisioning and configuring resources, updating apps, analyzing threats, backing up resources, and building disaster recovery. They help in applying policies and conditions to automate our environment, and they are also used for cost-control methods. So Azure management plays a wide role across the Azure services. Among the management tools, first comes Azure Advisor.
educate us about Azure best practices. It throws recommendations that we can select on the basis of the category of
service and it also provides the impact it can have or the impact that would happen in our environment if we follow
the recommendations given and recommendations are uh first one is the recommendations are kind of templatized
and it throws the templatized recommendations. Not only that, it also provides customized uh recommendations
on the basis of the configuration, on the basis of our usage patterns. And these recommendations are not hard. It's
not like something that it recommends and then just leaves us hanging there. These recommendations provided are very
easy to follow, very easy to implement and see results. You can think of Azure advisor as an a very personalized cloud
consultant that helps you to follow best practices to optimize our deployments. It kind of analyzes our resources, our
configurations, our usage and then it recommends a solution for us that really helps in improving the cost
effectiveness, improving the performance, improving high availability and improving security in our Azure
environment. So with this Azure advisor, we can get a proactive, actionable and personalized best practice
recommendations. Now you don't have to be an expert. Just follow the Azure advisor and your environment is going to
be good. It also helps in improve the performance, security, high availability of our environment. And also it helps in
bringing down the overall Azure spend. And the best part is it's a free service that analyzes our Azure usage and
provides recommendations how we can optimize our Azure resource to reduce cost and reduce cost at the same time
boost the performance helps in strengthening the security and improve the overall reliability of our
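As a sketch of how you might pull those recommendations programmatically, here is a short example using the azure-mgmt-advisor package; the subscription ID is a placeholder, and the field names reflect my understanding of the SDK rather than anything shown in this course.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.advisor import AdvisorManagementClient

# Hypothetical subscription ID; requires Reader access on the subscription.
client = AdvisorManagementClient(DefaultAzureCredential(), "<subscription-id>")

# List every recommendation Advisor currently has for this subscription.
for rec in client.recommendations.list():
    print(rec.category, rec.impact, rec.short_description.problem)
```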
Next on the list is Network Watcher. Network Watcher helps users identify and gain insight into overall network performance and the health of the environment. It provides tools to monitor, diagnose, view metrics, and enable or disable logs (that is, generate and collect logs) for resources in an Azure virtual network. With Network Watcher we can monitor and diagnose networking issues without even logging into the virtual machines: just from the real-time logs, we can work out what could be wrong in a certain resource, a VM, or a database. It's also used for analytics, to gain intelligence about what's happening in our network; we can learn a lot about current traffic patterns using the network security group flow logs that Network Watcher offers. It also helps investigate VPN connectivity issues using detailed logs. You may or may not know that VPN troubleshooting normally involves two parties, the network administrator on our side and the one on the other side, each checking logs on their own end. Network Watcher takes that to the next level: from the logs themselves we can identify which side has the issue and suggest an appropriate fix.

Next on the list is the Microsoft Azure portal. The Azure portal provides a single, unified console for any number of activities: not only building, but also managing and monitoring the web applications we build. The portal's appearance and layout can be organized to match our work style, and from the portal users can control who gets to manage or access resources. The portal also gives very good visibility into the spend on each resource, and if we customize it we can break spend down by team, by day, by department, and so on. It gives us a good visual of where the money is going and where the bill is consumed within the Azure environment.
Next on the list is Azure Resource Manager. Azure Resource Manager enables us to manage the usage of our application resources: we use it to deploy, monitor, and manage solution resources as a group, as if they were one single entity. The infrastructure of an application is typically made of various components, including virtual machines, storage, virtual networks, web apps, database servers, and perhaps third-party services. By nature these are separate services, but with Azure Resource Manager we don't treat them as unrelated components; we see them as related services in a group that together support one application. Instead of letting them sprawl, Resource Manager captures the relationships between them and helps us see them all as a single entity. Not only that, Resource Manager ensures that the resources we provision are deployed in a consistent state along with the rest of the application. It also lets users visually see their resources and how they are connected, which makes managing them a lot easier. Resource groups are also used to control who can access the resources within the user's organization, giving you fine-grained control over who gets access and who does not.
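Here is a small sketch of that "group as one entity" idea using the Python management SDK: create a resource group, then enumerate everything deployed into it. The group name, location, and subscription ID are placeholders I've made up for illustration.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create (or update) the resource group that will hold the whole application.
client.resource_groups.create_or_update("my-app-rg", {"location": "centralus"})

# Everything deployed into the group can be listed and managed as one unit.
for res in client.resources.list_by_resource_group("my-app-rg"):
    print(res.name, res.type)
```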
The last one in the management tools is Azure Automation, which gives us the ability to automate configuration and roll out updates across hybrid environments. It provides a cloud-based automation and configuration service, and it can be applied to non-Azure environments as well, including on-premises. Some of the things we can automate are process automation, update management, and configuration management. Azure Automation gives us complete control during deployment, during operation, and during the decommissioning of workloads and resources. With Automation we can automate any task that is time-consuming, mundane, or error-prone due to human error. Irrespective of how many times you run an automated task, it runs the same way, which really reduces overall time and overhead cost: because so much is automated, it's free of human error, so the application is less likely to break and keeps running longer. With Automation we can also build a good inventory of operating system resources and configuration items, all in one place, with ease. That really helps in tracking changes and investigating issues. Say something breaks: because Automation logs configuration changes, it's easy to track down what changed lately and broke the environment, then go back and fix it or roll it back. That solves the problem, and that summarizes the Azure management tools and services. Now let's talk about the networking services available in Azure.
There is a variety of networking services that Azure offers, and I'm sure this is going to be an interesting discussion. Let's begin with the content delivery network. The content delivery network, CDN for short, lets us perform secure and very reliable content delivery. It also accelerates delivery time, in other words reducing load times, saves bandwidth, and increases application responsiveness. Let's expand on this. A content delivery network is a distributed network of servers that can efficiently deliver web content to users. CDNs store cached content on global edge servers, also called POPs (point-of-presence locations), placed very close to end users, so latency is minimized. It's like taking multiple copies of the data, storing them in different parts of the world, and serving each request from the server nearest to the requester. So the CDN offers developers a global solution for rapidly delivering high-bandwidth content to users by caching content in strategically placed locations close to them. One big advantage of a content delivery network is that it handles spikes and heavy loads very efficiently. We can also run analytics against the logs the CDN generates, which gives good insight into usage and into the future business needs of the application. And like a lot of other services, this is pay-as-you-go: you use the resource first and then pay only for what you have used. The next one in networking is ExpressRoute.
ExpressRoute is a circuit, a dedicated link, that provides a direct private connection to Azure, and because it's direct it gives a low-latency link with good speed and reliability for Azure data transfer, for example from on-premises to Azure. Let's expand on this a bit. ExpressRoute is a service that provides a private connection between a Microsoft data center and infrastructure on our premises or in a colocation facility we might use. ExpressRoute connections do not go over the public internet, and because of that they offer higher security, reliability, and speed, and lower latency, than internet connections. Because it's fast, reliable, and low-latency, it can serve as an extension of our existing data center: users won't feel a difference between accessing services on-premises and in the cloud, because latency is minimized as much as possible. And because it's a private line rather than a public internet line, it can be used to build hybrid applications without compromising privacy or performance. These ExpressRoute circuits can also be used for backups. Imagine a backup going through the internet; that would be a nightmare. Over ExpressRoute, backups are fast. And imagine recovering data from the cloud to on-premises over the internet in a time of disaster; that would be the worst nightmare. So ExpressRoute can be used not only to back up data but also to recover it: with its speed and low latency, recovery completes a lot sooner.
The next service we're going to discuss in networking is Azure DNS. Azure DNS allows us to host domain names in Azure, with exceptional performance and availability. Azure DNS is used to set up and manage DNS zones and records for our domain names in the cloud. Just as the name says, it's a DNS service: it provides name resolution using Azure's own infrastructure, and we can manage our DNS through the Azure portal with the same credentials we use for everything else. Imagine having a DNS provider that doesn't even belong to our IT estate, with a separate portal just to manage DNS. Those days are gone; now we manage DNS in the very same Azure portal where we use the rest of our services. Azure DNS also integrates well with other DNS service providers. It uses a global network of name servers to provide fast responses to DNS queries, and these domains get better availability than many other domain service providers promise, because the servers are maintained by Microsoft: queries resolve quickly, and if one server fails, it resyncs with the rest. Microsoft's global network of name servers ensures that our domain names are resolved properly and are available nearly all of the time.
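As a sketch, here is how creating a zone and an A record might look with the Python management SDK; the resource group, zone name, and IP address are placeholders for illustration.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

client = DnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create the zone (the domain itself), then a www A record inside it.
client.zones.create_or_update("my-rg", "example.com", {"location": "global"})
client.record_sets.create_or_update(
    "my-rg", "example.com", "www", "A",
    {"ttl": 3600, "a_records": [{"ipv4_address": "203.0.113.10"}]},
)
```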
Next on the networking list is Virtual Network, and I'm sure this is going to be an interesting one. Virtual networking in Azure allows us to set up our own private cloud inside the public cloud: an isolated, highly secure environment for our applications. Let's expand on this. An Azure virtual network lets us provision Azure virtual machines and have them communicate securely with on-premises networks and the internet. It also lets us control the traffic that flows in and out of the virtual network, to other virtual networks and to the internet. An Azure virtual network, often called a VNet, is a representation of our own network in the cloud: a logical isolation of the Azure cloud dedicated to our subscription. All our resources are provisioned in a VNet that is separate from other customers' VNets, which gives us that logical separation. A virtual network can also be used to provision VPNs in the cloud, connecting the cloud and on-premises infrastructure, and more. Especially in a hybrid environment we will surely use a virtual network, because a hybrid setup requires a VPN for secure data transfer in and out of the cloud and in and out of the on-premises environment. So the VNet gives us a boundary for all our resources: traffic between Azure resources stays logically within the virtual network. And here we get to design the network ourselves: we pick the IP ranges, the routing, the subnets. A lot of freedom, or I would say a lot of control, is given over how the network is designed. It's not something pre-cooked that we only get to use; we can build the network from scratch, pick the IP addresses we like, decide which subnet communicates with which, and so on. And like I said, if you run a hybrid environment, you will definitely need a virtual network, because it connects on-premises and the cloud securely using a VPN.
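To illustrate that "design it yourself" point, here is a minimal sketch of creating a VNet with one subnet via the Python SDK; the names, region, and address ranges are arbitrary placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# We choose the address space and the subnets; nothing is pre-cooked.
client.virtual_networks.begin_create_or_update(
    "my-rg", "my-vnet",
    {
        "location": "centralus",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [{"name": "web", "address_prefix": "10.0.1.0/24"}],
    },
).result()
```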
The last product we're going to discuss in networking is the Load Balancer. The load balancer gives applications high availability and good network performance. How does it work? It load-balances traffic to and from virtual machines and cloud resources, and also between cloud and cross-premises virtual networks. With Azure Load Balancer we can scale our applications and create high availability for our services, which means the application will be available nearly all the time. If a server goes dead, that server simply stops receiving traffic. What happens if a dead server keeps getting traffic? Users experience downtime. What happens if it doesn't? Users experience no downtime, because each connection is shifted to a healthy server, so users see uptime all the time. The load balancer supports inbound and outbound scenarios, provides low latency and high throughput, and can scale from hundreds to thousands to even millions of TCP and UDP flows, because the load balancer now sits between the users and the application. How does it operate? The load balancer receives traffic and distributes it across the backend pool of instances connected to it, according to the rules and health probes we set; that's how it maintains high availability. So what does the load balancer help with? It helps create highly available, scalable applications in the cloud in minutes, and it can automatically scale the environment as application traffic grows. One key feature is health checking: it checks the health of the application instances, stops sending requests to unhealthy instances, and shifts those connections to healthy ones, so a user's connection never gets stuck with an unhealthy instance. That's all you need to know about the networking services.
Now let's talk about the storage services, the storage domain in Azure. Azure Storage in general is a Microsoft-managed service providing cloud storage that is highly available, secure, durable, scalable, and redundant, because it's all managed by Azure; we don't have to manage much of it ourselves. Azure Storage is really a group of storage services catering to different needs. The storage products include Azure Blobs (object storage), Azure Data Lake, Azure Files, Azure Queues, Azure Tables, and more.

Let's start our discussion with Azure StorSimple. StorSimple is a hybrid cloud storage solution that can lower your storage costs to nearly 60% of what you would otherwise spend. It is an integrated storage solution that manages storage tasks between on-premises devices and cloud storage. What I really like about Azure is that it's built with hybrid environments in mind. With plenty of other cloud providers, running a hybrid environment is a big challenge: there are compatibility issues, and you may not find an on-premises-plus-cloud solution for your need. But with Azure, especially in storage, a lot of what we're going to see is clearly designed with hybrid environments in mind. So, coming back to StorSimple: it is a very efficient, cost-effective, easily manageable SAN (storage area network) solution in the cloud. I'll throw in one piece of background: the name comes from the StorSimple 8000 series devices used in Azure data centers. StorSimple comes with storage tiering to manage stored data across different storage media: the most current data is stored on-premises on solid-state drives, data used less frequently is stored on hard disk drives, and very old, rarely used data that is a candidate for archiving is pushed to the cloud. So you can see how tiering happens automatically in StorSimple. Another cool feature is that it lets us create on-demand and scheduled backups of our data and store them locally or in the cloud. These backups are taken as incremental snapshots, which means they can be created and restored quickly; it's an incremental backup, not a complete one. These cloud snapshots can be critically important in a disaster recovery scenario, because the snapshots can be pulled in, placed on storage systems, and become the live data. Recovery is faster if you have properly scheduled, frequent backups, so StorSimple really eases our backup mechanism, which in turn eases our disaster recovery procedures. StorSimple can be used to automate data management, data migration, data movement, and data tiering across the enterprise, both in the cloud and on-premises. It improves compliance and accelerates disaster recovery for our environment. And if there's one thing that grows every single day in our environments, it's storage; StorSimple addresses that need. We no longer have to pre-plan heavily for storage, because it's available in the cloud on a pay-as-you-go basis. Yes, some planning is still needed, but not nearly as much as without the cloud or without StorSimple.
The next service under storage we'd like to discuss is the Data Lake Store. Data Lake Store is a cost-effective solution for big data analytics specifically. Let's expand on this. Data Lake Storage is an enterprise-wide repository for big data analytics workloads; that's the main kind of workload that depends on it. Data Lake enables us to capture data of any size, any type, and any ingestion speed, collecting it all in one place for operational efficiency and for analytics. Hadoop in Azure depends heavily on Data Lake Storage, and the service is designed with analytics performance in mind. So any time you're doing analytics in the cloud, or running Hadoop in Azure, the normal, right storage to pick is the Data Lake Store. It's also designed with security in mind, so any time we use it we can rest assured we're using storage built in a data center designed around security. Behind the scenes, the Data Lake Store uses Azure Blob Storage for global scale, durability, and performance. Now let's talk about Blob Storage.
Blob Storage provides vast amounts of storage with scalability; it is the object storage solution for the Azure cloud. Let's expand on it a bit. Azure Blob Storage is Microsoft's offering for object storage, optimized for storing massive amounts of unstructured data, which could be text or binary, and designed for rapid reads. Explaining the scenarios where we would use Blob Storage might give you a good feel for what it is. Today it's used in many IT environments to serve images or documents directly to the browser; to store files for distributed access, where many clients fetch data from the same store; to stream video and audio; to write log files; to hold backup data for restore at a later point in a disaster recovery scenario; as archive storage in lots of cloud IT environments; and very widely for analytics data, not only storing it but also running analytics queries against the data stored in it. That's a wide range of use cases for Blob Storage. In addition to all that, it supports versioning: any time somebody updates an object, a new version is created, which means I can roll back at any point as needed. It provides a lot of flexibility for optimizing the user's storage needs, and it supports tiering of data, so when you explore you'll find plenty of options to suit your particular storage needs. And like I said, it stores unstructured data, and that data is available to customers through a REST-based object storage interface.
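Here is a minimal sketch of that interface via the Python SDK, assuming a storage account connection string and a container name of my own choosing:

```python
from azure.storage.blob import BlobServiceClient

# The connection string comes from the storage account's access keys.
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("images")  # hypothetical container

# Upload an object; it can then be served over HTTPS like any other blob.
with open("photo.jpg", "rb") as data:
    container.upload_blob(name="photo.jpg", data=data)
```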
The next product in the storage services is Queue Storage. Queue Storage provides durable queues for large-volume cloud services: a simple, cost-effective, durable messaging queue for large workloads. Let's expand on Queue Storage for a moment. Queue Storage is a service for storing large numbers of messages that can be accessed from anywhere in the world through HTTP or HTTPS calls. A single queue message can be up to 64 KB in size, and a single queue can contain millions of such messages. How much can it hold in total? Up to the total capacity of the storage account itself, so it's easy to work out how much a queue can hold. Azure Queue Storage provides a messaging solution between applications and components in the cloud. What does it help with? It helps design applications for scale, and it helps decouple applications: one component is no longer tightly dependent (sometimes not dependent at all) on another, because a queue now sits in between, connecting the two environments while decoupling them. With a queue in between, both sides can scale up or down independently.
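Here is a small sketch of that send/receive cycle with the Python SDK; the queue name and message content are placeholders.

```python
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<connection-string>", "orders")
queue.create_queue()

# The producer enqueues work; a decoupled consumer dequeues it later.
queue.send_message("process-order-42")
msg = next(queue.receive_messages())
print(msg.content)
queue.delete_message(msg)  # acknowledge so the message is not redelivered
```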
Next in the storage services is File Storage. Azure Files provides secure, simple, managed cloud file shares. A file share in the cloud extends the performance and capacity of on-premises file servers, and lots of familiar file-share management tools work with it. Let's expand a bit on File Storage. Azure Files offers fully managed file shares in the cloud that can be accessed via the SMB (Server Message Block) protocol. Azure file shares can be mounted concurrently by cloud and on-premises deployments, and many operating systems are compatible: Windows, Linux, and macOS. In addition to being accessible both from on-premises and from the cloud, it can also cache data locally so it's immediately available when needed. That's an additional, fairly advanced feature compared to other file shares available in the market. Now let's talk about Table Storage.
Table Storage is a NoSQL key-value store for quick deployments with large semi-structured datasets. One important thing to note about Table Storage is that it has a flexible data schema, and it's highly available. Let's expand a bit. Any time you want a schemaless, NoSQL-style store, Table Storage is the one we'll end up picking: it provides key-attribute storage with a schemaless design. Table Storage is very fast and very cost-effective for many applications, and for the same amount of data it's a lot cheaper than traditional SQL storage. Some of the things we can store in Table Storage are flexible datasets such as user data for web applications, address books, device information, and other types of metadata for our services. A storage account can hold any number of tables, up to the capacity limit of the account. That isn't practical with SQL; it's what NoSQL, and especially Table Storage in Azure, is built for.
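A quick sketch of that schemaless key-value model with the azure-data-tables package; the table name, keys, and properties are made up for illustration.

```python
from azure.data.tables import TableClient

table = TableClient.from_connection_string("<connection-string>", table_name="devices")
table.create_table()

# No fixed schema: each entity only needs a PartitionKey and a RowKey.
table.create_entity({"PartitionKey": "sensors", "RowKey": "device-001", "firmware": "1.2.0"})
entity = table.get_entity(partition_key="sensors", row_key="device-001")
print(entity["firmware"])
```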
That explanation of storage concluded the length and breadth of what the CEO was explaining to his IT person. But the IT person is not done yet; even after this lengthy discussion, he still has a question: there are a lot of other cloud providers available, so what made you specifically choose Azure? From the kind of question he asked, we can tell he is very curious, and it's definitely a thoughtful question. So the CEO went on to explain the other capabilities of Azure and how it outruns the rest of the cloud providers, picking up the discussion again from a different angle: what makes Azure better than its competitors.

He started with the platform-as-a-service capabilities, and I'll tell you what the CEO told his IT person. With platform as a service, infrastructure management is taken care of entirely by Microsoft, allowing users to focus completely on innovation. No more infrastructure management responsibilities; go focus on innovation. That's a fancy way of saying it, but that's what we get when we buy platform as a service: we spend our time on innovation rather than just maintaining infrastructure. Azure is also especially .NET-friendly: it supports the .NET framework and is optimized to run both old and new applications built on it. So if your application is .NET, most of the time, when you compare options, you'll end up picking Azure as your cloud service provider. As for security, Azure's offerings are designed around the Security Development Lifecycle, an industry-leading assurance process; when we buy services from Azure, we're assured the environment was designed according to it. And as I've mentioned many times and will mention again, Azure has thought hard about hybrid environments, where a lot of other cloud providers have fallen short. It's very easy to set up a hybrid environment, whether you migrate the data or keep running hybrid, because Azure provides seamless connections between on-premises data centers and the public cloud. Azure also has a very gentle learning curve: the documentation is neat, clear, and well illustrated, and it encourages you to learn more, to think and experiment, and to easily grasp how the services work. Azure also lets you keep using technologies that many businesses have relied on for years, so there's a long history behind it. The certifications, the documentation, the stage-by-stage certification levels: it all adds up to a gentle learning curve that is often missing with other cloud providers. And here's something that will really impress CTOs and the people in finance and budgeting: if an organization already uses Microsoft software, it can boldly ask for a discount that reduces the overall Azure spend, in other words the overall price of Azure. So that's the information that helped the CEO pick Azure as his cloud service provider.

The CEO then talks about the companies currently using Azure, and they use it for good reasons: Pixar, Boeing, Samsung, EasyJet, Xerox, BMW, 3M. These are major multinational, multi-billion-dollar companies, and they rely on Azure to run and operate their IT. Still, the CEO suspects his IT person won't be fully convinced until he shows him visually how easy things are in Azure. So he goes on to walk through a practical application of Azure, which is exactly what I'm going to show you as well.
All right: a quick project, building a .NET application in an Azure Web App and connecting it to a SQL database, will solidify all the knowledge we've gained so far. Here's what we're going to do. I have an Azure account open, as you can see, logged in, and everything is fresh here. Let me go to resource groups: nothing in there; it's completely fresh. We're going to create an application like this one, a to-do list application, which runs from the web app, takes input from us, and saves it in the database connected to it. You can already see it's a two-tier application: web and DB.

Let me go back to my Azure account. The first thing is to create a resource group. Let's give it a meaningful name: Azure Simply Learn. The subscription is a free trial, and for the location, pick whichever is nearest to you, or wherever you want to launch your application. For this example I'm going to pick Central US and click Create. It takes a little while to get created. There you go, it's created: Azure Simply Learn. Now, what do we need? A web app and a separate SQL database. Let's first get our web app running. Go to App Services and click Add. It's not the Web App + SQL option we want; we want the web app alone for this example. So let's create a web app and give it a quick name: Azure Simply Learn. The subscription is the free trial, and I'm going to use the existing resource group we created a moment ago. It's going to run on Windows, and we're going to publish code. All set; we can create it.

While that's running, let me create my database. Go to SQL databases and create a database. Give it a name: Azure SimplyLearn DB. Put it in the existing resource group we created. It's going to be a blank database. It also needs some settings: the name of the server, the admin login, the password that goes with it, and the location where it will be created. The server name is going to be Azure SimplyLearn DB. For the admin login, let's call it simplylearn, and let me pick a password. Click Create. So what have we done so far? We've created a web app and a database in the resource group we created. If I go to resource groups, it takes some time before things show up; I have just the one resource group for now, Azure Simply Learn, and inside it a bunch of resources still being created. In the meantime, I have my application right here in Visual Studio. Once the infrastructure is set and ready in the Azure console, we'll go back to Visual Studio and feed these inputs in, so the code knows what the database is and has the credentials to log in to it. By feeding that information into Visual Studio, we're feeding it into the application, and then we'll run it from there. Deploying this application takes quite a while, so we really have to be patient.
All right, now we have all the resources the application needs. Here is my database and here is my App Service. There's one more thing to do: create a firewall exception rule. The application is going to run from my local desktop and connect to the database, so let's add an exception rule by simply adding the client IP. It picks up my IP, the IP of the laptop I'm using right now, and creates an exception so it can access the database. That's done.

Now we can go back to Visual Studio. I already have a couple of configurations pushed from Visual Studio earlier, so I'm going to clean those up; if you're doing this for the first time, you may not need to. All right, let's start from scratch; this is very similar to how you would do it in your own environment. We're going to select an existing Azure App Service. Before that, note that I have logged in with my credentials, so Visual Studio pulls a few things automatically from my Azure account. In this case I'm going to use an existing Azure app, so select existing and click Publish. If you recall, these are the very same resources we created a while back. We've clicked Save and it's running, validating the code, and it will come up with a URL. Initially the URL isn't going to work, because we haven't mapped the application to the database yet; that's the next step.

All right, the app has been published and is running from the web app. For now, as you can see, it throws an error: that's because we haven't mapped the app and the DB together. So let's do that. Go to Server Explorer; this is where we see the databases we've created. Let's quickly verify that: go back to the appropriate resource group, which is right here, and there is my Azure SimplyLearn database. It's having some issues connecting to my database; give me a quick moment to fix it. All right, so we'll have to map the database into this application. Go to Solution Explorer, click Publish, and a page like this is shown. From here we can go to Configure. Here is our web app with all its credentials. Let's validate the connection first, then click Next. This is my DB connection string, which the app will use to connect to my DB. If you recall, our DB was Azure SimplyLearn DB, and that's not what's shown here, so let's fix that. Click Configure, and here let's put in our DB server's URL. Before that, change the type to SQL Server. Then put in the DB server's URL: go back to Azure, here is my server's name, and paste it in. The username to connect to the server is right here; put that in, along with the password. It connects to the Azure infrastructure, and here is my database; if you recall, it's Azure SLDB, the name of the database. Let's test the connection: the connection is good, click OK. Now it shows up correctly, Azure SimplyLearn DB, the database we created, and it's configured. Let's also modify the data connections and map them to the appropriate database: the database name is Azure SimplyLearn DB, the data source is SQL Server, the username is simplylearn, and the password is what we set at the beginning. Validate the connection: it's good, click OK.

Now we're all set to publish the application again. The application now knows how to connect to the database: we've educated it with the correct connection string, the DNS name, the username, and the password. Visual Studio is building the project, and once it's up and running we'll be given a URL; anything we enter at that URL will be received and saved in the database. So here is my to-do list app, and I can start creating to-do items. I have items listed already; I can create an entry, and it gets stored in the database. I can create another entry, "take the dog for a walk," and that gets stored. I can create another, "book tickets for the science exhibition," and that is received and written to the database too. And that concludes our session. Through this session we saw how to use Azure services to create a web app and connect it to a DB instance, and how, for two services that are decoupled and separate by default, we can use connection strings to wire the app server and the database together into a working app.
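If you'd rather see that connection-string wiring as code, here is a minimal sketch of connecting to an Azure SQL database from Python with pyodbc; the server, database, and credentials are placeholders standing in for the values from the demo.

```python
import pyodbc

# Placeholders: substitute the server/database/login from your own deployment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=tcp:<server-name>.database.windows.net,1433;"
    "DATABASE=<database-name>;UID=<admin-login>;PWD=<password>;"
    "Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT GETDATE()")  # quick connectivity check
print(cursor.fetchone()[0])
```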
Now, let's first understand what exactly cloud security is. Cloud security is a set of tools and practices designed to protect businesses from threats both inside and outside the organization. As companies move to more online tools and services, ensuring cloud security is essential. Let's start with a simple case study. Imagine an IT consulting firm. The owner, Alex, decides to move the company's operations to the cloud to streamline project management, client interactions, and data storage. Alex chooses a cloud service provider to host all the data and applications. But here's the thing: the internet can be a dangerous place. Just as you would protect your home from burglars, Alex needs to protect his business from cybercriminals who might try to steal sensitive client data or disrupt services. This is where cloud security comes in. Cloud security involves using various tools and practices to keep your data safe, ensure only authorized people can access it, and protect it from potential threats. For Alex, this means making sure client information is encrypted, so that even if someone intercepts it they can't read it; setting up strong passwords and multi-factor authentication so only the right people have access; and regularly updating software to fix security weaknesses. In simple terms, cloud security is like a high-tech security system for your online applications and data: it's about keeping your valuable information safe and sound, just as you would protect your most prized possessions.

But why is cloud security so important? Lately, terms like digital transformation and cloud migration come up constantly in business. They mean different things to different companies, but both highlight the need to change. As businesses adopt these new technologies to improve their operations, they face new challenges in keeping everything secure while maintaining productivity. Modern technology lets businesses work beyond traditional office setups, but switching to cloud-based systems can be risky if not done correctly. To get the best results, businesses need to understand how to use these technologies safely and effectively, which means finding the right balance between using advanced cloud tools and keeping strong security practices in place.

Before we jump in, a brief word on why the cloud is such a game-changer. Cloud computing offers on-demand self-service, broad network access, rapid elasticity, and scalable resources, making it incredibly convenient for businesses: you can access resources from anywhere, scale up or down based on need, and enjoy a flexible, efficient computing environment. However, these benefits introduce unique security challenges that need to be addressed. First, data breaches. Encryption is your best friend: make sure your data is encrypted both at rest and in transit. Think of it like this: if someone steals a lockbox but doesn't have the key, they can't access what's inside.
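As a tiny illustration of that lockbox idea (not any particular cloud provider's mechanism, just symmetric encryption with the Python cryptography library):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the key to the lockbox; store it separately
box = Fernet(key)

token = box.encrypt(b"sensitive client record")
print(token)                 # useless to anyone who steals it without the key
print(box.decrypt(token))    # only the key holder can read the original
```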
Next, access controls: make sure only authorized users can access sensitive data. Use role-based access control (RBAC) and multi-factor authentication (MFA) to make that happen. It's like having a bouncer at a VIP party: only the right people get in. Then we have insecure APIs. Always use secure coding practices to protect your APIs: validate inputs and use proper authentication. It's like making sure your doors and windows are locked tight. And run regular audits: regularly audit your APIs for vulnerabilities, like a routine checkup that keeps everything in top shape. Then we have insider threats. To counter these, monitor activity: keep an eye on what's happening within your organization and use monitoring tools to detect unusual behavior, like having security cameras inside your premises. And invest in employee training: educate your employees about security best practices, because sometimes a simple mistake leads to big problems, and training is the first line of defense. The next threat is denial-of-service attacks. Use traffic management tools to filter out malicious traffic, like a floodgate that controls the flow of water and keeps out the bad stuff. And implement redundancy and load balancing so your system can handle sudden spikes in traffic, like having multiple lanes on a highway to prevent traffic jams. Then we have advanced persistent threats (APTs). To counter these, ensure continuous monitoring: always watch your systems for signs of intrusion and use advanced security measures to detect and respond to threats quickly; think of it as a night watchman on duty 24/7. And have an incident response plan: a plan for responding to security incidents is like an emergency drill, so you know exactly what to do if something goes wrong.

Finally, here are some best practices to enhance cloud security. First, implement strong access controls: use multi-factor authentication and limit access based on roles, so users have only the permissions they need. Regularly update and patch systems: keep them current with the latest patches to close vulnerabilities attackers could exploit. Invest in employee training: teach staff security best practices such as recognizing phishing attempts and creating strong passwords. Use network segmentation: divide your network into smaller, more secure zones to limit the impact of potential breaches. And keep monitoring and logging: continuously monitor your environment and maintain detailed logs so you can detect and respond to threats promptly. Cloud security is an ongoing effort that requires vigilance, the right tools, and a commitment to best practices. By staying informed, using the right security measures, and following best practices, you can protect your cloud environment from threats.
Let's now take a look at some individual features of S3, starting with lifecycle management. Lifecycle management is very interesting because it lets us define rules that automate the transitioning of objects from one storage class to another, without us having to copy things over manually. You can imagine how time-consuming that would be by hand. We're going to see this very soon in a lab, but first let me describe how it works. It's basically a graphical user interface, very simple to use, in which you define lifecycle rules with two kinds of actions: transition actions and expiration actions. A transition action says something like: I want to transition objects, maybe all objects, or maybe just objects with a specific prefix (for example, objects in a particular folder), from one storage class to another. Say, from Standard to Standard-Infrequent Access after 45 days (respecting the 30-day minimum we spoke of before), and then maybe after 90 days transition the objects in Standard-IA to Glacier, or after 180 days straight to Glacier Deep Archive. You come up with whatever combination you see fit; it doesn't have to be sequential from Standard to IA to One Zone-IA and so on, because, as we discussed before, it depends what kind of objects you're willing to put in One Zone-IA: objects you don't really mind losing if that one Availability Zone goes down. So you'll be deciding those rules yourself. It turns out even this isn't a simple task, because you have to monitor your usage patterns to see which data is hot and which is cold, and work out the lifecycle rules that reap the lowest cost. You'd have to put somebody on that job, making the best-informed decisions based on your access patterns, and monitoring them continuously.

Alternatively, we can opt for something called S3 Intelligent-Tiering, which analyzes your workload with machine-learning-style analysis and, after about a good 30 days of observing your access patterns, automatically transitions objects between S3 Standard and S3 Standard-Infrequent Access. It doesn't go past the infrequent-access tier; it doesn't move things on to Glacier and so on. It offers this at a reduced price overhead: there is a monitoring fee for the feature, but it's a very nominal, very low fee. And there's a nice property: if you take an object out of the infrequent-access tier before the 30-day minimum we spoke of, you will not be charged the early-retrieval overhead. Why? Because you're using Intelligent-Tiering and already paying the monitoring fee, Intelligent-Tiering simply moves the object out of IA and back into the S3 Standard class when you need it, and in that case you don't pay that extra charge. So it's a very good option if you don't want to dedicate a person to this job: yes, you pay a little overhead in monitoring fees, but on the other side of the spectrum you're not paying somebody many hours to build and run a system that monitors your data access patterns.
So let's take a look at how to do this right now and implement our own lifecycle management rules. Let's create a lifecycle rule inside our bucket. First, go to the Management tab in the bucket we just created. Right at the top you see Lifecycle rules; click Create lifecycle rule and give it a name. I'm going to use something simple like simplylearn-lifecycle-rule. We have the option of applying this rule to every single object in the bucket, or limiting the scope to a certain type of file, perhaps with a prefix; I'll use something like logs. So anything we categorize as a log file will transition from one storage tier to the next, per our instructions. We're doing this because we really want to save on costs. It's not so much about organizing older data versus newer data; it's about reducing storage cost as objects get less and less use. Logs are a good fit here, because perhaps you use your logs for the first 30 days, sifting through them and pulling insights, but then they become old data you don't need anymore and you move them out of the way. So we're going to transition them to another pricing tier, another storage tier. We could also scope rules by object tags, which is a very powerful feature.

In the lifecycle rule actions, you have to pick at least one option. Since we haven't enabled versioning yet, I'm just going to select "transition current versions of objects between storage classes". As a reminder of what we already covered in the slides, our storage classes are listed right here; the one missing is obviously the default Standard storage class, where all objects land by default. So here's what we'll say: we want objects in the default Standard storage class to move to the Standard-Infrequent Access storage class after 30 days, which gives us a nice discount on storing those objects. Then we add another transition: to Glacier after 90 days. And as the big finale, Glacier Deep Archive after 180 days; you can see the remaining options are grayed out, since it wouldn't make sense to go backwards. Now, there's a little warning here calling for attention: storing very small files in Glacier is not a great idea. There's per-object metadata overhead, plus an additional cost associated with storing small files in Glacier. We'll just acknowledge that; for the demonstration it's fine, but in real life you'd want to store large tar or zip files containing one or more log files, which bypasses that surcharge. Down here you have the timeline summary of everything we selected above: after 30 days, Standard-Infrequent Access; after 90 days, Glacier; and after 180 days, Glacier Deep Archive. Let's go and create that rule.

We can see the rule is enabled, and at any time you can come back and disable it if you have a reason to, delete it, or view the details and edit it. Now, back in our bucket: since we scoped the rule with the logs prefix and we're not doing this from the command line, we'll create a logs folder that matches the prefix. Create folder, logs, and now we'll upload our Apache log file into it: one demonstration Apache log file I've created with just one line in it, purely for demonstration purposes. Upload it, close the dialog, and there's our Apache log file. So here's what happens because of that lifecycle rule: after 30 days, any file with the logs prefix, basically anything placed inside this folder, will be transitioned per the lifecycle rule policy we just created. Congratulations: you just created your first S3 lifecycle rule policy.
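The same rule can also be expressed in code. Here is a boto3 sketch that mirrors what we just clicked through; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "simplylearn-lifecycle-rule",
            "Filter": {"Prefix": "logs/"},   # only objects under logs/
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)
```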
Let's now move on to bucket policies. Bucket policies allow or deny access not only to the bucket itself but also to the objects within it, for specific users or for other services inside the AWS network. These policies fall under the category of IAM policies. IAM stands for Identity and Access Management, a whole topic of its own that deals with security at large. No service in AWS is allowed to access another service, or data within S3 for example, without you explicitly allowing it through these IAM policies. One of the ways we do that is by attaching a policy written in JSON format. At the end of the day it's a text file we write, and that's a good thing, because that artifact can be configuration-controlled: versioned in source control alongside our source code, so when we deploy, it's part of the deployment package. There are several ways to author one. We can use the policy generator, a graphical user interface that lets us point, click, and fill in a few text boxes, and then generates the JSON document that we attach to our S3 bucket. As I said, that policy determines which users or services have access to whichever API actions are available for that resource. We might say we want certain users to be able only to list the contents of the bucket, not to delete objects or upload new ones. So you can get very fine-grained permissions based on the kinds of actions you want to allow on the resource.
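For a feel of what the generated document looks like, here is a boto3 sketch that attaches a deny-all policy like the one we'll build below; the bucket name and statement ID are placeholders of my own choosing:

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAll",          # hypothetical statement ID
        "Effect": "Deny",          # flip to "Allow" for the permissive variant
        "Principal": "*",          # anyone and anything
        "Action": "s3:*",          # all S3 API actions
        "Resource": [
            "arn:aws:s3:::my-bucket",    # the bucket itself
            "arn:aws:s3:::my-bucket/*",  # every object within it
        ],
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```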
bring this home, let's go and perform our very own lab on this. Let's now see how to create an S3 bucket
policy. Going back to our bucket, we're now going to go into permissions. So, the whole point of
coming up with a bucket policy is that we want to control who or what, the what being other services have access to our
bucket and our objects within our bucket. So there are several ways we can go about doing this. Let's edit a bucket
policy. One, we can go and look at a whole bunch of pre-canned examples, which is a good thing to do. Two, we could actually go in here and code the JSON document ourselves, which is much more difficult, of course. So what we're going to do is look at the policy generator, which is really a form-based graphical user interface that generates the JSON document for us from the answers we give it. The first question
is we got to select the policy type. Of course we're dealing with S3. So it makes sense for us to create an S3
bucket policy. The two options available to us are allowing or denying access
to our S3 bucket. Now, in this case here, we could get really fine-grained and specify certain kinds of services or certain kinds of users, but for the demonstration, we're just going to select star, which means anything or anybody can access this S3 bucket. All right. Next are the actions that we're going to allow. In this case we can get very fine-grained, and we have all these checkboxes that we can check off to give access to certain kinds of API actions. So we can say we want to give access to, you know, just deleting the bucket, which obviously is something very powerful. But you can get more fine-grained: as you can see, you have more of the getters over here, and you have the listing and the putting of new objects in there as well. Now, for demonstration purposes, we're going to say all actions. So this is a very broad and wide-ranging permission, something that you really should think twice about before granting, where it's basically saying we want to allow everybody and anything, any service, all API actions on this S3 bucket. So that's no small thing. We need to specify the Amazon Resource Name (ARN) of that bucket specifically. So, what we're going to do is go back to our
bucket, and you can see here the bucket ARN. Okay, so we're just going to copy this, paste it into the policy generator, and just say add statement. You can see here kind of a summary of what we just did, and we're going to say generate policy. And this is where it creates that JSON document for us; let me make this a little bit bigger. So, we're going to take this, copy it, and paste it into the bucket policy editor. Okay. Now, of course, we could flip this and change the effect to a deny, which would basically say we don't want anybody, or anything, any other service, to have access to this S3 bucket. We could even append /* to the resource ARN to also encapsulate all the objects within that bucket. So if I save this right now, you have a very ironclad S3 bucket policy which basically denies all access to this bucket and the objects within. Of course, this is on the other side of the spectrum. Very, very secure. So, we
might want to, for example, host a static website through our S3 bucket. So, in this case here, allowing access
would make more sense, right? So, if I save changes, you see that we get an error here saying that we don't have permissions to do this. And the reason for that is because AWS realizes that this is extremely permissive. So in order to give access to every single object within this bucket, as in the case I was describing of a static website being hosted on your S3 bucket, we first have to enable that option. So I'm just going to duplicate the tab here. And once you go back to the permissions tab, one of the first things that shows up is this Block Public Access setting. Right now it's completely blocked, and that's what's stopping us from saving our policy. We would have to go in here, unblock it, and save it. And it's also kind of like a double-clutch feature: you have to confirm it, just so you don't do it by accident. So now what
you've effectively done is really opened up the floodgates to have public access to this bucket. It's not something that can be done accidentally; you have to perform these two actions before public access can be granted. Historically, this was something AWS was guilty of: making it too easy to have public access. So now we have this double clutch. Now that the block is turned off, we can save our changes here successfully.
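For reference, the same unblocking step can be sketched in boto3 like this; the bucket name is a placeholder, and remember this is exactly the kind of change you should think twice about:

```python
import boto3

# Turn off all four Block Public Access flags so a public bucket policy
# can be saved. Setting them back to True re-blocks public access.
boto3.client("s3").put_public_access_block(
    Bucket="simplylearn-s3-demo",            # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)
```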
And you could see here that now it's publicly accessible, which is a big red flag that perhaps this is not something that you're interested in doing. Now, if
you're hosting a public website and you want everybody just to have read access to every single object in your bucket,
yes, this is fine. However, please make sure that you pay very close attention to this type of access flagged over here on the console. So, congratulations. You just got introduced to your first bucket policy, a permissive one, but at least now you know how to go through the policy generator's graphical user interface, create these policies, and paste them into your S3 bucket policy pane.
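To tie the lab together, here's roughly what the generated allow-everything policy looks like, attached programmatically; a sketch only, with hypothetical bucket names:

```python
import json
import boto3

# Principal "*" plus Action "s3:*" means anyone can perform any S3 API
# action; flip "Allow" to "Deny" for the ironclad variant shown earlier.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::simplylearn-s3-demo",    # the bucket itself
            "arn:aws:s3:::simplylearn-s3-demo/*",  # /* covers the objects within
        ],
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="simplylearn-s3-demo",
    Policy=json.dumps(policy),
)
```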
So let's continue on with data encryption. Any data that you place in an S3 bucket can be encrypted at rest very
easily using an AES-256 encryption key. So we can have server-side encryption, where AWS handles all the encryption for us, and the decryption will also be handled by AWS when we request our objects later on. But we could also supply our own keys, where we, the client uploading the object, are responsible for passing along our own generated key, which will be used by AWS to encrypt that object on the bucket side. Of course, once that happens, the client key is discarded, and you have to be very mindful that, since you've decided to handle your own key, if you ever lose those keys, that data is not going to be recoverable from that bucket on the AWS network. So be very careful on that point.
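As a rough sketch of the two approaches (bucket and key names are placeholders): server-side encryption is a single parameter on the upload, while supplying your own key means sending it with every request.

```python
import os
import boto3

s3 = boto3.client("s3")

# Server-side encryption (SSE-S3): AWS encrypts with AES-256 and
# decrypts transparently on later GETs.
s3.put_object(
    Bucket="simplylearn-s3-demo",
    Key="demo-sse.txt",
    Body=b"example contents",
    ServerSideEncryption="AES256",
)

# Customer-provided key (SSE-C): we pass our own 256-bit key; AWS uses
# it to encrypt and then discards it. Lose this key and the object is
# unrecoverable.
key = os.urandom(32)
s3.put_object(
    Bucket="simplylearn-s3-demo",
    Key="demo-ssec.txt",
    Body=b"example contents",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,
)
```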
We can also have a very useful feature called versioning, which will allow you to have a history of all the changes to an object over time. Versioning works exactly how it sounds: every time you make a modification to a file and upload that new version to S3, it will have a brand-new version ID associated with it. So over time you get a sort of stack, a history of all the file changes. You can see here at the bottom you have an ID ending in all ones and then an ID ending in 121212. So eventually, if you ever wanted to revert to a previous version, you could do so by accessing one of those previous versions. Of course, versioning is not an
option that's enabled by default; you have to go ahead and enable it yourself. It is an extremely simple thing to do. Now, there may be a situation where you already have objects within your buckets and you only then enable versioning. Well, versioning would only apply to the new objects uploaded from the point that you enabled it. The objects that were there before that point will not get a specific version ID attached; in fact, they will have a null version ID attached to them. It's only after you modify those objects later on and upload a new version that they will get their own version IDs. So, right now what we're going to be doing is a lab on versioning. So let's go ahead and do
that right now. In this lab, we're going to see how to enable versioning in our buckets.
Enabling versioning is very easy. We're simply going to click on our bucket, go into properties, and there is going to be a bucket versioning section. We're going to click on edit and enable it. Once that's done, any new objects that are uploaded to that S3 bucket will benefit from being tracked by a version ID. So if you upload objects with the same file name after that, they'll each have a different version ID. You'll have version tracking, a history of the changes for that object. Let's actually go there and upload a new file. I'll upload one called
index.html. So, we're going to simulate a situation where we've decided to use an S3 bucket as the source of a static website. And in this index.html file, let's take a look at what's in there right now. You can see that we have welcome to my website and we're at version two. Okay. So if I click on this file right now and I go to versions, I can clearly see we have a specific version ID, and then we have a sort of history of what's going on here. Now, I purposely enabled versioning earlier and then tried to disable it. But here's the thing with versioning: you cannot disable it fully once it's enabled. You can only suspend it. Suspending means that whatever version IDs those objects had before you decided to suspend will remain. So you can see I have an older version here that has an older version ID. And at this point here, I decided to suspend versioning. So instead of deleting the entire history, it puts down what's called a delete marker. Okay, and you could always roll back to that version if you want. Now, in the demonstration, when we started it together, I enabled it again. So you can see this is actually the brand-new version ID as we did it together, but you don't lose the history of previous versions if you had suspended it before. So that's something to keep in mind, right? And it'll come up in the exam, where they'll ask you: can you actually disable versioning once it's enabled? And the answer is no. You can only suspend it, and your history is still maintained. Now we have that version there. And let's say I come to
this file and I want to upgrade it. I don't know, I say version three, right? And now what's going to happen is, if I click on this version, just as the current one with version two, and I open it, we should see version two, which is fine. That's expected. If we go back to our bucket and upload that new file that has version three in there, the one I just modified, we should now see in that index.html file a brand-new version created under the versions tab. And there you go, 14:58, just two minutes later. You can see here we have a brand-new version ID, right? And if I open this one, you can see version three. So now you have a way to enable versioning very easily in your buckets. And you have also seen what happens when you want to suspend versioning, and what happens to the history of the versions those files had before. Just to actually go back here to the
properties where we enabled versioning in the first place: if I want to go back in here and disable it, like I said, you can't disable, you can only suspend. And that's where that delete marker gets placed, but all your previous versions retain their version ID. So don't forget that, because that will definitely be a question on your exam if you're interested in taking the certification exam. So congratulations, you just learned how to enable versioning.
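Here's a minimal boto3 sketch of what we just did in the console (bucket name hypothetical): enable versioning, upload the same key twice, and list the resulting version history.

```python
import boto3

s3 = boto3.client("s3")
bucket = "simplylearn-s3-demo"               # hypothetical bucket name

# Enable versioning. Status="Suspended" is the only way to "turn it off"
# later; it can never be fully disabled once enabled.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Two uploads to the same key now produce two distinct version IDs.
s3.put_object(Bucket=bucket, Key="index.html", Body=b"<h1>version 2</h1>")
s3.put_object(Bucket=bucket, Key="index.html", Body=b"<h1>version 3</h1>")

# Walk the history, newest first.
versions = s3.list_object_versions(Bucket=bucket, Prefix="index.html")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["IsLatest"], v["LastModified"])
```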
Let's move on to cross-region replication, or CRR as it is known. There will be many times when you find yourself with objects in a bucket and you want to share those objects with
another bucket. Now that other bucket could be within the same account, could be within another account within the
same region, or could be within a separate account in a different region. So there are varying degrees there, and the good thing is that all of those combinations are available. So CRR, if we're talking about cross-region replication, is really about replicating objects across regions, something that is not enabled by default because it will incur a replication charge: it's syncing objects across regions, and of course you are spanning a very wide area network in that case, so there is a surcharge for that. Now, doing so is quite simple. But one of the things that we have to be mindful of is to give permissions for the source bucket, which has the originals, to allow this copying of objects to the destination bucket. So if we're doing this across regions or accounts, of course, we would have to come up with IAM policies, and we would also have to exchange credentials in terms of IAM user credentials, account IDs, and such. We're going to be doing a demonstration in the same account in the same region, but largely these would be the same steps if we were going cross-region. So, this is
something you might find yourself doing if you want to share data with other entities in your company. Maybe you're a multinational and you want to have all your log files copied over to another bucket in another region for another team to analyze and extract business insights from. Or it might just be that you want to aggregate data in a separate data lake in an S3 bucket in another region, or, like I said, it could even be in the same region or in the same account. So it's all about organizing and moving data around across these boundaries. Let's actually go through a demonstration and see how we can do CRR. Let's now see how we can perform
cross region replication. We're going to take all the new objects that are going to be uploaded in the SimplyLearn S3
demo bucket and we're going to replicate them into a destination bucket. So, what we're first going to do is create a new
bucket. Okay? And we'll just tack on the number two here. And this will be our destination bucket where all those
objects will be replicated to. We're going to demonstrate this within the same account, but it's the exact same
steps when doing this across regions. One of the requirements when performing cross-region replication is to enable versioning. If you don't do this now, you can do it at a later time, but it is necessary to enable it before creating a cross-region replication rule. All right, so let me create that bucket. And now, after the bucket is created, I want to go to the source bucket and configure a replication rule under the management tab here. So I'm going to create a replication rule, call it simplylearn-rep-rule, and I'm going to enable it right off the bat. The source bucket of course
is the SimplyLearn S3 demo. We could apply this to all objects in the bucket or perform a filter. Once again, let's
keep it simple this time and apply it to all objects in the bucket. Of course, a caveat here: this will only apply to new objects that are uploaded into this source bucket, and not the ones that are already pre-existing there. Okay. Now, in terms of the destination bucket, we want to select the one we just created. We can choose a bucket in this account, or, if we really want to go cross-region or cross-account, we could specify that and put in the account ID and the bucket name in that other account. So we're going to stay in the same account. We're going to browse and select the newly created bucket. And we're also going to need permissions
for the source bucket to dump those objects into the destination bucket. So we can either create the role ahead of
time, or we can ask this user interface to create a new role for us. We'll opt for that, and we'll skip these additional features over here that we're not going to talk about in this demonstration. We're just going to save this. That will create our replication rule, which is automatically enabled for us right now. So let's take a look at the overview here. You can see it's been enabled. Just to double-check: the destination bucket is the demo 2, and we're talking about the same region. And again, here we could opt for additional parameters, like a different storage class in the destination bucket that the object is going to be deposited in, etc. For now, we just created a simple rule. Now, if we go back to the original source bucket, which we're in right now, and we upload a new file, which will be a transactions file in CSV format. Once this is uploaded, that replication rule will kick in and will eventually (it's not immediate) copy the file into the demo 2 bucket. Now, I know it's not there yet. So what I'm going to do is pause the video, come back in two minutes, and when I click on this, the file should be in there. Okay. So let's now double-check and make sure that object has been replicated. And there it is, replicated as per our rule. So congratulations, you just set up your first same-account S3 replication rule.
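The same setup can be sketched as API calls; the bucket names, account ID, and role ARN below are placeholders, and both buckets must already have versioning enabled:

```python
import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite on both the source and destination.
for b in ("simplylearn-s3-demo", "simplylearn-s3-demo2"):
    s3.put_bucket_versioning(
        Bucket=b, VersioningConfiguration={"Status": "Enabled"}
    )

s3.put_bucket_replication(
    Bucket="simplylearn-s3-demo",            # source bucket
    ReplicationConfiguration={
        # Role S3 assumes to read the originals and write the copies.
        "Role": "arn:aws:iam::123456789012:role/simplylearn-rep-role",
        "Rules": [{
            "ID": "simplylearn-rep-rule",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},                    # apply to all new objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::simplylearn-s3-demo2"},
        }],
    },
)
```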
Let's now take a look at transfer acceleration. So transfer acceleration is all about
giving your end users the best possible experience when they're accessing information in your bucket. So you want
to give them the lowest latency possible. You can imagine if you were serving a website, you'd want people to have the lowest latency possible; of course, that's very desirable. So, in terms of traversing long distances: if you have your bucket in, for example, the us-east-1 region in Northern Virginia, and you had users, let's say, in London that want to access those objects, of course they would have to traverse a longer distance than users based in the United States. And so, if you wanted to bring those objects closer to them in terms of latency, then we could take advantage of what's called the Amazon CloudFront content delivery network, the CDN, which extends the AWS backbone by providing what's called edge locations. Edge locations are really data centers placed in major city centers, where our end users are mostly located, in more densely populated areas, and your objects will be cached in those locations. So if we go back to the example of your end users being in London, well, they would be accessing a cached copy of those objects that were stored in the original bucket in, for example, the us-east-1 region. You will most likely get a dramatic performance increase by enabling transfer acceleration. It's very simple to enable. Just bear in mind that when you do so, you will incur a charge for using this feature. The
best thing to do is to show you how to go ahead and do this. So let's do that right
now. Let's now take a look at how to enable transfer acceleration on our SimplyLearn S3 demo bucket. By simply
going to the properties tab, we can scroll down and look for a heading called transfer acceleration over here
and very simply just enable it. So what does this do? This allows us to take advantage of
what's called the content delivery network, the CDN, which extends the AWS network backbone.
The CDN network is strategically placed into more densely populated areas, for example, major city centers. And so if
your end users are situated in these more densely populated areas, they will reap the benefits of having transfer
acceleration enabled, because the latency they experience will be significantly decreased. So their performance
is going to be enhanced. If we take a look at the speed comparison page for transfer acceleration, we can see that
once the page is finished loading, it's going to do a comparison. It's going to perform, first of all, what's called a multipart upload, and it's going to see how fast that upload was done with and without transfer acceleration enabled. Now, this is relative to where I am running this test. Right now I'm actually running it from Europe, so you can see that I'm getting very good results if I were to enable transfer acceleration and my users were based in Virginia. Of course, the percentage difference varies as the test region gets closer to or further from where my browser is running. You can see here for the United States I'm getting pretty good percentages. As the test moves closer to Europe, it gets lower, of course, but is still very good. Frankfurt, again, is probably about the worst I'm going to get here, since I'm situated in Europe. And of course, as I look more towards the Asian regions, you
can see once again it kind of scales up in terms of better performance. So, of course, this is an optional feature, enabled as I just showed over here, and it's a feature that you pay additionally for. So bear that in mind. Make sure that you take a look at the pricing page in order to figure out how much this is going to cost you. So, that is it. Congratulations. You just learned how to enable transfer acceleration to lower the latency from the end user's point of view.
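In code, enabling it is one call, and clients opt in by routing requests through the accelerate endpoint; a sketch with a hypothetical bucket name:

```python
import boto3
from botocore.config import Config

# Enable transfer acceleration on the bucket (additional charges apply).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="simplylearn-s3-demo",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads then go via <bucket>.s3-accelerate.amazonaws.com edge endpoints.
s3_accel = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accel.upload_file("index.html", "simplylearn-s3-demo", "index.html")
```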
We're now ready to wrap things up in our conclusion and go over, at a very high level, what we just spoke about. So we talked about what S3 is, which is a core service, one of the original
services published by AWS in order for us to have unlimited object storage in a secure, scalable, and durable fashion.
We took a look at other aspects of S3 in terms of the benefits, mainly focusing on the cost savings we can attain in S3 by looking at different storage classes. Now, of course, S3 is industry-recognized as one of the cheapest object storage services out there that has the most features available. We saw what goes into object storage in terms of first creating our buckets, which are high-level containers for us to store our objects in. Again, objects are really an abstraction of the data in there as well as the metadata associated with that data. We took a look at the different storage tiers, from the default, which is standard, all the way to the cheapest ones, the Glacier tiers, which are meant for long-term archived objects: for example, log files that you may hold on to for a couple of years, may not need to access routinely, and will have the cheapest pricing option by far. So we have many pricing tiers, and if you want to transition from one tier to the next, you would implement a lifecycle policy or use the intelligent tiering option that can do much of this for you. We took a look at some very interesting features, starting from the lifecycle management policies we just talked about, all the way to versioning, cross-region replication, and transfer acceleration. So with this conclusion, you are now ready to at least start working with S3.

Hello everybody, my name is Kent, and today we're going to be covering AWS
identity and access management, also known as IAM, in this tutorial by SimplyLearn. These are the topics we'll be covering today: we'll be defining what AWS security is, the different types of security in AWS, what exactly identity and access management is, the benefits of identity and access management, how IAM works, its components, its features, and at the end we're going to do a great demo on IAM with multi-factor authentication and users and groups. So without any further ado, let's get started. Let's get started with what
is AWS security. Your organization may span many regions or many accounts and you
may encompass hundreds if not thousands of different types of resources that need to be secured. And therefore,
you're going to need a way to secure all that sensitive data in a consistent way across all those accounts in your
organization while meeting any compliance and confidentiality standards that have to be met. So, for example, if
you're dealing with healthcare data, credit card information, or personally identifiable information like
addresses, this is something that needs to be thought out across your organization. So of course we have
individual services in AWS which we will explore in this video. However, in order to govern the whole process at an
organizational level, we have AWS Security Hub. Now, AWS Security Hub is known as a CSPM, which stands for cloud security posture management tool. So what does that really mean? Well, it's going to encompass, like I said, all these tools underneath the hood in order to bring a streamlined way for you to organize and adhere to these standards across your organization. It'll identify any misconfiguration issues and compliance risks by continuously monitoring your cloud infrastructure for gaps in your security policy enforcement. Now, why is that important? Well, these misconfigurations can lead to unwanted data breaches and data leakages. So, in order to govern this at a very high level, like I said, we need to incorporate AWS Security Hub, which will automate and manage all these underlying services for us, and perhaps take automated action; we call those remediation actions. And you can approve these actions manually or automate them as you see fit across your organization. So let's delve a little bit deeper into what AWS security is, and then we'll start looking individually at some of these
services. Our organization has special needs across different projects and environments. Of course, the production, development, and test environments are all going to have their own unique needs. However, it doesn't matter which project or business unit we're talking about, or what their storage and backup needs are: they should all be implemented in a concise, standardized way across your organization to meet the compliance standards we were just talking about. So, we want to automate all these tedious manual tasks that we've been doing, like ensuring that our buckets in S3 are all encrypted, or all our EBS volumes are encrypted, and we want to have an automated process in place in order to give us time for other aspects of our business, perhaps our application development, to bring better business value and allow us to grow and innovate the company in a way that's best suited for it. So, let's take a look at the different types of AWS security. Now, there are many different
types of security services out there. However, we're going to concentrate primarily on IAM in this video tutorial. I like to think of IAM as the glue between all AWS services, because by default we don't have permission for any one service to communicate with another. Let's just take, for example, an EC2 instance that wants to retrieve an object from S3. Well, those two services could not interact unless we allowed it through IAM. So it's something that's extremely important to learn, and by the end of this video tutorial, you'll be able to understand what IAM is. We have
Amazon GuardDuty here, which is all about logs. It basically aggregates logs from, for example, CloudTrail, which we still haven't talked about; it's at the end of this list over here. But instead of you looking at CloudTrail individually, GuardDuty will take a look at your CloudTrail logs, your VPC flow logs, and your DNS logs, and monitor your account, your network, and your data access via machine learning algorithms. It'll identify threats, which you can automatically remediate through a workflow that you've approved. So, for example, if a known malicious IP address is making API calls, well, those API calls are registered in CloudTrail, right? So this machine learning algorithm will be able to detect that and take action. So this is really governed at a higher level, rather than you having to individually inspect all your DNS logs, flow logs, and CloudTrail trails and take action through some scripting approach that you've come up with. GuardDuty manages all that. We have Amazon Macie, which once again uses machine learning but also uses pattern matching.
Pattern matching can be done with pattern libraries that are already in place, or you can come up with your own patterns, and that's used to discover and protect your sensitive data. So, for example, if you had healthcare data, which falls under HIPAA compliance, or credit card data lying around, well, Macie will discover it and protect it. As your data grows across your organization, that might be harder and harder to do on your own, so Macie really facilitates discovering and protecting that data, and you can concentrate on other things. AWS
Config is something that works in tandem with all the other services. It's able to continuously monitor your resources' configurations and ensure that they match your desired configuration. So, for example, maybe you state that all your S3 buckets should be encrypted by default, or you want to make sure that your IAM user access keys are rotated, and if they're not, then take remediation action. A lot of these automated remediation actions are basically executed via AWS Config, so you'll see AWS Config actually used by other services underneath the hood. And then you have CloudTrail. CloudTrail is all about logging every single type of API call
that's made. It'll always record the API call: who made it, from what source IP, the parameters that were sent with the API call, and the response. All that data is really a gold mine for you to investigate if there's any security threat. And again, there can be many trails, and the lots and lots of data generated by these trails can be analyzed by those other services, specifically GuardDuty here, which automates that process so you don't have to. So let's get to the next one, which is what is identity
and access management. So what is IAM? Like I said, IAM is the glue between all AWS services. It will allow those services to communicate with each other once you put IAM policies into place. The other thing is, it allows us to manage our AWS users, which are known as IAM users, and group them together into groups to facilitate assigning and removing permissions, instead of doing it on an individual scale. So, of course, if you wanted to add a permission in one shot to a group that contained 200 IAM users, well, you can do that in one operation instead of 200. Now, we do have a distinction between IAM users and end users. End users use applications online, whereas IAM users are the employees in your organization who interact directly with AWS resources. So, for example, an EC2 instance would be a resource that, let's say, a developer would be interacting with, and traditionally those permissions are used full-time, from 9 to 5, let's say, 5 days a week, all year round. Okay. Now, sometimes those users have to have elevated permissions for a temporary amount of time, and that is
more suited for a role. That will eliminate the need to create separate accounts just for the types of actions that are needed for, let's say, an hour a month or two hours a week, something like backing up an EC2 instance, removing some files, doing some cleanups. Traditionally, you don't have those permissions as a developer, but you can be given temporary permissions by assuming a role. It's kind of like putting a hat on your head and pretending you're somebody you usually are not, for a temporary amount of time. Once you take off the hat, you go back to being your old boring IAM user self, which doesn't have access to performing backups or cleanups and things like that. Roles interact with a service called the Security Token Service, which gives us three specific keys: an access key and a secret key, much like a username and password, and then a session token, which gives you access to this role's elevated permissions for only a limited session, roughly 1 to 12 hours. We'll talk more about roles as we go on through this video tutorial, but it's important that you at least remember at this point, even if you don't know all the details, that roles have to do with temporary, elevated permissions.
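As a hedged sketch of that flow (the role ARN is a placeholder): you call STS, get back the three temporary keys, and build clients from them.

```python
import boto3

# Exchange long-term credentials for temporary ones by assuming a role.
sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/backup-operator",  # hypothetical
    RoleSessionName="temporary-backup-session",
    DurationSeconds=3600,                    # 1 hour, up to the role's maximum
)

creds = resp["Credentials"]                  # AccessKeyId, SecretAccessKey, SessionToken
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# ec2 now acts with the role's elevated permissions until the token expires.
```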
So, what are the benefits of IAM? Well, across our organization, we are going to have many accounts and, like I said, hundreds if not thousands of resources. So scalability is going to be
an issue if we don't have very high-level visibility and control over the entire security process. And once we have a tool like, for example, Security Hub, which we saw on the first slide, we can continuously monitor and maintain a standard across our organization. So it's very, very important to have that visibility, and we do when we integrate with AWS security. Now, we also need to eliminate the human factor, right? We don't want to manually put out fires every single time. Of course, there will be times when something new occurs that we've never seen before, where manual intervention will be needed. However, once an occurrence happens and then reoccurs, we can obviously come up with a plan to remediate it. So once we automate, we can definitely reduce the time to fix that recurring error; or it could be a new error that we don't have to deal with at all, because we're using machine-learning-based services like I was talking about, like GuardDuty or Macie. And we can really reduce the risk of security intrusions, data leakage, and such by using IAM to facilitate this. Of course, you may have many compliance needs. You
may be dealing with applications that use healthcare data or credit card information for payments. Or you might be dealing with a government agency, let's say in the US, that has certain compliance requirements and needs assurances that, if its data is stored in the cloud, you're still following the same compliance controls that you were, let's say, on premises. We have a very, very long list of compliance requirements that each AWS service adheres to, and you can go to the AWS documentation online and figure out if, for example, DynamoDB is HIPAA compliant (and it is), and so you can be assured that you can use it. AWS itself has to constantly keep these compliance controls in place, and they themselves have to pass all these certifications, and that gets passed on to us. So it's much less work for us to implement these compliance controls; we can just inherit them by using the AWS security model via IAM. And last but not least, we can build the highest standards for privacy and
data security. Now, having all this in our own on-premises data center poses security challenges, especially physical security challenges. You have to physically secure the data center; you might have security personnel that you need to hire, 24/7 camera surveillance, etc. So just by migrating to the cloud, we can take advantage of AWS's global infrastructure. They're very, very good at building data centers (that's part of their business) and securing them. Of course, we have a shared security model in place. However, you can rest assured that, at least as the lowest common denominator, physical security has been put into place for us. Again, as more managed or higher-end services are used in AWS, more and more of that security model will be AWS's responsibility and less yours. We will always have a certain type of responsibility for our applications. But we can again use these high-level services to ensure that our ability to encrypt the data at rest and in transit is always maintained, by coming up with specific rules that need to be adhered to. And if those rules are not adhered to, then we have remediation actions that take place in order to maintain, again, that cross-account or organization-wide security control. Okay, so lots of benefits of using IAM. Let's now take a look at exactly how IAM works and delve deeper into authentication and
authorization. So there are certain terms here that we need to know about. One of them is principal. Now, a principal is nothing more than an entity that is trying to interact with an AWS service, for example an EC2 instance or an S3 bucket. That could be a user (an IAM user), it could even be a service, or it could be a role. We're going to take a look at how principals need to be specified inside IAM policies.
Authentication. Authentication is exactly what it sounds like: who are you? It could be something as simple as a username and password, or it could involve an email address, for example if you're the root user of the account, the one used when creating the account for the very first time. Other, normal users will not need to log in with their email; it'll only be the root user that needs that. We could also have developer-type access, which needs access, let's say, via the command line interface or a software development kit, also known as an SDK. For those kinds of access, we're going to need access keys and secret keys, or public/private key pairs, say if you want to authenticate and log in through SSH to an EC2 instance. So there are many different types of authentication that we could set up when creating IAM users, based on what kind of access they need. Do they just need access to the console? Do they need access to services for programmatic needs? Those are different types of authentication. Then we have the actual request. Now, we could make a request in AWS in various ways. We're going to
be exploring that via the console, the AWS console. However, every button you click, every drop-down list you select, invokes an API call behind the scenes. Everything goes through a centralized, well-documented API. Every service has an API. So if you're dealing with Lambda, let's say, well, Lambda has an API. If you're dealing with EC2, EC2 has an API. Now, you can interact with that API, like I said, via the console indirectly. You can use the command line interface. You can use an SDK for your favorite programming language, like Python. So all your requests get funneled through an API. But that doesn't mean you're allowed to do everything just because we know who you are and you have access
to some keys, let's say, to make some API calls. You have to be authorized to perform certain actions. So maybe you're only authorized to communicate with DynamoDB, and maybe you're only authorized to perform reads and not writes. We can get very fine-grained authorization through IAM policies in order to control not only who has access to our AWS account but what they're allowed to do: very fine-grained actions, read actions, write actions, both, or maybe just describing or listing some buckets. So, depending on the AWS user that's logged in, we can control exactly what actions are allowed, because every API has fine-grained actions that we can control through IAM, and of course many, many different types of resources. When we're coming up with these policies, all these actions can be scoped to resources. So perhaps we can say: well, this user only has read-only access to S3, specifically these buckets, and can write to a DynamoDB table, but cannot terminate or shut down an EC2 instance. All these kinds of actions can be controlled on a per-resource basis, based on the kind of role you have in the project or in the company.
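To make that concrete, here's a sketch of such a fine-grained identity policy (all names, ARNs, and the account ID are hypothetical): read-only access to one S3 bucket, writes to one DynamoDB table, and an explicit deny on terminating EC2 instances.

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # read-only access to one specific bucket and its objects
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::simplylearn-s3-demo",
                "arn:aws:s3:::simplylearn-s3-demo/*",
            ],
        },
        {   # write access to one specific DynamoDB table
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/demo-table",
        },
        {   # explicitly forbid terminating EC2 instances
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
        },
    ],
}

boto3.client("iam").create_policy(
    PolicyName="demo-fine-grained-policy",
    PolicyDocument=json.dumps(policy),
)
```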
So now let's take a look at the components of IAM. We've already seen that we can create IAM users and how they are different from your normal end users.
Those entities represent a person working for your organization who can interact with AWS services, and the permissions that we assign to such a user through what are called identity-based policies will, for example, be permissions that are always necessary every time they're logged in. Perhaps this user, Tom, always needs access to the following services over here, for example an RDS instance or an S3 bucket, etc. And sometimes they will need temporary access to other services, for example DynamoDB. In those cases there would be a combination of normal IAM policies assigned to the user and certain assume-role calls done at runtime in order to acquire those temporary, elevated credentials. There is also the root user, which is different from your
typical superuser administrator access. There are some things that a root user can do that an administrator cannot. For example, a root user logs in with their email address and also has the option to change that email address, something that is very tightly coupled to the AWS account; you cannot change the email address with an administrator account. The other thing a root user can do is change the support contract. They can also view billing or tax information, for example. So the best practice is this: once you've created your AWS account, you are obviously the root user. The best thing to do is to enable multi-factor authentication, store away those credentials in a secure location, and make your first administrative task creating an administrator superuser. From that point on, you will log in only as an administrative user, and you will create other administrative users, and those administrative users will create other IAM users. So never log into your AWS account as the root user unless you need to access that specific functionality I mentioned that is over and above a superuser administrator account. So here we have a set of IAM users, and we could assign IAM policies to them directly, which give them
access to certain services. However, the best practice is to create groups, which are just collections of IAM users. Suppose we have a developer group and a group of security administrators. Of course, we could assign different types of policies to each group, and every user that is assigned to the developer group will inherit those IAM permissions. It makes it much easier to manage your IAM users this way, and also because you can have users assigned to groups, as you can see here; what's not shown here, but is possible as well, is that you could have a user that is part of two groups at the same time. The thing that you cannot have, though, is the notion of a subgroup: I can't extend this group and create a smaller group of developers out of this big group here. So we don't have hierarchical groups, but we do allow users to partake in different groups at the same time. They will inherit those IAM policy permissions from each group: for example, if user A was part of developers and user A was also part of security, they would inherit these two policy statements over here. Now, there is another type of IAM policy, and that is called a resource-based policy. Resource-based policies are not attached directly to, for example, users; they're attached to resources, like an S3 bucket. And in such a case, a user group can't be designated as a principal; that's just one of the restrictions. However, there are other ways to get around that, as I said. When you think of an IAM
role, I want you to think of temporary credentials that are acquired at runtime and are only good for a specific amount of time. Now, an IAM role is really made up of two components. First, we have what's called a trust policy: who do you trust to assume this role? Who do you trust to even make an API call to assume the role? After that, we have an IAM policy: once you've been trusted to assume the role and you get those temporary tokens, what are you allowed to do? Maybe you're allowed to have read/write access to DynamoDB.
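A sketch of those two halves in code (role and policy names are hypothetical): the trust policy says who may assume the role, and the attached permissions policy says what the role can then do.

```python
import json
import boto3

iam = boto3.client("iam")

# 1) Trust policy: WHO may call sts:AssumeRole -- here, the EC2 service.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="my-ec2-role-to-s3",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# 2) Permissions policy: WHAT the role may do once assumed.
iam.attach_role_policy(
    RoleName="my-ec2-role-to-s3",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)
```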
So here we have no long-term credentials connected to a role. You do not assign a role to a user. The user or service will assume the role via an API call to the Security Token Service (STS). And with those temporary credentials, we'll then be able to access, for example, an EC2 instance, if the IAM policy attached to the IAM role says that you can do so. So your users, your applications (perhaps running on an EC2 instance), and the AWS services themselves don't normally have access to AWS resources, but through assuming a role they can dynamically attain those permissions at runtime. It's a very flexible model that allows you to bypass the need to embed security credentials all over the place and then have to maintain those credentials, make sure they're rotated, and make sure they're not acquired by anybody trying to find a security hole in your system. So, from a maintenance point of view, roles are really an excellent tool to use in your security toolbox. So let's take a look at the components of IAM. We have the IAM policy over here,
which is really just a JSON document, and that JSON document will describe what a user can do if it's attached directly to the user, or, if the IAM policy is attached to a group, whatever user is in that group will inherit the same policy. So these are more long-term permissions that are attached to your login for the duration of your login, whereas roles are, as we've already stated, temporary. Again, once you've established a trust relationship, we can assign a policy to a role, for example a policy that gives you only read-only access to a CloudTrail trail. Now, it's up to the user to assume that role, either through the console or through some programmatic means; whichever way it's performed, the user will have to make a direct or indirect call to STS for the assume-role call. These permissions are examined by AWS when a user submits a request, so this all happens dynamically at runtime. When you change a policy, it will take effect pretty much immediately: as soon as the user makes the next request, that new security policy, whether you've just added or removed a permission, will take immediate effect. So what we're going to do now is really quickly go to the AWS console, and I'll actually show you what an IAM policy looks like. So, I've just logged into my AWS
management console. And if I search for identity and access management, it should be the first one to show up. I'll just click on that. I want to show you how these IAM policies actually look. If you go to the left-hand side here, there's an entry for policies, and there are different types of policies that we can filter. So, let's filter them by type. We have customer managed, we have AWS managed, and we have AWS managed policies more in line with job functions as well. We're going to take a look at AWS managed. All that means is that AWS has created, and will actually update and maintain, these pre-established and already vetted IAM identity policies. So I could filter further down, maybe look at all the S3 policies that are available, and we can see there are some that allow only read-only access, some that will give us full access, and of course other types here that we're not all going to explore. But let's just take a quick look here at the full access one. You can see it's just a JSON document, and of course you cannot modify it, since this is a managed policy. However, you can copy it and modify it, and in that case it will become your own version of it, so it'll be customer defined; when you go here and do a filter, you can filter on your own customer managed policies. So over here we are simply
allowing certain actions, right? We talked about actions before that were based on the API. So here we have actions that start with s3:. This is for the S3 API, and instead of specifying exactly which API calls are allowed, we're basically saying all API calls, with the star representing all API actions. And again, there is another category of S3 which deals with Object Lambda, and we are allowing all operations on that specific API as well. If this were our own customer managed identity policy, we could actually scope down which resource, which bucket, let's say, or which folder in a bucket this would apply to. But in this case, since this is full access, we're basically being given permission to execute every single API action under S3 for all buckets. We're going to see in the lab, the demonstration at the end of this tutorial, how to scale that down. Okay. So, that's what an IAM policy looks like. And if you want to assign an IAM policy to a user, well, stay tuned for the demonstration at the end of the video tutorial. What's not at the end of the video tutorial is how to create a role. So, I'll show you what a role is here. Again, there's a whole bunch of
roles already set up that we can go and take a look at here. But we can go and create our own role as well. I might create a role, for example, that gives EC2 access to S3. So if I click EC2 and click next: permissions, I can actually assign the S3 permission right over here, just an IAM policy like the one we took a look at. Optionally assign it a tag, and review it. I'm just going to call this my-ec2-role-to-s3. Create the role, and because I selected EC2 as the service, it will automatically include that in the trust policy. So now, when I go and create an EC2 instance, I can attach this role to that EC2 instance, and that EC2 instance will automatically be able to have full access to any S3 bucket in the AWS account, if we're using the Amazon Linux 2 AMI, let's say. So let's go take a look at where we would actually attach that role on the EC2 side.
So when you're creating an EC2 instance, let's just say we go here and say launch an EC2 instance, and we'll pick this AMI over here because it already includes the packages necessary to perform the assume-role call to STS for us. We won't have to code that, and we don't have to embed any credential keys in order to get this running, which is really good, and which is the whole point of what I'm doing: by assuming a role, I don't have to manage that. I'm just going to pick the free-tier t2.micro, and it's here, once you pick whatever you need, whatever VPC and whatnot, that you come to the IAM role setting and select the role to be assumed dynamically. Now, I have a whole bunch of them here, but this is the one I created: my-ec2-role-to-s3. If I go and launch this EC2 instance, and then I had an application, let's say, that needed to contact S3, or even if I went to the command line of the EC2 instance, I would now be able to communicate with the S3 service, because they (they being AWS) would actually use this role to perform an assume-role call to get those temporary STS tokens for me, which then put my IAM policy into effect, which gives me full access to S3, and off I go. So this is a great way, like I said, to avoid having to administer any embedded keys in this EC2 instance, and I could take this role away at any time as well. So there you have it, a little demonstration of how to create a role, how to attach it to an EC2 instance, and also what AWS managed and customer managed policies look like in IAM. Of course, there is also the fact that we have multiple accounts, and so we could assign a policy that will also
allow a user in another account access to our account. So cross-account access is something that's very much needed, and it is also a feature of IAM. When you need to share access to resources from one account to another in your organization, we are in need of an IAM policy to facilitate that, and roles can come into play here as well. By assigning granular permissions, we can make sure that we implement the concept, the best practice, of least-privilege access, and we also have secure access to AWS resources, because by default, again, we have the principle of least privilege, and no service can communicate with another until we enable it via IAM. So it's very, very secure out of the box. Of course, if you want to add an additional layer of authentication security, we have multi-factor authentication, and that'll allow you to ensure, through a hardware device perhaps, let's say a hardware token device, or your cell phone with some software that generates a code, that you are actually the person typing the username and password, and not somebody who's actually stolen your username and password. So at the end of this video tutorial, I will
show you how to enable that. And of course, identity federation is a big thing, where your users may have been defined outside of AWS. You may have thousands of users that you've already defined, let's say, in an Active Directory or in an LDAP environment on premises, and you don't want to recreate all those users again in AWS as IAM users. So we can link your user database that's on premises, or even user accounts that have been defined, let's say, on a social site like Facebook, Twitter, or Google, and tie that into your AWS account. There is an exchange of information, there is a setup to do, but at the end of the day it is a role in combination with an IAM policy that will allow you to map what you're allowed to do once you've been authenticated by an external system. So this has to do with, like I said, perhaps LDAP or Active Directory, and you can tie that in with AWS SSO. There are many different services here that can be leveraged to facilitate what we call identity federation. And of course, IAM itself is free. You're not charged per API
request. AWS is very keen to put security first, and so IAM is free; you pay for just the services that you use. For example, if you're securing an EC2 instance, of course that EC2 instance falls under the current pricing plan. Of course, compliance, like I mentioned, is extremely important, making sure that it's business as usual in the cloud. So, for example, if you were using the Payment Card Industry Data Security Standard to make sure that you're storing, processing, and transmitting credit card information appropriately in order to do your business, well, you can be assured that the services you use in AWS are PCI DSS compliant, and there are many, many other types of compliance that AWS adheres to as well. Password policies, of course, are important. By default, we do have a password policy in place, but when you go to the IAM console, you're free to update that password policy and make it more restrictive, based on whatever policy you have at your company. So, many features. And I think we're ready now to go into a full demonstration, where I will show you how to incorporate IAM users, groups, and multi-factor authentication. So get ready. Let's do
it. I'm going to demonstrate now how to attach an S3 bucket policy via IAM. I've already created a bucket called SimplyLearn S3 IAM demo. What we're going to do is attach a bucket policy right over here; we're going to have to edit it, of course. The basis of the demonstration will be to allow a user that has MFA, so multi-factor authentication, set up to have access to the contents of a folder within this bucket. So of course, now I have to create a folder; just simply call it folder one. Create that folder. And now I'm going to go into that folder and upload a file so that we can actually see this in action. I'm going to select a demo file that I prepared in advance called demo file user 2 MFA. I'm going to upload
that. All right. And now what's going to happen, eventually, when I create our two IAM users, is that one will have access to view the contents of this file, which should look like something very simple. If I open this up within the console, you'll be able to see it; well, it's pretty small right now, so I'll make it a little bit bigger. It says: this is a demo file which only user2-MFA should be able to have access to. So what's left to do now is to create two users, one called user1 and one called user2-MFA. And that's what we're going to do right now.
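As a preview of where this is heading, a bucket policy statement along these lines (names hypothetical) is one way to express the MFA requirement, using the aws:MultiFactorAuthPresent condition key:

```python
# One Deny statement blocks reads under folder1/ for any caller who did
# not authenticate with MFA; BoolIfExists also catches requests where
# the key is absent entirely.
mfa_statement = {
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::simplylearn-s3-iam-demo/folder1/*",
    "Condition": {
        "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
    },
}
```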
Let's head over to IAM, Identity and Access Management. And on the left-hand side
here, we're going to click on users. You can see here I don't have much going on. I just got an
administrator user. So I want to create two new users. We'll start with one at a time. So one called user one. Now by
default we have um you know the principle of lease privilege which means this user is not allowed to do anything
unless we give them access to either the console right over here which I will do right now and also if they were a kind
of programmatic user that needed access to um specific keys in order to interact with uh the APIs via the command line
interface or the software development kit. for example, if you were a Java developer. So in this case, that is not
what I want. So I'm just going to assign console access and I'm also going to create a default password for this
user, but I will not allow them to reset it on uh the first login just for demonstration purposes. Now over here,
we have the chance to add the user straight to a group, but I'll only do that later on at the end of the
demonstration. For now, I want to show you how to attach to this user an I am policy which will give them access to
specific services. If I type in S3 full access here, this is administrative access to S3. If you take a look at the JSON document representing that IAM policy, you can see that it allows all API actions (s3:* and also s3-object-lambda:*) across all buckets, because the star in the resource field stands for all of them. It is a very broad permission, so be very careful when you assign it to one of your users; make sure they genuinely need it. So I'm not going to assign
any tags here; I'm just going to review what I've done. You can see I've attached the S3 full access permission to this one user, and I'm going to create that user. There.
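For reference, here's a rough boto3 equivalent of what we just clicked through in the console. The user name matches the demo; the password is a placeholder.

```python
import boto3

iam = boto3.client("iam")

# Create the user, give them console access with a fixed demo
# password (no forced reset), and attach the AWS-managed
# AmazonS3FullAccess policy, the one whose JSON allows
# "s3:*" and "s3-object-lambda:*" on every resource ("*").
iam.create_user(UserName="user-1")
iam.create_login_profile(
    UserName="user-1",
    Password="Demo-Password-123!",  # placeholder for the demo
    PasswordResetRequired=False,
)
iam.attach_user_policy(
    UserName="user-1",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)
```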
Of course, now is the time for me to download the CSV file containing those credentials; if I don't do it now, I won't get a second chance. I could also send an email to this user with a link to the console and some instructions, but I won't bother with that; I'll just close. And now I have my user-1. You can see it says None under multi-factor authentication, so I haven't set anything up for them yet. That's exactly what I want to do for the second user I'll create, called user-2-MFA. All right. So
now I want to do exactly the same thing as before: assign a custom password and go through the same steps, attaching an existing managed policy that AWS has already vetted for us. Review. Create. Right now, the two users are exactly the same, so I have to go back to user-2-MFA and set up multi-factor authentication. To do that, you go to the Security credentials tab, where you'll see Assigned MFA device; we haven't assigned one yet.
So we're going to manage that right now. We have the option to order an actual physical device that's uniquely made for performing multi-factor authentication. However, we're going to opt for the virtual MFA device, which is software we install on our cell phones. I can actually open this link to show you the many options we have for that, right over here. I'm going to go with Google Authenticator: go to the app store on your Android device and download it. Of course, that's if you have an Android device; if not, there are other options for iPhone and corresponding apps for each platform. I'll let you do that. Once you've installed that software on your cell phone, you can come here and show the QR code. Now,
what that's going to do is require you to go to your phone. I'm going to mine right now as we speak, opening up the MFA software, in this case Google Authenticator. There's a little plus sign at the bottom; I tap Scan QR code and point my phone at this QR code here, and it picks it up. I tap Add account, and it gives me a code, which I'm going to enter right now. Once I've entered the code, I have to wait for a second one, because the code on my phone's screen expires; it's only there for about 20 to 30 seconds. Once that token expires, the display refreshes with another token. So I have another token now, good for only a few seconds again. And now I'm going to click Assign MFA. What this does
is complete the link between my personal cell phone and this user. Now that this has been set up, we can see we actually have an ARN for this device, which we might need depending on how we write our policies, though in our case I'm going to show you a way to bypass having to reference it.
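If you wanted to script the MFA setup instead of using the console, the calls look roughly like this. The device name is made up, and the two codes must be consecutive codes from your authenticator app.

```python
import boto3

iam = boto3.client("iam")

# Create a virtual MFA device; the response contains the seed
# and QR code, and SerialNumber is the device ARN we just saw.
mfa = iam.create_virtual_mfa_device(
    VirtualMFADeviceName="user-2-mfa-device"  # assumed name
)
serial = mfa["VirtualMFADevice"]["SerialNumber"]

# Bind the device to the user with two consecutive codes
# generated by the authenticator after scanning the QR code.
iam.enable_mfa_device(
    UserName="user-2-MFA",
    SerialNumber=serial,
    AuthenticationCode1="123456",  # first code from the app
    AuthenticationCode2="654321",  # the very next code
)
```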
So now we've got those two users, one with an MFA device and one without. Okay. The demonstration is now going to be: how do we allow user-2-MFA access to this file in folder-one while denying user-1? The distinguishing factor is that user-2-MFA has MFA set up and user-1 doesn't, so that single condition will determine whether they can view the file or not. Let's get back to S3, actually create this IAM bucket policy, and see how it's
done. So back we are in our bucket and now we have to go to permissions in order to set up this bucket policy.
We're going to click Edit over here and then click Policy generator, which helps us build that JSON document through a user interface. Of course, we're dealing with an S3 bucket policy. Now, what we're going to do is deny all users and services, so: star. S3 is grayed out here because we selected S3 up top, and we want to deny all S3 API actions rather than selectively going down the list. At this point we're missing the Amazon Resource Name, so we have to go back and copy the ARN of our bucket. Copy that, come back, Ctrl+V, and of course specify the folder as well: I'll append /folder-one and then /* for all the objects in that folder. Okay. Now here's the MFA condition I want to add. Since we can't set the MFA condition up here, we have to click Add conditions, an optional but very powerful feature. We want the condition operator called BoolIfExists: there has to be a key-value pair whose value we test if the key exists. There are many keys, but we're looking for the one known as aws:MultiFactorAuthPresent, and we want to check whether its value equals false. So I'm going to add that condition. And then I'm going to
add that statement and generate the policy, just to show you how the user interface has produced this JSON document that we're going to attach to our S3 bucket. I'll take it in its entirety, copy it, and go back to our S3 bucket, which is waiting for us to paste the policy in. Okay, so let's
see what's actually going on here. We've created a policy. This policy could have any ID; it's just an identifier generated by the user interface, and you could name it whatever you want, ideally something that makes sense. Then we have one or more statements; here there's actually just one, and again its ID was auto-generated. The statement says that all actions on the Simple Storage Service (all, because of the star) are denied on this specific resource, which is our bucket, the folder named folder-one, and all the objects within that folder, under a certain condition. The condition is: if the key aws:MultiFactorAuthPresent exists and has a boolean value of false, you don't have access to the objects in this folder. Which means that if you do have MFA set up, you will be allowed access to the contents of this folder.
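Here's roughly what the generated policy looks like, expressed as a boto3 call. I've substituted a valid-style bucket name, since the demo bucket's real name may differ.

```python
import json
import boto3

# Deny every S3 action on folder-one/* for any principal whose
# request was not authenticated with MFA. BoolIfExists matches
# when aws:MultiFactorAuthPresent is absent or "false".
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyFolderOneWithoutMFA",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::simplylearn-s3-iam-demo/folder-one/*",
        "Condition": {
            "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
        },
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(
    Bucket="simplylearn-s3-iam-demo",  # assumed bucket name
    Policy=json.dumps(policy),
)
```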
So if we assign this policy, applying it to all principals, which encompasses user-1 and user-2-MFA, then user-1 will not be able to see the contents of our object inside folder-one, and user-2-MFA will. Let me just apply it. Now let's go take a look: we'll log out first, then log back in as each individual user and prove that this is actually the case. So, I am now logging back in as
the user-1 we created moments ago. And of course, if I go to a service like EC2, we don't have access to EC2; we've only given ourselves access to S3, so this makes total sense. Let's go see what we do have access to, which is S3. We should be able to list and describe all the buckets, because we've been given essentially administrative access to S3. Now, if we go to our bucket, into folder-one, and try to actually open this document, we can see that it's not happening. And that's because of the condition checking whether we have a value of true for that MFA key we specified. So this is expected, and exactly what we wanted. Now we're going to log out once again and do the same thing with user-2-MFA. Let me log back
in and specify the password we created. Now, of course, I'm prompted for an MFA code, because I've set that up for user-2-MFA. So, back to my cell phone: I open the Google Authenticator application, and in front of me is a new Amazon Web Services code that is only good for, like I said, maybe 20 seconds. It's a little nerve-wracking; if you're not good under pressure, you're going to choke. So, I've entered the code and have to wait for it to actually work. Now, it looks like I forgot to uncheck the option that forces a password reset; unfortunately, I must have missed that checkbox, which I'd wanted to avoid here. So I'm going to start over and create a brand-new password. A little painful for a demonstration, but it's good for you to see how that actually looks. So, because I've set up my MFA device properly, I am able to log in
after I enter that MFA code. Once again, if I go to EC2, I haven't been given permissions for EC2, only for S3, so nothing changes here. I'm going to go to S3, into folder-one of our bucket in question, and see whether I now have access to open and view the contents. And here they are: this is a demo file which user-2-MFA should only be able to have access to. So that worked. Now you know how to create a bucket permission policy, also known as a resource policy because it's assigned to the resource, the bucket itself. Let me show it to you one last time. Here it is. It will only allow us access to the contents of folder-one if we have multi-factor authentication set up on our account. So that's really good. The
last thing I want to show you now is how to create IAM users in a better way, without having to assign the same permissions over and over again like you saw me do for user-1 and user-2-MFA. Of course, right now I can't do anything in IAM. Why? Because I'm user-2-MFA, and I don't have access to IAM. So once again I'm logging out, and we're going to see the best practice: how to assign IAM policies to a group and then add users to that group so they inherit those permission policies. Let's get to it. Let's sign out
first. So let's go back to IAM, where we're going to create our first group. We'll click the Create group button and come up with a group called testers. We have the ability to add existing users to this group, so we'll add only user-1 to the testers group for now, and we can also add permissions to the group itself; for example, the EC2 full access policy. Create group. Now, if we take a look at testers, we can see the one user we added and the permissions associated with the group. This means that any users we place in this group automatically inherit whatever permissions the group has, which is really good for maintainability.
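In API terms, the whole flow is just a few calls. Here's a rough sketch using the same names as the demo:

```python
import boto3

iam = boto3.client("iam")

# Create the group, attach the managed EC2 full-access policy,
# and add an existing user; the user inherits the group's
# permissions immediately.
iam.create_group(GroupName="testers")
iam.attach_group_policy(
    GroupName="testers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2FullAccess",
)
iam.add_user_to_group(GroupName="testers", UserName="user-1")
```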
So, for example, let's say we were to create many new testers, starting with user-3 over here. Of course, I'll go through the same steps as before, remembering this time to uncheck the password-reset option. And instead of attaching policies directly, I'm going to say I want this user to inherit the policy permissions assigned to the group; so this user will be part of the testers group. I'll skip through the remaining repetitive steps I've already described. This user-3 now already has the permissions to access EC2, gained through the testers group. Whereas if we go back to user-1, this user has the permissions from the group but also has a unique permission added to it directly. So you do have that option: adding a permission to a user directly, over and above what's assigned to the group. Adding permissions to groups is really valuable when you have lots of users in your company or organization: instead of going to each user and adding permission after permission, you can manage everything centrally in one place. So, for example, if I go back to my
groups, to my testers group: let's say I forgot to add a permission for all the users in this group. Say there were 20 users attached to it; I'm not going to bore you by creating 20 users, as the two we have are enough for the demonstration. And let's say I now need to add another permission, for example full access to DynamoDB. I add that permission, and if I then go back to my users, user-3 for example, I'll see that user-3 has automatically inherited it, and so has user-1. There they are. So groups really allow you to manage a lot of users easily, because you can go to one place, the group, add and remove policies as you see fit, and the whole team inherits the change. It saves you lots of work, and
that is the best practice. So there you have it. You have learned in this demonstration how to create users, how to create groups, and how to add users to groups and manage their policies according to best practice. You've also learned how to attach a policy to an S3 bucket that permits access based on whether a user has multi-factor authentication set up, and we showed how to set that up when creating an IAM user. Again, the bucket's Permissions tab is where this all happens. I hope you enjoyed that; put it to good use, and I'll see you in the next demonstration. So, I hope you enjoyed that demonstration. We're just going to wrap things up with a summary of what was covered throughout the tutorial. We first started with what AWS security is, and we looked at how important it is to maintain best practices across our organization and to standardize those practices across accounts. We looked at the top security services, and then concentrated on IAM: what exactly IAM is in terms of IAM users, groups, and long-term credentials governed by IAM policies, plus the concept of a role. We looked at the benefits of IAM and the terminology used when working with it; a principal, of course, is an entity, either a user or a service itself, that can gain access to AWS resources. We saw the different types of authentication and what we can do to implement authorization via IAM policies and through the use of roles. We looked at how to organize our users into groups and, through a demonstration, what actually goes into acquiring a role: how to create a role and attach it to an EC2 instance. And then we looked at the high-level features of IAM, which let us grant access to an IAM user from another group, implement multi-factor authentication like we just did in the demonstration, or assure ourselves that we're following all the compliance standards required by whatever project we're working on, with AWS there to support them. So here we're going to talk about Amazon
ECS, a service that's used to manage Docker containers. So without any further ado, let's get started. In this
session, we'd like to cover some basics about AWS, then immediately dive into why Amazon ECS exists and what Amazon ECS is in general. ECS builds on a technology called Docker, so we're going to understand what Docker is. There are also competing services available; ECS is not the one and only service for managing Docker containers, so we'll talk about why you'd choose ECS and what its advantages are. Then we'll cover the architecture of ECS: how it functions, what components are present in it, and what each component does, along with how it all connects together. We'll also discuss the companies using ECS, what challenges they faced, and how ECS helped fix them. And finally, we have a wonderful lab that shows how to deploy Docker containers on Amazon ECS. So let's talk about what AWS is. Amazon Web Services, AWS for short, is a web service in the
cloud that provides a variety of services such as compute power, database storage, content delivery, and many other resources, so you can scale and grow your business. Rather than focusing on your IT needs, you can focus on your business and let Amazon take care of scaling your IT. What can you do with AWS? With AWS, you can create and deploy any application in the cloud; not just deploy, but also create. It has all the tools and services required: the same ones you would have installed on your laptop or on-premises desktop machine for your development environment can be installed and used from the cloud. So you can use the cloud for creating, and you can use that same cloud for deploying and making your application available to your end users. Those end users could be internal, they could be on the internet, or they could be spread all around the world; it doesn't matter. And as you might have guessed, AWS provides its services over the internet. That's how your users worldwide are able to use the service you create and deploy, and it's also how you access those services yourself; it's like an extension of your data center onto the internet. It provides compute services over the internet. It provides database services over the internet, meaning you can securely access your database through the internet, and a lot more. And the best part is that this is pay-as-you-go: you pay only for what you use. There is no long-term or upfront commitment here; most of the services don't require any commitment at all. There's no overpaying and no buying in advance. You only pay for exactly what you use. Let's talk
about what ECS is. ECS is a service that manages Docker containers; it's not a product or a feature all by itself, but a service built around Docker containers. Before Docker containers, all applications ran on a VM, a host, or a physical machine, and that approach is memory-bound and latency-bound, the server might have issues, and so on. Let's say this is Alice, trying to access her application running somewhere on premises, and the application isn't working. What could be the reason? Memory is full; the server is currently down; we don't have another physical server on which to launch the application; lots of other reasons. So there are many reasons an application might not work on premises, the common ones being memory-full and server-down issues: in other words, a single point of failure and little or no high availability, to put it correctly. With ECS, the services can breathe free; they can run seamlessly. How is that possible? We'll discuss that in the upcoming sections. Because containers are managed by ECS, applications can run in a highly available mode, meaning that if something goes wrong, another container gets spun up and your application runs in that container, so there's very little chance of your application going down. That's not possible with a physical host, and it's barely possible with a VM; at the very least, it takes time for another VM to spin up. So why ECS, or what is ECS? Amazon ECS maintains the availability of the application and
allows every user to scale containers when necessary. It not only meets the availability requirement of the application, meaning that a container hosting your application should be running all the time (availability is making sure your service is running 24/7, and containers make sure your services run 24/7), it also handles sudden increases in demand. Let's say you have a thousand users, and suddenly the next week there are two thousand. How do you meet that demand? Containers make it very easy. With a VM or a physical host, you would literally have to buy another physical host, or add more RAM, more memory, more CPU power, or cluster two or three hosts together; you'd be doing a lot of work to achieve that availability and meet that demand. But ECS automatically scales the number of containers needed and meets your demand at that particular moment. So what is Amazon ECS? The full form of ECS is Elastic Container
Service. It's basically a container management service that can quickly launch, stop, and manage Docker containers on a cluster. So what's the function of ECS? It helps us quickly launch, stop, and manage Docker containers; it's a management service for the Docker containers you'll be running in the AWS environment. In addition, it helps schedule the placement of containers across your cluster. It's like this: you have two physical hosts joined together as a cluster, and ECS helps place your containers. Where should a container be placed, host one or host two? That logic is defined in ECS. We can define it ourselves, or let ECS take control and define it; in most cases, you will be defining it. For example, say two containers need to interact heavily. You really don't want to place them on two different hosts; you'd want them on one single host so they can talk to each other. That placement logic is defined by us. As for launching these container services, you can launch containers using the AWS Management Console,
that logic is defined by us. And these container services you can launch containers using AWS management console
and also you can launch containers using SDK kids available from Amazon. You can launch through a Java program. You can
launch container using an net program. You can launch container using an NodeJS program as in when the situation
demands. So there are multiple ways you can launch containers through management console and also programmatically. And
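As a rough illustration of the programmatic route, here's how launching a container might look with Python's boto3; the cluster and task definition names are made up for this sketch.

```python
import boto3

ecs = boto3.client("ecs")

# Launch one copy of an already-registered task definition on
# an existing cluster; both names below are placeholders.
response = ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="webapp:1",
    count=1,
    launchType="EC2",
)
print(response["tasks"][0]["taskArn"])
```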
And ECS also helps migrate applications to the cloud without changing the code. Anytime you think of migration, the first thing that comes to mind is: what will that environment be like? Based on that, I'll have to alter my code. What's the IP? What storage is being used? What are the different parameters? I'd have to bake the new environment's parameters into my application. With containers, that worry is taken away, because we can create an exact environment: the one you had on premises gets recreated in the cloud. So, no worries about changing application parameters, and no worries about changing the code. You can say: if a container ran on my laptop, it's definitely going to run in the cloud as well, because I'm going to use the same container on the laptop and in the cloud. In fact, you're going to ship it: move the container from your laptop to Amazon ECS and make it run there. The very same image, the very same container that was running on your laptop, will be running in the cloud or production environment. So what is Docker? We know that ECS helps to
quickly launch, stop, and manage Docker containers, but what is Docker? Let's answer that question. Docker is a tool that automates the deployment of an application as a lightweight container so that the application can work efficiently in different environments. This is pretty much what we discussed just before this slide: I can build an application on my laptop, or on premises, in a Docker container environment. Then, anytime I want to migrate, I don't have to rewrite the code and rerun it in the new environment. I can simply create a Docker image, move that image to the production or new cloud environment, and launch it there. No compiling again, no relaunching the application: simply pack all your code into a Docker container image, ship it to the new environment, and launch the container there. That's all.
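In code, that workflow looks roughly like this with Docker's Python SDK; the image tag and Dockerfile location are assumptions for the sketch.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Build an image from the Dockerfile in the current directory,
# then run it; the very same image can later be pushed to a
# registry and run, unchanged, on Amazon ECS.
image, _ = client.images.build(path=".", tag="myapp:1.0")
container = client.containers.run("myapp:1.0", detach=True)
print(container.id)
```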
So a Docker container is a lightweight package of software that contains all the dependencies: when packing, you pack the code, the framework, and the libraries required to run the application. In the new environment, you can be pretty sure, practically guaranteed, that it's going to run, because it's the very same code, the very same framework, and the very same libraries you shipped. There's nothing new in that environment; the very same thing runs in the container, so you can rest assured it will run there. And Docker containers are highly
scalable and very efficient. Suppose you suddenly wanted 20 more Docker containers to run the application. Think of adding 20 more hosts or 20 more VMs: how much time would that take? Compared to that, the time Docker containers need to scale to 20 more is minimal, even negligible. So it's a highly scalable and very efficient service; you can suddenly scale the number of Docker containers to meet any additional demand, with a very short boot-up time, because a container doesn't load a whole operating system. Docker containers use the Linux kernel and kernel features like cgroups and namespaces to segregate processes, so they can run independently in any environment, and they take very little time to boot up. The data stored in containers is also reusable. You can have an external data volume and map it to the container, and whatever data the container puts in that volume is reusable: you can simply remap it to another application, to the next container, or to the next version of the application you'll be launching, without having to rebuild the data from scratch. Whatever data the previous container was using is available for the next container as well, so the volumes containers use are very reusable. And, like I said, containers give you isolated applications. By its very nature, by the way it's designed and created, Docker isolates one container from another, meaning that anytime you run applications in different containers, you can rest assured they are very much isolated. Though they're running on the same host, the same laptop, or the same physical machine, say with 10 containers running 10 different applications, you can be sure those applications are well isolated from one another. Now let's talk about the
advantages of ECS. The first advantage is improved security; security is built into ECS. With ECS, we have something called a container registry, which is where all your images are stored, and those images are accessed only through HTTPS. Not only that, the images are actually encrypted, and access to them is allowed or denied through Identity and Access Management (IAM) policies. In other words, with two containers running on the same instance, one container can have access to S3 while the rest are denied access to S3. That kind of granular security can be achieved with containers when we mix and match the other security products available in Amazon, like IAM, encryption, and HTTPS access. These containers are also very
cost-efficient. Like I've already said, these are lightweight processes, and we can schedule multiple containers on the same node, which lets us achieve high density on an EC2 instance. An underutilized EC2 instance is unlikely with containers, because you can pack an EC2 instance densely with more containers in it and make the best use of its resources. Running EC2 the straightforward way, you can launch just one application, but when we use containers, you can launch some 10 different applications on the same EC2 server, meaning 10 different applications can feed on the available resources. And ECS not only deploys containers, it also maintains their state and makes sure the minimum required set of containers is always running. That's another cost-efficient aspect of using it: anytime an application fails, that has a direct impact on the company's revenue, and ECS makes sure you're not losing revenue because your application failed. ECS is also an
extensible service. In many organizations, the majority of unplanned work comes from environment variation: a lot of firefighting happens when we move or redeploy code to a new environment. Docker containers are pretty extensible in this respect. As we've discussed, the environment is not a concern for containers, because the application shuts itself inside a Docker container, and anywhere that container can run, the application will run exactly the way it performed in the past. In addition to that, ECS is easily scalable, which we've discussed already, and it has improved compatibility, which we've also covered. Let's talk about the architecture of ECS. As you know by now, the architecture of ECS includes the ECS
cluster itself, a group of servers running the ECS service, and it integrates with Docker. So we have a Docker registry, a repository where we store all the Docker images, or container images. In other words, ECS involves three components. One is the ECS cluster itself; when I say ECS itself, I'm referring to the ECS cluster, the cluster of servers that will run the containers. Then there's the repository where the images will be stored, and finally the image itself. A container image is the template of instructions used to create a container: what's the OS, what version of Node should be running, do we need any additional software? Those questions get answered here; it's the template of instructions used to create the containers.
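In ECS, that template of instructions is expressed as a task definition. Here's a minimal sketch; the family, image, and sizes are all made-up examples.

```python
import boto3

ecs = boto3.client("ecs")

# Register a minimal task definition: which image to run
# (base OS plus Node version), how much CPU and memory,
# and which port the container listens on.
ecs.register_task_definition(
    family="webapp",
    containerDefinitions=[{
        "name": "web",
        "image": "node:18",  # assumed base image
        "cpu": 256,
        "memory": 512,
        "portMappings": [{"containerPort": 3000}],
        "essential": True,
    }],
)
```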
Then the registry is the service where the Docker images are stored and shared. Many people can store images there and many can access them: another group that wants an image can pull it from there; one person can store an image and the rest of the team can access it; or the team can store images and one person can pick them up and ship them to the customer or to the production environment. All of that is possible in a container registry. Amazon's version of the container registry is ECR, and there are third-party options too; Docker itself has a container registry, Docker Hub. And then there's ECS itself, the group of servers that runs those containers. The first two components, the container image and the container registry, handle Docker purely in image form, while ECS is where the container comes alive, becomes a compute resource, and starts handling requests, serving pages, and running batch jobs, whatever your plan is for that container. So the
