Unlocking Azure DevOps with the AZ-400 Certification: A Comprehensive Guide
Heads up!
This summary and transcript were automatically generated using AI with the Free YouTube Transcript Summary Tool by LunaNotes.
Introduction
In the evolving landscape of IT, Azure DevOps has emerged as one of the key services for building and delivering high-quality applications. With the AZ-400 certification tailored for Azure DevOps Engineers, individuals are empowered to design and implement DevOps practices essential for continuous integration, continuous delivery, and reliable application performance. This guide consolidates vital information and topics covered in the AZ-400 certification course, provided by Andrew Brown, a renowned cloud instructor.
Understanding the AZ-400 Certification
The Azure DevOps Engineer Expert certification, denoted by the AZ-400 exam code, is ideal for IT professionals looking to solidify their skills and establish themselves in the cloud industry. It primarily focuses on:
- Designing and Implementing Processes: Understanding GitHub workflows and Azure boards.
- Source Control Strategies: Implementing branching strategies, pull request workflows, and more.
- Building and Delivering Pipelines: Creating robust CI/CD pipelines.
- Package Management Strategies: Managing code packages effectively using Azure tools.
Who Should Pursue This Certification?
This certification is designed for:
- IT Professionals, including software developers and systems administrators, who want to harness Azure DevOps capabilities.
- Senior DevOps Engineers who need to refresh their knowledge or require a strong formal credential to validate their expertise.
Certification Roadmap
To pass the AZ-400 exam, standard study paths typically include:
- Azure Fundamentals: A foundational course to understand core Azure services.
- Azure Developer Associate: This course enhances your skills for designing and building Azure applications.
- Azure DevOps Engineer Expert (AZ-400): The culmination of your training, focusing on Microsoft DevOps practices.
Study Time Recommendations
- Beginners without Azure Experience: About 50 hours of dedicated study.
- Intermediate to Advanced Users: Approximately 15-25 hours focusing on practical application and hands-on labs.
Exam Preparation Strategies
Hands-On Labs
One of the most effective ways to prepare for the AZ-400 certification is through practical lab exercises. Create your own Azure account to perform tasks such as:
- Setting up CI/CD pipelines
- Implementing source control strategies
- Managing packages and deployments
Case Studies in Exam Simulations
Understanding real-world scenarios through case studies is critical for passing the exam. Ensure that you are familiar with:
- Workflow process management using GitHub and Azure Boards.
- Strategies to ensure code security and compliance.
Practice Exams
Utilize practice exams and simulators that mimic the style and question formats of the actual exam. Focus on areas with high weighting, such as:
- Designing and Implementing Build and Release Pipelines – roughly 50–55% of the exam weight, by far the largest domain.
- The remaining domains (processes and communications, source control strategy, security and compliance, and instrumentation) each carry a smaller share of the weight.
Navigating the Exam
Taking the Exam
The AZ-400 exam can be taken at a local test center or online. You'll need to:
- Register on the official Microsoft exam website.
- Choose between an in-person or online examination format, and follow the identification and examination guidelines.
Scoring and Results
- The exam is scored on a scale of 1 to 1000, with a passing score of approximately 700 out of 1000.
- Expect a mix of multiple-choice, multiple-answer, drag-and-drop, and yes/no question formats, along with practical scenarios.
Certification Validity
The certification is valid for one year; you can renew it for free at any time during the six months before it expires.
Conclusion
Acquiring the AZ-400 certification in Azure DevOps positions you competitively in the IT job market. With practical hands-on experience, robust study paths, and familiarity with Azure tools, candidates can unlock their potential in DevOps practices within the Azure ecosystem. As you continue on your certification journey, leverage this guide to consolidate your learning, prepare thoroughly, and ensure success in obtaining your Azure DevOps Engineer Expert certification. Happy studying!
Hey, this is Andrew Brown, your favorite cloud instructor, bringing you another free cloud certification course, and this time it's the AZ-400. This is specifically for the Azure DevOps Engineer, and we're making this available on freeCodeCamp as always. The way we're going to get the certification is by doing labs in our own Azure account, lecture content, and, as always, we provide you a free practice exam. I want to tell you that our exam simulator has case studies, which is the most important component when we're looking at these expert certifications with Azure. If you want to support more free courses like this one, the best way to do that is to purchase the additional study materials over on exampro.co. That's where you get the cheat sheets and additional practice exams; the content is layered, and again, it helps produce these courses. If you don't know me, I've taught a lot of courses here: AWS, lots of Azure, GCP, Kubernetes, Terraform — you name it, I've taught it. So you're in good hands, and I will see you soon.

Okay, hey, this is Andrew Brown. I just wanted to tell you that in this video course I am utilizing my synthetic voice. A synthetic voice is when you utilize software that emulates your voice. The reason I utilize a synthetic voice comes down to a couple of things: it's for when the real Andrew, not the synthetic-voice Andrew, has lost his voice. This happens to me because I have muscle tension dysphonia, so if I use my voice a lot, aggressively, I can lose my voice, and I have to be careful when I'm recording a considerable amount of content. Right now, when this video is being made, I am recording a lot of AWS content, so I've asked my support team to just generate out my words and stitch the video together. The reason for that is that I don't want my content to go stale; when I create content, it has to get shipped, whether my voice is ready or not. This is the case for the AZ-400, otherwise this course would just go stale and you wouldn't get it for six months to a year. That's the trade-off when I'm a single content creator trying to get all this content out. I just want to point out that the content is made by me; it's just utilizing a synthetic voice, so it's not like it's somebody else doing it.

Hey, this is Andrew Brown from ExamPro, and we'll be going over an introduction to the AZ-400 certification. The Azure DevOps Engineer Expert is an expert-level Microsoft certification. For the prerequisites, you must earn at least one of the following: the Microsoft Certified Azure Administrator Associate or the Microsoft Certified Azure Developer Associate. The key topics covered in this course are: design and implement processes and communications, such as GitHub flow and Azure Boards; design and implement traceability and flow of work; configure collaboration and communication; design and implement a source control strategy, such as branching strategies and pull request workflows; design and implement build and release pipelines; design and implement a package management strategy, like GitHub Packages; develop a security and compliance plan; and implement an instrumentation strategy, like Azure Monitor and Log Analytics.

So who is this certification for? The certification is designed for individuals who are interested in learning how to design and implement DevOps practices for continuous integration, continuous delivery, and infrastructure as code. You may consider this certification if: you are new to DevOps and want to learn the fundamentals and benefits of DevOps practices; you are a software developer, systems administrator, or IT professional; you want to understand the capabilities of Azure DevOps and GitHub, including building pipelines, implementing source control strategies, and managing security and compliance; or you are a senior DevOps engineer, or in a related role, who needs to reset or refresh your knowledge after working for multiple years.

So what's the Azure DevOps Engineer Expert roadmap like? The most common route people take to reach the DevOps Engineer Expert is to start at the Azure Fundamentals. It's not mandatory, but it helps build a solid foundation. Then you take the Azure Developer Associate, for designing, building, and testing Azure applications, and eventually take the Azure DevOps Engineer Expert. Another common path is to take the Azure Administrator Associate and then the Azure Solutions Architect. You can also take the Azure Solutions Architect after the DevOps Engineer Expert to further enhance your Microsoft Azure skills and widen your career prospects. Other popular associate-level certifications include the Azure AI Engineer, Azure Database Administrator, the Azure Security Engineer, and many more. So that's a general outlook on the roadmap to the Azure DevOps Engineer Expert.

How long should you study to pass? For beginners — if you've never used Microsoft Azure or any cloud provider, have no prior experience with DevOps practices, or have no tech background or experience — you're looking at over 50 hours. You shouldn't take this exam if you're a beginner; you'll need to pass the prerequisites and build a solid foundation first. If you're experienced with Microsoft Azure or other cloud providers, have experience with DevOps practices and tools, and have a strong background in technology, you're looking at about 15 hours. The average study time is about 25 hours. You should dedicate around 50% of the time to lectures and labs and 50% of the time to practice exams. We recommend studying around 1 to 2 hours a day for 20 days.

What does it take to pass the exam? Watch the video lectures and memorize key information, do hands-on labs and follow along in your own account, do paid online practice exams that simulate the real exam, and sign up and redeem your free practice exam.

Exam guide content outline: the exam has a total of five domains, and each domain has its own weighting. This determines how many questions from a domain will show up. The skills measured are: design and implement processes and communications; design and implement a source control strategy; design and implement build and release pipelines, which makes up 50 to 55% of the exam; develop a security and compliance plan; and implement an instrumentation strategy.

Where do you take the exam? You can take the exam at an in-person test center or online from the convenience of your own home, through Certiport or Pearson VUE. A proctor is a supervisor who monitors you during the exam. Out of 1,000, you need to get around 70% to pass; Microsoft uses scaled scoring. There are about 50 to 55 questions, so you can afford to get roughly 12 to 14 questions wrong, and there is no penalty for wrong answers. The question formats are multiple choice, multiple answer, drag and drop, and yes/no. Keep in mind that there's usually one lab, with about eight questions, that you do in the Azure portal, and the exam is open book — but you can only access the Microsoft documentation as the resource. The exam duration is 2 hours, so you get about 2 minutes per question. Exam time is 120 minutes; seat time is 150 minutes. Seat time refers to the amount of time you should allocate for the exam: it includes time to review instructions, show the online proctor your workspace, read and accept the NDA, complete the exam, and provide feedback at the end. The certification is valid for one year, and you can renew the certification for free within the six months before the expiration date. So that's an introduction to the Azure DevOps Engineer Expert.
Quick overview of the exam guide: you can find the exam guide by searching for "study guide for exam AZ-400" on Google. As we scroll down, it shows the five domains covered, broken down into more sections. I won't be able to go through all of it, so I'll just go through some of the key topics that I think you should focus on for the exam: design and implement a structure for the flow of work, including GitHub flow; design and implement integration for tracking work, including GitHub Projects, Azure Boards, and repositories (you need to know the flow-of-work metrics, such as cycle time, time to recovery, and lead time); configure release documentation, including release notes and API documentation; design and implement a strategy for managing large files, including Git Large File Storage (LFS) and Git artifacts; design and implement quality and release gates, including security and governance; select a deployment automation solution, including GitHub Actions and Azure Pipelines; design a deployment strategy and testing; implement feature flags by using Azure App Configuration Feature Manager; design and implement desired state configuration for environments, including Azure Automation State Configuration, Azure Resource Manager, Bicep, and Azure Automanage Machine Configuration; implement and manage GitHub authentication, including GitHub Apps, GITHUB_TOKEN, and personal access tokens; implement and manage secrets, keys, and certificates by using Azure Key Vault; automate container scanning, including scanning container images and configuring an action to run CodeQL tools; configure collection of telemetry by using Application Insights, VM Insights, Container Insights, Storage Insights, and Network Insights; inspect distributed tracing by using Application Insights; and interrogate logs using basic Kusto Query Language (KQL) queries. So that's a quick overview of the exam guide for the AZ-400.
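One of the topics listed above, feature flags, is easy to illustrate. The sketch below is a minimal in-process flag store with a percentage rollout, just to show the concept; it does not use the real Azure App Configuration Feature Manager API (in practice you would use Microsoft's feature management libraries), and all names in it are hypothetical.

```python
import hashlib

class FeatureFlags:
    """Toy feature-flag store: on/off flags plus percentage rollouts."""

    def __init__(self):
        self._flags = {}  # flag name -> rollout percentage (0-100)

    def set_flag(self, name: str, percentage: int) -> None:
        self._flags[name] = percentage

    def is_enabled(self, name: str, user_id: str) -> bool:
        pct = self._flags.get(name, 0)
        if pct >= 100:
            return True
        if pct <= 0:
            return False
        # Hash flag+user so each user gets a stable yes/no per flag,
        # rather than flipping on every request.
        digest = hashlib.sha256(f"{name}:{user_id}".encode()).digest()
        bucket = digest[0] % 100  # deterministic bucket in 0-99
        return bucket < pct

flags = FeatureFlags()
flags.set_flag("new-login", 100)   # fully on
flags.set_flag("beta-search", 0)   # fully off
print(flags.is_enabled("new-login", "cindy"))    # True
print(flags.is_enabled("beta-search", "cindy"))  # False
```

The deterministic hashing is the important design choice: a user who sees a flag on keeps seeing it on as the rollout percentage grows.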
Most important question first: what is DevOps? DevOps is an approach that brings together software development and IT operations, with the goal of enhancing the speed and reliability of software delivery. It focuses on continuous improvement, automation, and collaboration between teams that were once siloed, aiming to shorten the time from development to operation. The process includes frequent code versions, which allows for incremental improvements to applications and systems. The ultimate goal of DevOps is to create a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.

So why DevOps? DevOps eliminates the inefficiencies, miscommunications, and delays that arise from the traditional gap between development and operations teams. It creates a collaborative culture that accelerates and improves software delivery. Some of the key challenges addressed by DevOps include: communication and collaboration gaps — it enhances communication and collaboration, reducing misunderstandings and accelerating the release process; conflicting goals — it aligns the goals of dev and ops teams towards quick, reliable, and high-quality software delivery; and manual processes and bottlenecks — it advocates for automation to decrease manual effort, errors, and delays, and to streamline processes. Automation leads to fewer errors, shorter deployment times, and improved software quality.

So what's the role of a DevOps engineer? A DevOps engineer facilitates this collaboration and automation, focusing on: continuous integration and continuous delivery — establishing pipelines that automate code integration, testing, and deployment, ensuring rapid, reliable software releases; infrastructure as code — managing and provisioning infrastructure through code to increase efficiency and consistency; monitoring and operations — implementing monitoring solutions to track application and infrastructure performance, ensuring high availability and reliability; and the transition to cloud infrastructure — many organizations are moving to cloud infrastructure such as AWS, Google Cloud, or Azure to cut costs and improve manageability, which offers intuitive tools for network and security settings but necessitates knowledge of platform-specific features.

Some of the tools and technologies used in DevOps are: version control, such as Git, essential for managing code changes and facilitating team collaboration; agile and lean techniques, for planning, sprint isolation, and capacity management; containerization, such as Docker, which enables scalable deployments with lightweight containers that are faster and simpler to configure than traditional virtual machines; orchestration, like Kubernetes, which efficiently manages containerized applications at scale; CI/CD tools, such as Jenkins and GitLab CI, which automate the software delivery process from code integration to deployment; IaC tools, like Terraform and Ansible, which automate the provisioning and management of infrastructure; monitoring and logging, such as Prometheus, which provides insights into application performance and operational health; and public and hybrid cloud, which streamline operations by offering scalable infrastructure with IaaS for seamless app migration and platform as a service to enhance productivity through sophisticated tools.

Some examples of DevOps technologies across the different DevOps stages, mainly related to Microsoft Azure, include: for planning, Azure Boards, GitHub, and Atlassian Jira; for continuous integration, Azure Repos, GitHub repos, SonarQube, Selenium, OWASP tools, NuGet, and npm; for continuous delivery, Azure Pipelines, GitHub Actions, Bicep, Terraform, Jenkins, Red Hat Ansible, Chef, and Puppet; for operations, Azure Monitor, Azure Automation, and Microsoft Power BI; and for collaboration and feedback, Azure DevOps wikis, GitHub wikis, GitHub Discussions, Microsoft Teams, and Slack. Overall, DevOps revolutionizes IT by merging development and operations, enhancing delivery speed, and fostering a culture of rapid, continuous innovation.

The next topic we'll be covering is the differences between DevOps and traditional IT. In terms of time, DevOps teams spend one-third more time improving systems to avoid tech issues than traditional IT. Less time is needed for administrative tasks because DevOps uses more automated tools and helpful scripts; this saved time allows for a 33% increase in time spent enhancing their tech infrastructure. They also have 15% more time for learning and training, boosting their skills. For speed and data, DevOps groups are typically small and adaptable, driven by creativity and speed. One of the main goals of DevOps is agility, aiming for swift completion of tasks. Traditional IT operations typically have less feedback data, focusing only on the immediate task, and often have to handle unexpected downstream issues they didn't see coming. Cloud DevOps is more effective in delivering business applications due to its quick pace; traditional IT must strive to keep up with the rapid changes and demands of the business world. Regarding recuperation and crunch time, DevOps teams focus on readiness for failures and have strategies like ongoing testing and real-time alerts. These strategies mean they can address issues quickly and keep systems running smoothly. Traditional IT may need more time to recover from setbacks because they might not have these proactive measures in place; fast recovery in DevOps is often helped by automated systems and flexible infrastructure setups. For software distribution, DevOps teams take roughly 37 minutes to deploy software, while traditional IT operations typically need about 85 minutes for the same task.

Next, we'll quickly go over a few key aspects where DevOps has an advantage over traditional IT: product reliability — reduced likelihood of failure; adaptability — enhanced flexibility and support; market responsiveness — decreased time to market; team productivity — greater efficiency in teams; and vision clarity — a more defined product vision within teams.

So, agile and agile development. Agile is a philosophy in software development that emphasizes incremental progress, collaboration, and flexibility. It revolves around the idea of breaking down large projects into smaller, manageable sections called iterations or sprints. Teams work in these short bursts to produce tangible results regularly, allowing for frequent reassessment and adjustment. This approach enables a quick response to change and promotes continuous improvement, both in the product and in the process used to create it. The term agile methodology refers to the specific frameworks and practices that embody the agile philosophy, such as Scrum and Kanban. These methodologies provide the structure and tools for teams to execute agile principles effectively. They include techniques for planning and tracking progress, such as standup meetings, sprints, and visual boards, all designed to enhance team coordination and project transparency.

Agile development encompasses various methods that follow the Agile Manifesto's core ideas. It's about teams working together, managing themselves, and using the practices that best suit their project's needs to gradually improve their software. In agile development, teams aim to produce fully working and high-quality parts of the software at the end of every sprint. This means they must write code, test it, and make sure everything is of good quality within each sprint's short time frame. The key success factors for agile development teams include diligent backlog refinement, integrating early and often, and minimizing technical debt. Diligent backlog refinement means organizing the list of upcoming work, prioritizing the most important tasks, and clarifying them; product owners are key in preparing for future sprints by providing clear goals. Integrating early and often: by using continuous integration and continuous delivery, teams automate their workflows, which speeds up coding, testing, and deployment and helps catch and fix problems early. Minimizing technical debt: just like unwanted financial debt, technical debt happens when taking shortcuts that may later require code fixes; it's important to find a good mix of adding new features and fixing these issues, which needs careful planning and discipline. So that's an overview of agile development.

Hey, this is Andrew Brown from ExamPro, and in this section we'll be going over two popular agile frameworks, or methodologies, called Scrum and Kanban. Scrum is an agile framework designed for managing complex projects by breaking them down into small, manageable tasks completed in short phases called sprints. The key roles in Scrum include: a product owner, who guides what and why the team builds and prioritizes the work backlog; a Scrum master, who facilitates Scrum processes, supports team improvement, and removes obstacles; and a development team, which engineers the product and ensures its quality. In Scrum, a team self-manages its sprint tasks, with daily standup meetings to ensure progress and address impediments. They track work using a task board and a sprint burndown chart, and at the sprint's end they showcase their increment in a review and identify improvements in a retrospective. Scrum's short, repeatable cycles facilitate continuous learning and adaptation, making it a practical framework for teams adopting agile principles.

On the other hand, Kanban is an agile methodology focused on visualizing work, limiting work in progress, and maximizing efficiency. Kanban boards are used to display work at various stages of the process, using cards to represent tasks and their stages, highlighting work in progress and facilitating team flexibility. Cumulative flow diagrams (CFDs) visually track a project's workflow over time, showing task distribution across stages: the horizontal axis represents time and the vertical axis represents task volume, with each color marking a different work stage. CFDs highlight trends, progress, and bottlenecks — parallel colored areas indicate a balanced workflow, while bulges suggest a bottleneck needing attention for smooth project continuation.

Let's go over a quick comparison between Scrum and Kanban. While both fit broadly under the umbrella of agile development, Scrum and Kanban are quite different: Scrum focuses on fixed-length sprints, while Kanban is a continuous-flow model; Scrum has defined roles, while Kanban doesn't define any team roles; and Scrum uses velocity as a key metric, while Kanban uses cycle time. Teams often blend Scrum and Kanban features to optimize their workflow, continuously refining their approach to fit their needs.
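The cumulative flow diagram described above is, at its core, just a count of items per stage per day. The sketch below computes those counts from a small, hypothetical log of state-change events (the data and stage names are made up for illustration):

```python
from datetime import date

# Hypothetical state-change log: (work item id, day, new stage).
events = [
    (1, date(2024, 1, 1), "To Do"),
    (2, date(2024, 1, 1), "To Do"),
    (1, date(2024, 1, 2), "Doing"),
    (2, date(2024, 1, 3), "Doing"),
    (1, date(2024, 1, 4), "Done"),
]

def cfd_counts(events, days, stages):
    """For each day, count how many items currently sit in each stage."""
    rows = []
    for day in days:
        latest = {}  # item id -> most recent stage on or before `day`
        for item, when, stage in sorted(events, key=lambda e: e[1]):
            if when <= day:
                latest[item] = stage
        counts = {s: 0 for s in stages}
        for stage in latest.values():
            counts[stage] += 1
        rows.append((day, counts))
    return rows

days = [date(2024, 1, d) for d in (1, 2, 3, 4)]
for day, counts in cfd_counts(events, days, ["To Do", "Doing", "Done"]):
    print(day, counts)
```

Plotting these per-day counts as stacked areas gives the CFD; a stage whose band keeps widening is the bulge (bottleneck) the transcript mentions.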
The next topic we'll be covering is some of the key flow metrics you'll need to know for DevOps processes and for the exam, starting with velocity. Velocity in Azure DevOps is a metric that tracks the amount of work a team completes during a sprint, helping teams estimate how much work they can handle in future sprints. It's represented in a chart that visualizes work items completed over several sprints, offering insights into the team's work patterns, efficiency, and consistency. By analyzing velocity, teams can adjust their planning for better predictability and productivity. Consistent velocity metrics can help in identifying the impact of process changes and in guiding strategic decisions to enhance overall team performance.

Next we have the sprint burndown chart. The sprint burndown is a graph that plots the daily total of remaining work, typically shown in hours. The burndown chart provides a visual way of showing whether the team is on track to complete all the work by the end of the sprint. It also helps in identifying any bottlenecks or issues in the workflow that may need attention before the sprint's end.

Moving on to lead time and cycle time. The lead time and cycle time widgets indicate how long it takes for work to flow through your development pipeline. Lead time measures the total time elapsed from the creation of work items to their completion. Cycle time measures the time it takes for your team to complete work items once they begin actively working on them. The following diagram illustrates how lead time differs from cycle time: lead time is calculated from work item creation to entering a completed state, while cycle time is calculated from first entering an In Progress or Resolved state category to entering a completed state category. These measures help teams plan, spot variations in efficiency, and identify potential process issues. The lower the lead and cycle times, the faster your team's throughput. So these are the key flow metrics for the exam.
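Following the definitions above, lead time and cycle time can be computed directly from a work item's timestamps. A minimal sketch with hypothetical dates:

```python
from datetime import datetime

def lead_and_cycle_time(created, started, completed):
    """Lead time: creation -> completion. Cycle time: first active -> completion."""
    lead = completed - created
    cycle = completed - started
    return lead, cycle

created = datetime(2024, 1, 1)    # work item created
started = datetime(2024, 1, 5)    # first entered "In Progress"
completed = datetime(2024, 1, 9)  # entered "Done"

lead, cycle = lead_and_cycle_time(created, started, completed)
print("lead time:", lead.days, "days")    # 8 days
print("cycle time:", cycle.days, "days")  # 4 days
```

Cycle time is always a subset of lead time; the gap between the two (here, 4 days) is how long the item sat in the backlog before anyone picked it up.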
Hey, this is Andrew Brown from ExamPro, and in this section we'll be covering Azure Boards. Azure Boards is a web-based service designed for planning, tracking, and discussing work throughout the development process, supporting agile methodologies for a customizable and efficient workflow.

Key hubs in Azure Boards: Azure Boards includes several key hubs, each serving distinct project management needs. The Work Items hub manages work items based on specific criteria. The Boards hub visualizes workflow using cards, ideal for Kanban. The Backlogs hub plans and organizes work items, including backlogs for project and portfolio management. The Sprints hub handles sprint-specific work items, incorporating Scrum practices. The Queries hub generates custom work item lists and performs bulk updates. The Delivery Plans hub tracks cross-team deliverables and dependencies in a calendar view. The Analytics Views hub creates Power BI reports for detailed project analysis.

Benefits of Azure Boards include: scalable simplicity — easy to start with predefined work item types, and scalable for growing teams; visual tools — visualize progress with Kanban boards, Scrum boards, and delivery plans; customization — configure boards, task boards, and plans, including custom fields; built-in communication — capture real-time communication and decisions within work item forms; cloud storage — support for rich text, inline images, attachments, and comprehensive change history; efficient search and notifications — tools for quick work item searching and customizable alerts; dashboards and analytics — access to dashboards and the Analytics service for reporting; integration and support — GitHub and Office integration connects with GitHub repositories and supports import/export with Microsoft Office; and autonomous team support — tailored to independent teams, it integrates with Microsoft Teams and Slack and offers a variety of marketplace extensions. So that's an overview of Azure Boards.

The next topic we'll cover is traceability. Traceability allows tracking connections and dependencies among different parts of a software system. It helps teams grasp the effects of changes, handle risks, and comply with regulations. Defining and managing requirements: a key part of traceability is documenting and overseeing requirements effectively. Azure DevOps has tools like Azure Boards for handling requirements and tracking their progress; linking requirements to related items, like tasks or bugs, clarifies each requirement's progress and its influence on the project. Version control and change management: for traceability, a solid version control system to monitor modifications to code and files is essential. Azure DevOps's Git repositories let developers manage their work efficiently; by using branches for features or releases, you can track changes and understand their role in the project's bigger picture. Build and release management: traceability must include build and release processes. Azure Pipelines facilitates building, testing, and deploying apps, linking build artifacts. Test management and quality assurance: for software quality, traceability is crucial. Tools like Azure Test Plans support detailed test management; linking test cases to requirements or user stories shows how well the testing process covers the initial needs, ensuring thorough validation. Auditing and compliance: traceability also supports meeting standards and regulations. Azure DevOps's auditing features track and log changes, providing details on who changed what and when, supporting accountability and regulatory compliance. Overall, by setting up a clear traceability system, organizations can make sure that any changes during the software development process are traceable.
exampro and in this section we'll be going through how to get started with Azure devops and some of the basics of
azure boards so the first thing you want to do is search for Azure devops on Google then you want to click on the
link that leads you to the Azure devops page which is used the first link on this page you want to click on the TR
for free button I'm assuming everyone already has a Microsoft account or Microsoft Azure account already set up
otherwise you wouldn't be taking the a Z400 level expert certification if not you should create one before clicking
password and enter in the authentication code if you have one now you'll want to sign up and
Canada I'll name the organization something like exam Pro one you can name this whatever you
to do is to create a project so we'll name this something like exam Pro test of course you can name this whatever you
exam Pro test project on Azure devops so here you can see the overview so we'll quickly go through
artifacts we'll be going through most of these in the course so that's how to get started with
exam Pro and in this section we'll be covering how to create or add new users in your Azure devops
organization the first thing you want to do is to go to organization settings after that you want to click on policies
under the security category under the user policies you want to toggle and turn on external
guest policies this will allow you to invite users from outside the organization to access and collaborate
on your Azure devops projects and resources after that you want to click on users under the general category on
the right side you want to click on ADD users here is where you can add new users or service principles so for
example We'll add Cindy at exam Pro . Co we'll keep the access level to basic we'll want to add the user to the exam
Pro test project we created earlier we can also set a role for the user such as project readers project
contributors or project administrators but we'll leave it at project contributor for now then click on ADD
after a short wait the user should be added to the organization the user is sent an invitation to join to org and
they'll have to accept to join We'll add another user this time it'll be Peter exampro
doco we can keep the access level at basic add the user to the exam Pro test project and this time we'll assign the
user the project administrator's role then click on add another thing you can do is add
members to a specific project so from the projects tab you can click on the project exam Pro
and we'll search for peter@exampro.co click on the user and then click on the save button
below and there we go the user is now added to the exam Pro test project team so that's a general overview of how you can add users to your organization and projects
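the same user management can also be scripted with the Azure DevOps CLI. This is a rough sketch, not something shown in the course: it assumes the azure-devops CLI extension is installed, you are signed in with az login, and the organization URL matches the hypothetical one created earlier (in the CLI the Basic access level is called the express license type).

```shell
# add a user with Basic ("express") access to the organization (assumed org URL)
az devops user add \
  --email-id cindy@exampro.co \
  --license-type express \
  --org https://dev.azure.com/exampro1
```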
the next thing we'll cover is how to create work items so first you'll need to be at the boards Tab and then you'll need to click on work items
on the top right here we'll click on new work item we have three options here there's epic issue and task epic is
simply a large body of work that can be broken down into smaller more manageable pieces of work these pieces are also known as user stories so we'll click on Epic as an example now we'll have to fill out some fields to define the work item so
starting with the title we'll call it something like test new login feature right below it we can assign people to
the item this can be one or many but we'll select only one for this example so let's choose Andrew Brown for the
state we'll leave it at to-do for the area it's already set at exam Pro test the iteration is set to exam Pro test
it's about for this example we can write something simple like conduct a series of tests on the new login features
for the priority we can adjust the importance of the work item one being highest priority and four is the lowest
can just use the current date as of this recording for the tags they already have some suggestions for us so we'll use
testing login feature and security which matches the item we don't really need to set the
link for this example so we'll click on the top right and hit save after that we can head back to the work
items page and we should see the work item we just created with all the information we provided for it such as
boards this is an easier way to visually view the items so we have three columns that work items can be placed in to do
doing and done which are all pretty self-explanatory on the top right here we can filter to epics or
issues and we can drag and drop the work item from to-do to doing and eventually we can place it in done when the item is complete so that's a general overview of how to create a work item in Azure DevOps
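work items can also be created from the command line. A minimal sketch, assuming the azure-devops CLI extension is installed, you are signed in, and a default organization is configured with az devops configure; the project name and assignee below are the hypothetical ones from this walkthrough.

```shell
# create an Epic work item in the project and assign it
az boards work-item create \
  --title "Test new login feature" \
  --type "Epic" \
  --project "exam Pro test" \
  --assigned-to "andrew@exampro.co"
```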
next we'll quickly go over how to create a Sprint first on this page we have three work item examples that were created
beforehand and we'll want to click on the Sprint tab on the board section here we don't have any Sprints created yet so
we'll need to create a new Sprint by clicking on the top right we'll need to give the Sprint a name so let's just
call it Sprint one and we'll need to identify a start and end date for the Sprint so we'll start it on Monday April
15th 2024 and end the Sprint on Monday April 22nd 2024 so that's one week in length then click on
create next we can click on the schedule work button from your product backlog or create new work
items on the right we have our Sprint one and we can drag and drop the work items to include them in a Sprint so
let's drag some of the work items created earlier into Sprint one we can also create new Sprints by
clicking on the new Sprint button below Sprint one so for this Sprint we can call this Sprint two and for the date
two so if you click into Sprint one you can see that we added two of the work items on the backlog from
left under the board section click on Project configuration and as an example we'll delete Sprint 2 so we'll click on
the three dots next to Sprint two and keeping it at exam Pro test is fine then we'll click on
delete so after that we can go back to the backlogs or Sprints and we can simply add the other
backlog capacity and analytics we'll quickly go to capacity we can assign days off activity and capacity per day
we'll just assign an activity for both of the users such as deployment and design and for the analytics tab there's
the burndown trend that shows a visual graph of data such as the amount of work that has been completed in a project
versus the total amount of work that was planned so that's pretty much a quick overview of how to create a Sprint
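under the hood Sprints are iterations, and they can be managed from the CLI as well. A sketch under the same assumptions as before (azure-devops extension, signed in, hypothetical project name and dates):

```shell
# create a one-week sprint (iteration) and then list the project's iterations
az boards iteration project create \
  --name "Sprint 1" \
  --start-date 2024-04-15 \
  --finish-date 2024-04-22 \
  --project "exam Pro test"
az boards iteration project list --project "exam Pro test"
```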
next we'll cover how to connect Azure boards to GitHub first you want to click on the project you're working on then on the bottom you want
to click on Project settings and under the board section click on GitHub connections after that under the connect
GitHub with Azure boards click on connect your GitHub account next thing you'll need to do is to log into your
GitHub account so sign in to your account using your username or email address and password and click on sign in
after confirming the information click on authorize Azure boards now you'll need to select a
GitHub repository that you may want to use with your Azure boards click save after choosing
that then after confirming all of this information click on approve install and authorize you can choose to add more
repositories or remove them you can remove connections as well you can also add new GitHub connections as well
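once the connection is active, the practical payoff is commit and pull request linking: mentioning a work item as AB#&lt;id&gt; in a commit message links that commit to the work item on the board. A small sketch in a throwaway local repo (AB#12 is a hypothetical work item id; use a real one from your board):

```shell
# With the Azure Boards GitHub connection active, a commit message that
# mentions AB#<work item id> gets linked to that work item automatically.
demo_dir=$(mktemp -d)
cd "$demo_dir"
git init -q .
git config user.email demo@example.com
git config user.name "Demo User"
echo "fix" > login.txt
git add login.txt
# AB#12 is a hypothetical id -- replace with a real work item id
git commit -q -m "Fix login timeout AB#12"
git log -1 --pretty=%s
```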
next we'll be going through an overview of custom Azure boards dashboards centralize with custom dashboards custom dashboards in Azure boards are crucial for presenting a comprehensive overview of your project status and key metrics by tailoring these dashboards to highlight crucial data your team can streamline workflows and improve decision-making customize
with widgets widgets are the heart of azure boards dashboards presenting diverse data from progress charts to
work item queries select and tailor widgets that best display the team's critical information ensuring essential
insights are readily accessible monitor backlogs with query widgets incorporate query widgets to filter and display work
items based on defined criteria like outstanding tasks per team member this enables efficient task management and
helps in setting clear priorities track progress with burndown charts use burndown chart widgets to graphically
track project progress helping to identify any delays regular review of these charts keeps the team's progress
aligned with project goals visualize performance with charts enrich your dashboard with charts that convey
performance metrics such as bug Trends or team velocity providing a clear picture of the team's Dynamics and
highlighting areas for improvement enhanced team engagement share dashboards with your team and
stakeholders to offer a live view of the project status foster a culture of transparency and Collective
accountability the image on the left shows an example of a dashboard customized to the way of the devops team
or stakeholders this shows information such as the velocity Sprint burndown backlogs completed and active work items
and so on so that's an overview of custom Azure boards dashboards the next topic we'll be
covering is Wiki for documentation Wiki offers a collaborative space for team members to compile and share crucial
details about a devops project here's a simple guide to leveraging wikis for Effective project documentation start
with an overview page Begin by setting up an overview page this should introduce the project its goals and the
team working on it mention the Technologies tools and methods your project employs keeping it broad but
informative detail project requirements dedicate pages to outline the Project's requirements break down what the project
needs to do and how it should perform using clear and achievable language add user stories acceptance criteria meaning what needs to be true for the project to be considered complete and any dependencies between elements that rely on each other architecture and design
documentation use the wiki to detail the project structure and design make a separate page for each part whether it's
a component a larger section or a service to help visualize how these parts interact include diagrams like uml
or system architecture sketches encourage team input get your team involved in the documentation process
allowing everyone to edit and update Wiki pages not only promotes teamwork but also helps keep the information
needed so let's take a quick look at where this is in Azure devops so at the overview section we'll need to click on
Wiki and we already created an example Wiki with proper documentation so this is what it'll pretty much look like
using markdown we can click on the edit button on the top right this will allow you to edit the wiki to your
fitting you can also create more than one Wiki if you want so we can name it like example Wiki
2 so that's an overview and guide to using Wiki for documentation
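the kind of overview page described above might look like this in wiki markdown; every name and detail here is made up purely for illustration:

```markdown
# ExamPro Test -- Project Overview

## Goals
Ship a reliable login feature with automated tests.

## Team
- Andrew Brown (project administrator)
- Cindy (contributor)

## Technologies
- Azure Boards for work tracking
- Azure Pipelines for CI/CD

## Requirements
See [Requirements](./Requirements) for user stories and acceptance criteria.
```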
the next thing we'll be covering are process diagrams for documentation process diagrams are visual guides that show the steps in a
process making it easier to see how everything connects especially in devops projects here's a simplified guide on
using them effectively pinpoint essential processes first identify the main processes in your devops project
such as managing source code integrating changes continuously testing automatically deploying and monitoring
break down these processes into smaller parts make flowcharts or BPMN diagrams which stands for business process model and notation use software like Microsoft Visio draw.io or Lucidchart to create diagrams that map out the process these
diagrams should clearly show where the process starts and ends include decision-making points and outline the
steps in order these visual tools are effective for mapping out the workflow making complex processes easier to
understand and follow on the right we have a process diagram or flowchart that outlines the customer support procedure
it begins with a ticket submission followed by case assignment if during business hours the support team responds otherwise an on-call technician is alerted unassigned tickets prompt reminders while assigned ones are prioritized for review and resolution by the support team the process cycles until issues are resolved culminating in
ticket closure and a follow-up email detail what goes in and comes out for every step in your process note down
what you need to start which are the inputs and what you expect to get out of it which are the outputs this might be
Code test results deployment packages or anything else relevant it's important to show how each step is linked to the next
clarify who does what make sure your diagrams indicate who is responsible for each step this removes confusion and
makes sure everyone knows their responsibilities so that's a quick overview of process diagrams for documentation
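Azure DevOps wikis can render diagrams inline, so a flowchart like the support example above can live right next to the docs. A sketch using the wiki's ::: mermaid fenced syntax (node labels are illustrative):

```
::: mermaid
graph TD
  A[Ticket submitted] --> B[Assign case]
  B --> C{During business hours?}
  C -->|yes| D[Support team responds]
  C -->|no| E[On-call technician alerted]
  D --> F{Issue resolved?}
  E --> F
  F -->|no| B
  F -->|yes| G[Close ticket and send follow-up email]
:::
```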
the next topic we'll be covering is configuring release documentation release documentation is a
Cornerstone for the successful deployment of software releases within Azure Dev Ops focusing on the non-code
aspects that Define the scope quality and functionality of the release here are the key elements of release
documentation release notes these should highlight what's new what issues have been resolved and any enhancements made
as well as outline any modifications to settings and their effects on existing features installation guides provide
clear detailed instructions for the setup process including a list of required software and system
prerequisites and post installation actions configuration changes document updates to configuration settings
clarifying any default settings and essential changes change log keep an accurate record of commits or work items
in the release using a consistent tracking method rollback plan have a clear predefined plan for reverting to
an earlier software version if necessary creating release documentation in Azure Dev Ops Azure repos store your
marked out or text files alongside your code Version Control your documentation for consistency and traceability Azure
pipelines automate the generation of change logs and other documentation during the build and release processes
artifacts attach generated documentation to specific builds or releases as downloadable artifacts Wiki utilize the
built-in Wiki to share detailed guides and notes with the team and stakeholders on the right we have an example of a
release notes entry in Azure devops which displays all of the key elements shown earlier this includes the new features enhancements configuration changes known issues and rollback plan so that's an overview of configuring release documentation
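a release notes page covering those key elements might look like the following in markdown; the version numbers, features, and settings are all hypothetical placeholders:

```markdown
# Release 1.4.0 -- 2024-04-22

## New features
- Passwordless sign-in via email link

## Enhancements
- Login page loads noticeably faster

## Configuration changes
- `SESSION_TIMEOUT` default raised from 15 to 30 minutes

## Known issues
- MFA prompt may appear twice on some browsers

## Rollback plan
Redeploy release 1.3.2 from the pipeline's retained artifacts.
```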
the next topic is API documentation properly configured API documentation is essential for developers and stakeholders in
understanding and interacting with software interfaces this guide highlights the key steps and best
practices for creating and managing API documentation in Microsoft DevOps solutions steps to generate API documentation generate documentation utilize Visual Studio to generate API documentation access this feature via the build menu use tools like Swagger Azure API Management or OpenAPI for automatic documentation generation from
your codebase documenting endpoints clearly Define and describe each API endpoint detailing the purpose and
functionality include information on request and response formats as well as any authentication requirements
selecting formats and styles decide on your output format and style ensuring it's readable and accessible for your
target audience integration and automation integrate documentation generation into your continuous
integration and deployment pipelines within Azure Dev Ops on the right we have an example of an API
documentation this API documentation details two endpoints for version 1.2.3 of a service the first endpoint is POST /api/login which authenticates users and returns a token upon successful login it requires a username and password in the request body the second endpoint is GET /api/users which retrieves a list of users both endpoints provide example responses indicating successful operations with a 200 OK status best practices for API
documentation easy to follow Clarity ensure that descriptions are clear and concise avoiding ambiguity Version
Control manage your API documentation within Azure repos for versioning and historical tracking regular updates keep
the documentation current with every release deprecating outdated information promptly feedback mechanisms include a process
for developers and users to provide feedback on the documentation for continuous Improvement
by focusing on these elements your API documentation will be an invaluable resource for your team and stakeholders
supporting the effective use and integration of your software's API so that's an overview of API documentation
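the two endpoints in that example could be described in an OpenAPI document, which tools like Swagger UI can render into browsable docs. A minimal hypothetical sketch (service name and schemas are assumptions):

```yaml
openapi: 3.0.3
info:
  title: Example Service   # hypothetical service name
  version: 1.2.3
paths:
  /api/login:
    post:
      summary: Authenticate a user and return a token
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [username, password]
              properties:
                username: { type: string }
                password: { type: string }
      responses:
        "200":
          description: Successful login, returns a token
  /api/users:
    get:
      summary: Retrieve a list of users
      responses:
        "200":
          description: Successful operation, returns the user list
```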
next with the rise of DevOps and git's stronghold in version control the manual slog of updating docs has given way to automation now developers can create dynamic documentation straight from their git history here's a guide on how to automate documentation using Azure DevOps solutions and its Azure pipelines feature the prerequisites a git
repository hosted on platforms like GitHub or Azure repos an Azure devops account connected to this repository
automating documentation with Azure pipelines step one set up your pipeline in Azure Dev Ops select pipelines from
the project menu and click new pipeline pick your code repository's platform and the repository itself choose the main branch as the source for your docs tailor your pipeline settings pick the right agent and decide when this pipeline should generate the docs step two build the code insert a build task into your pipeline to compile your code this can be .NET Core
node.js Python and many more fine-tune this task to match your project this might mean different commands or scripts
depending on what you're building confirm a successful build before moving on step three generate the documentation
post build select a tool like DocFX tailored for .NET projects to parse your git history into documentation add a new task in your pipeline for DocFX set this up with the correct paths and configurations and let it craft your
docs step four publish your work once your documentation is ready pick a spot to publish it this could be Azure blob
storage an FTP server or Azure pipeline's own artifact storage add a publishing task to the pipeline and
configure it with the necessary details deploy this task and see your documentation go live step five make it automatic to really put your feet up configure triggers in Azure pipelines to run your documentation job on
autopilot you can set these to activate on new commits merges or even on a schedule once set your documentation
updates as your code does no extra input needed so this is a simplified overview for automating git history documentation
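putting those steps together, a pipeline definition might look roughly like this. This is an illustrative sketch, not the course's pipeline: it assumes DocFX is installed as a .NET global tool, a docs/docfx.json config exists in the repo, and the generated site lands in docs/_site.

```yaml
# illustrative azure-pipelines.yml -- paths and tool choices are assumptions
trigger:
  branches:
    include:
      - main          # step five: run automatically on new commits to main

pool:
  vmImage: ubuntu-latest

steps:
  # step two: build the code and confirm a successful build
  - script: dotnet build --configuration Release
    displayName: Build

  # step three: generate the documentation with DocFX
  - script: |
      dotnet tool install -g docfx
      docfx docs/docfx.json
    displayName: Generate docs

  # step four: publish the generated site as a pipeline artifact
  - task: PublishPipelineArtifact@1
    inputs:
      targetPath: docs/_site
      artifactName: docs
```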
in this section we'll be going over what are webhooks webhooks are user-defined HTTP callbacks triggered by
specific events like code pushes or comments on a blog when an event occurs The Source site makes an HTTP request to
a configured URL this allows for automated actions such as data transfer notifications or initiating other
workflows how webhooks work event occurs a specific event triggers the webhook this event could be an update a deletion or some activity like a user action or system event http request the source site makes an HTTP request to the webhook's URL this request can be a POST which is the most common GET or any other HTTP method depending on what was configured action taken the server that receives the webhook does something with the information like updating a database
notifying users or initiating other workflows some of the common uses of webhooks include automating workflows webhooks can automatically update a testing server deploy applications or update a backup notifications they can notify
other systems or services in real time when events happen for example if someone posts a comment on a Blog a
webhook could automatically tweet the comment or send an email integrations many services offer webhooks to
integrate with other services without requiring a custom interface for example PayPal uses web hooks to notify your
accounting software when you receive a payment advantages of webhooks efficiency webhooks offer a more efficient method for receiving data than continually polling a service for updates they push data as it becomes available minimizing latency and reducing the amount of bandwidth used real-time processing webhooks can
facilitate real-time data processing by triggering a reaction immediately after the event occurs so that's a quick overview of webhooks
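as a concrete sketch of the receiving side, here is how a server might verify that an incoming webhook really came from the sender, using a shared secret and an HMAC signature over the raw body. The sha256=&lt;hex&gt; header format is GitHub-style; other services authenticate differently, so treat this as a generic illustration with made-up names.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret agreed with the sending service.
SECRET = b"my-webhook-secret"

def verify_signature(body: bytes, signature_header: str) -> bool:
    """Check a GitHub-style 'sha256=<hex>' HMAC signature over the raw body."""
    expected = "sha256=" + hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_header)

def handle_event(body: bytes) -> str:
    """Take an action based on the payload -- here we just read a field."""
    payload = json.loads(body)
    return f"received event: {payload.get('eventType', 'unknown')}"

# Simulate the sender: it signs the POST body with the same secret.
body = json.dumps({"eventType": "build.complete"}).encode()
sig = "sha256=" + hmac.new(SECRET, body, hashlib.sha256).hexdigest()

if verify_signature(body, sig):
    print(handle_event(body))
```

the same verify-then-act pattern applies whatever framework actually receives the HTTP request.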
webhooks in Azure devops trigger HTTP notifications to a URL for events like code updates or build completions
facilitating integration with other systems so let's go over some of the steps to configure notifications with
web hooks select the event navigate to the project settings and then to the notifications tab as shown in the image
on the right identify the event you want to track for instance if you're interested in when a build completes you would select that event new subscription click on new subscription to create a new webhook select the specific event you
want such as build completes configure action define the action that should happen when the event occurs this
customize what information you send along with the webhook Azure devops allows you to send specific data related to the event authentication if needed if your endpoint requires authentication you will need to configure the appropriate
headers or payload with authentication tokens or Keys test the subscription once configured it's crucial to test the
webhook to ensure it works as expected Azure devops typically allows you to test it through the interface monitor and adjust after setting up monitor the notifications and ensure they're firing correctly you might need to troubleshoot or adjust settings if you're not receiving the notifications as expected so that's a quick overview of configuring notifications with webhooks
hey this is Andrew Brown and we are taking a look at Version Control Systems which are designed to track
changes or revisions to code and there's been a lot of software over the years that helped us do that we had CVS Subversion Mercurial and git so back uh in the 1990s when we got CVS though even though we had it I don't
think a lot of companies were using it it took some time to adopt if you ever heard of like Doom or Wolfenstein you'd
be uh interested to learn they didn't use Version Control Systems and what they would do is they would literally
copy files onto floppies and hope that they don't lose their files but of course a Version Control Systems makes
it really easy to not worry about losing floppies or CDs or drives because they keep track of all the history then came
Subversion in 2000 but the real game changer was in 2005 when we were introduced to a new type of version control system and we had Mercurial and git um but the key difference between the old ones and the new ones was the old ones were
centralized and the new ones were decentralized and these decentralized ones became very popular for very
specific reasons they had full local history and complete control of the repo locally they were straightforward and
efficient for branching and merging which was a really big deal uh better performance improved fault tolerance
flexible workflows work fully offline um and out of the two git was the one that won and there are reasons for that we'll
talk about that when we look at version control services um but uh yeah git is the one that everybody is using today
and that's why we are taking this course I just want to point out you're going to come across a lot of terms that sound
like trees tree trunk branches um the reason for this is that Version Control represents um uh the revisions or
changes in a graph-like structure you can even say a dag um if you're familiar with that and so uh you know you'll see
these terms and we're not talking about real trees we're talking about uh the components of a Version Control so there
taking a look at git so git is a distributed Version Control System a DVCS that's going to be hard to remember and it's created by Linus Torvalds if you've ever seen that name before you might know that Linus is the creator of the Linux kernel but he is also the creator of git and git right now resides with the Linux Foundation which I believe is a nonprofit set up by Linus as well or has some part to do with it where a lot of open-source projects
reside um but you know I don't really want to focus on that I want to focus on the practicalities of git so the idea
with Git is that each change of your code a git commit can be captured and tracked through the history of your
project a git tree so I'm going to get my pen tool out here for just a second and so I just want to make this very
clear so we have over here a file and a git commit or a commit can be made up of multiple files with multiple changes in them and then they're represented over with a message okay so here this is a single git commit and it can have multiple files and changes in that single one and then that's your tree okay so
hopefully that is clear if it's not don't worry we'll get Hands-On skills and we'll definitely be able to remember
them later I want to take a look at a bunch of common git terms um and it doesn't matter if you remember these now
but you will know what they are hopefully by the end of this course um and so there is this nice graphic here
that is provided by Wikipedia that gives an idea of how all of these terms uh work together um but let's go quickly
through them and see what we can make sense of so the first term is a repository this represents the logical
container holding the codebase in fact you you could interchange the word codebase repository and mostly mean the
same thing we have a Commit This represents a change of data in the local repository and so um that's pretty clear
then we have the tree this represents the entire history of a of a repo so when you see tree just think of that
graph we have remote uh this is a version of your project hosted elsewhere used for exchanging commits uh some people might be a bit uh picky about this because they might say remote is actually a remote reference to a repository so it's pointing it's a pointer but I'm just going to think of it as a remote uh repo it's
just somewhere else and there uh there are branches so these are divergent paths of development allowing isolated changes
you're absolutely going to know what branches are you're absolutely going to have to work with them quite a bit um
there is a branch known as main it was formerly known as Master uh the word was changed because it was not a popular
term anymore and so now main is the new name uh and this is usually the default Branch or the base
Branch uh if that makes sense there too so we have clone this creates a complete local copy of a repository including its
history so this will create like a little .git folder um so it's not just the contents of the files but some
configuration around the git repo we have checkout so this switches between different branches or commits in your
repo we have pull so this downloads changes from a remote repository and merges them into your branch we have
push this uploads your local repository changes to a remote repository we have fetch this downloads data from a remote
repo without integrating it into your work um we have reset this undoes local changes with options to unstage or revert commits we have merge this combines multiple commit histories into one we have staging files this prepares
and organizes uh changes for commits it's not a command but like it's just where you would work with your files um
in the example here I'm just going to get my pen tool out again it's kind of over here it has to relate with this up
here as well and So within staging files we're going to have commits which we already talked about prior and then
there's that add command so adding things that will get committed so hopefully that makes sense
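the terms above map directly onto everyday commands. A quick self-contained sketch you can run in a scratch directory (file names and messages are illustrative):

```shell
# repo: create a repository (the .git folder holds the full local history)
demo=$(mktemp -d)
cd "$demo"
git init -q .
git config user.email demo@example.com
git config user.name "Demo User"

# commit: stage a change, then record it in the tree
echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"

# branch: an isolated line of development
git checkout -q -b feature
echo "v2" > app.txt
git commit -q -am "feature change"

# merge: switch back to the default branch and combine the histories
git checkout -q -
git merge -q feature
git log --oneline
```

push, pull, and fetch behave the same way once a remote is configured, exchanging these commits with a hosted copy of the repo.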
next we're taking a look at version control services and if you're thinking that we already covered this it looks that way
but the other one was Version Control Systems this one is version control services and yes they have the same
initialism which is confusing but it's very important to make that distinction because those are two separate things so
version control services are fully managed cloud services that host your version controlled repositories these
Services often have additional functionality going Beyond just being a remote host for your repos git is the most popular and often the only choice for a VCS and we often call these git-only providers um I
need to also point out that some people call version control services Version Control Systems and vice versa and it
just gets really confusing so I did my best to make that clear distinction between the two okay let's take a look
at some vcs's so the first here is GitHub and it's owned by Microsoft it's the most popular VCS uh due to offering
uh due to its ease of use offering and being around the longest at least for git um and they've always been very
developer focused and super friendly uh GitHub is primarily where open source projects are hosted and offer Rich
functionalities such as issue tracking automation pipelines and a host of other features I remember the day GitHub came
out and I signed up for it because I was so done with using subversion then came along gitlab so gitlab was an emerging
competitor to GitHub and at the time had unique features such as CI/CD pipelines and improved security measures this is
no longer the case as GitHub is now on par with gitlab um but yeah at one point a lot of people were looking at gitlab
then there's Bitbucket this one is owned by Atlassian you might have heard of Atlassian before because they are uh the same company that makes Jira and Jira is the most commonly used project management tool uh for um people in tech so you know
even though GitHub is really great for developers a lot of companies still use bitbucket and the interesting thing
about Bitbucket was that they originally hosted Mercurial so remember I said back in 2005 Mercurial and git came out well Atlassian adopted Mercurial GitHub adopted um git and git won and GitHub won and so what's really interesting is that Bitbucket then eventually added git and then sunsetted Mercurial so everything basically is git now there is another
provider called sourceforge they're one of the oldest places to host your source code they existed before GitHub um and
they were the first uh to provide free of charge git repository hosting to open-source projects um the only
thing about Source Forge is that they never really dominated because they just had so many ads and bad practices and so
it just didn't work out for them they are still around and a lot of Open Source projects like to only host there
they might mirror make a copy to other providers like GitHub um but for the most part everybody's on GitHub um but
a look at GitHub and this is a version control service that initially offered hosted managed remote git repos and has expanded to provide other offerings around hosted codebases if you go look up what GitHub calls
themselves today they call themselves like an AI-powered developer platform um it's really bizarre because they are basically a host for uh for git repos with extra stuff on top of it but I guess since AI is so popular they got to
try right but let's take a look at all the functionality that they have so we have get repository hosting that is
their main bread and butter we have project management tools issue tracking pull requests and code reviews GitHub
pages and wikis GitHub actions GitHub co-pilot GitHub code spaces GitHub Marketplace GitHub gists GitHub
discussions collaboration features for organizations and teams API access development so GitHub development uh they have a GitHub CLI they have SDKs um we have security features like auto-detecting credentials in repos
they have education specific things or course automation like GitHub classroom and I'm sure they have more uh we're
going to learn about all of these things because this is what the GitHub course is about to understand the full offering
of GitHub and to make best use of it and just a fun fact is that GitHub was originally built in Ruby on Rails Ruby
is my favorite language rails is my favorite framework so I've been uh uh on the the ride or the train since day one
okay hey this is Andrew Brown and in this follow along I want to show you how to create your own GitHub account every
single developer on the planet should have a GitHub account because it's a great way to Showcase your work uh we'll
talk about that later but you can see that I'm already logged in here so I already have a GitHub account and what
I'm going to do is log out and I'm going to create a new one from scratch so here we can see uh we can have multiple ones
um I'm going to just sign out of all of my accounts here and let's go ahead and create ourselves a new one so I'm not
sure remember I told you earlier they're like the leading AI-powered developer platform which is such a silly
term but um let's go ahead and see if we can make a new one so just in case it's the future and they've changed this
homepage I'm going to go up to sign up and I'm going to see if I can make an account if I can find an email that has
not been used so far so I'm going to type Andre exam pro. if you're using Gmail so I can't use that one if you're
using Gmail you can use like plus signs to um create multiple ones so like my really really personal email don't email
me because I don't ever check this one is like omen gmail.com you'll learn that my username on GitHub is Omen King why
it is that I don't want to talk about it it's like forever ago I made this account like so long ago and I really
wish I could have got Andrew Brown as my username but that's not what it is and so I'm just going to go here and say alt
okay so this is a trick with Gmail that you can do you can put a plus alt on it or uh maybe a minus I'm not sure but
let's go ahead and see if that works and I need to create a password so I like to generate really strong passwords
You can use whatever you want; I like to use this site, passwordgenerator.net. I should make a disclaimer: if this turns out to be insecure, don't use it, but I'm pretty sure it's fine. I'll go with a length of 24 to get a nice long password, generate a few different ones off screen, and enter one in. We'll hit Continue; it says it's strong, that's good. Then I have to
choose my username. I probably can't get Andrew Brown, so I need another name. I'm going to try Dono; not available. That's my gamer tag on Steam. What's another one I could have? We'll try Andrew Cloud; can I get that one? No. We'll try AndrewWCBrown; can I get that? There we go. WC is my middle initials; it doesn't stand for water closet, okay, I know it looks like that. We'll hit Continue, and Continue again, and now I need to do this verification: "Please solve this puzzle so we know you're a real person." Verify. "Use the arrows to rotate the object to face in the direction of the hand." So I think I
created the account, and so now I need to open up that email; just give me a moment. All right, I've been waiting a few minutes and I haven't seen anything, and I've resent the code, so maybe it doesn't like emails that have that plus in there; it's totally possible. So I might actually have to create a new email, which is quite the headache, as I've run out of emails here, unless I can think of another one. You know what, I have another idea: I'm going to
use a different one here. It looks like it's unverified; can I change this? I'm going to try, because a privacy email will be used for account-related stuff, and I'm not really sure how that behaves. What I'm going to do is just add another one; maybe andrew at teacherseat.com. Okay, let me think about this for a second. All right, another thing I'm going to try is accounts at teacherseat.com; that's another one I might be able to use, and that should be my primary now, right? It really wants to send it to the first one. Oh, it looks like we have both. We have this one saying "please verify"; I'm going to get rid of the one that's not working and try accounts at teacherseat.com, which is in my Outlook, so I'll go take a look there and see if I get it. All right, this is working out totally fine: over here we have the confirmation email, so I can go ahead and just verify that email. We can also just grab this link; I kind of prefer using the link, because that gives me a bit of a guarantee. We'll go here, and now we are in. This is exciting; it's
been a long time since I've made a new GitHub account, so I'm not sure exactly what to expect, but it looks like we have some places we can start: start a new project, collaborate with your team, learn how to use GitHub. Hey, I'm already doing that; you don't need to do that, GitHub. Let's skip this for now, and we should get back to our main dashboard. So we are now in, and we have an account that we can use. That's all I wanted to show you in this video, but in this
follow-along I just want to show you that I'm going to be setting up multiple accounts to quickly switch between them. If you want to take full advantage of learning how to use GitHub, you're probably going to need another account, because you're going to need somebody else to work with. I already have my primary account, and I already showed you how to make an account, so what I want you to do is make a secondary account. I know it's a pain, but go ahead and do that. What I'm going to do in this video is show you how you can log into both and switch between the two, and I'm also going to set up a repository that I'm going to put code examples in, if we happen to put any in
there. So what I'm going to do is go up to the top right corner and hit Add account, and this is going to allow me to log into my other account. I can put in my username or my email; this one is at monsterboxpro.com. That's my old company that's not been around for a long time, but I've never updated it, and the ExamPro email is on another GitHub account. I'm going to go find the password so I can log into this one; just give me a moment, I'm just trying to find it. Here it is, there's the password; I'm going to paste it in and we'll hit Sign in. Notice it says GitHub Mobile, so now it's my opportunity to show you
GitHub Mobile. What I'm doing is opening up my phone, and right away it pops up and says there's a new sign-in request, with Reject or Approve. I'm going to hit Approve, and it says the authentication request was approved. The first time I did that I had to enter a code, like two numbers, but from then on I just have to tap Approve, and it's very easy to do. So if I want to switch between accounts, we can go here and switch between them freely; that is really easy. What I want to do now is go to ExamPro; this is my other organization, and you'll learn about organizations later in the course. What I want to do is create a new repo. I can create a new one up here in green, or I can go up here; I never notice these up here, but this is another place to do that. Anyway, you're just watching, you're not doing, right now. So I'm going to hit New, and in here I'm
going to fill in the name and description; we'll say a repo containing GitHub examples, for programmatic examples. This will be public, because I want you to have access to it; I want a README, and I'm going to create that repository. This will crop up later in the course and you'll have to know where it is, so just remember it. But the key part of this video was how to log into another account and be able to switch between them.
Ciao. All right, so what I want to do in this follow-along is set up a GitHub organization, and the reason we want to do this is that it's going to make things easier when we get to that section. I'm going to go to the top right corner. I can make this in either account, but I'm going to make it in the alternate account, because I already have enough organizations in my main GitHub account. I want to go over here to Organizations, and here it says you are not a member of an organization. We could turn this account into an organization (I don't want to do that), or we can make a new org, so I'm going to make a new org. Notice right away it hits us with some pricing: Teams gives you the full functionality, but we just want the free tier, which might have some limitations, but it should get us started. If there are things we can't do, I'll switch over to the paid plan I have in my main account; for the most part we should be able to do pretty much everything as long as we're using a public GitHub repo. So I'm going to go ahead and enter the contact email at teacherseat.com. This organization belongs to a personal account; we could say business or institution, but then we get
asked for a little more detail, so I'm going to stick to a personal account. Then down below we need to solve our puzzle (we've seen this one before), so we'll rotate that and submit it. That's what I'm calling mine; you can put some numbers on the end if that makes it easier, like four five six seven, because these have to be unique names, just like your username. I'll hit Next, and now it says add organization members, so we can add some people. What I want to do is add Omen King, so I'll do that. Please don't add me; add your own other account, one of your two accounts. We'll hit Complete setup, and now this person has been invited. I'm not sure if they're instantly added; not yet, but yeah, here's the invitation. The invite has been sent and we'll have to go look for it, so I'm going to switch to my other
account. Now what I'll do is click up here, and maybe it's up here. Something you'll learn about GitHub is that invites are a pain and you always have to figure out where they are, so I'm just trying to think about where that could be. What we could try is typing in the organization name, GitHub Cloud Journey. I'm just going to copy this URL; I want the View organization page. I'll copy it, switch back to the other account, and enter it in. Now notice that it's showing me where the invitation is: it says Andrew Brown invited you to join this organization, View invitations. I'm just going to double check here, because a lot of times GitHub will have an invitations page; they don't. GitHub, if you're watching, make a single invitations page so we can easily find them. So: you've been invited to GitHub Cloud Journey; ask for a GitHub Copilot seat (optional). I guess this is kind of an upsell, like, hey, do you want to be able to use GitHub Copilot? But I'm going to go ahead and join this organization, and now I'm in there. There are two people in here; again, we'll come back to this later. I'm the member, and this is the owner. If your company is using GitHub, they're likely going to be using an organization, so it's good to know.
Ciao. All right, let's make sure we clearly understand the difference between Git and GitHub. I don't think it's that confusing, but it is in the study guide or exam outline, so I'm just trying to make content that they want us to know. Let's do a comparison and go through some things to make sure we understand the difference. Git is a distributed version control system, a DVCS, and GitHub is version control as a service. I've called it a version control service; it can also be called a Git provider, or you can call it version control as a service. I'm just trying to give you exposure to all the different terms for what GitHub is. For functionality: Git manages source code history, while GitHub provides cloud storage for Git repos; of course it does more than that, but that's its main functionality. For access: with Git you're working via your local system installation, or basically wherever it's
installed; the point is that you're working on it on a machine, a server, or some kind of compute, while GitHub is accessed through a web interface, because it is a cloud service. For scope: with Git we're talking about local repository management, and with GitHub we're talking about online collaboration and remote hosting, so anything that has to do with the remote side of the Git repo. For collaboration: with Git, local changes require manual sharing, while GitHub has integrated tools for collaboration like issues and pull requests and a lot of other features. For usage: you're going to be using Git primarily via the command line interface (there's definitely software out there that makes it a lot easier to use, and we'll get into that, but generally it's a command line tool), while GitHub has a graphical interface and also its own CLI tool, though most people use the web interface.
A Git repo, or repository, is your Git repo; think of your local one that you push upstream to GitHub to be hosted remotely, and GitHub gives you access to manage your repo with several features. Here is a screenshot of a GitHub repo from when I ran a boot camp in 2023, so let's talk about what is on this GitHub repo page. I don't know what they want to call this page; I just call it the GitHub repo page for a specific repo. You can view different branches, view tags, view commit history, explore repo files, view releases, see the codebase language breakdown, and view top-level markdown files; those top-level files might be the README, the license, the security policy, and other things like that. You can also perform actions from this page, or quickly get to them, such as pinning, watching, forking, starring, and cloning. So there's a lot of stuff going on on this page, and we're going to be spending a lot of time here, or going from here to somewhere else. But that is a GitHub repo, so there you
go, so that you have a general familiarity, so that when we start diving in a bit deeper we understand. I can go here and find a repository; I can search for stuff, say, if I'm looking for my boot camp, I can type in boot camp here and make my way over and find it. A lot of times when you're looking for stuff you're going to use the global search up here; you could do that as well, and I could find my own repos or other repos. There are a lot of open-source repos on GitHub, and I know Rails pretty well, so I'll type in rails. We have this rails repo; I'm going to open it up and take a look and see what we can see, because this is a very mature GitHub repo, and it will make very clear all the functionality that's happening, so
notice we have our main area; this is where all of our files are, and we can view any of these files. I can click into any of them; I can go into the Gemfile and it will show me the contents of the Gemfile. What I like about GitHub is that when you click in here you get this file explorer, and it's extremely powerful. If I click this on the right-hand side it will show me symbols; that's not a very good example, so I might open up a Ruby file to show you what I mean. I'm just looking for a Ruby file here; this will show you places you could jump to in your code, which is really nice. You can see who did what by going to Blame, so we can see exactly what somebody's doing. By the way, I'm going really quickly here; it's not important for you to remember any of this, I'm just giving you a tour, so just relax and enjoy the information we're learning here. You
don't have to write it down. If I want to find a file really quickly in this repo, I go here and type something in; maybe I'm looking for something like "view", and it drops down with this fuzzy search. If I want to find a Ruby file I could type that in; we have a lot of them here. I believe there's a hotkey: if I'm over here and I hit T, it will bring me over there. I can switch branches really easily. Does that let me add a file? It's not my repo, so if I added a file I'd have to create a fork; the search brings that up, but we'll go back over to Code. My point here is that you have all the files here and you can browse them; you can switch branches, take a look at all of your branches, take a look at all your tags, star, fork, or watch. Under Code you could launch this up in Codespaces, which is a cloud developer environment. I normally have Gitpod installed, so if I hit refresh that button might show up here; you will not have this button, I installed it because it's like Codespaces but different. On the right-hand side
you get a bunch of information about the repo, like stars, watchers, and forks; some of these are probably conditional, because I don't remember seeing them on my repos. You have releases, so maybe you're building binaries, downloadable files that people can use; some people will host all their code here and build the binaries for quick downloads, so that's somewhere else you can go. Packages is probably similar to releases; I don't think I've ever used packages in my life, so let's go over and take a look, browsing all packages; yeah, I don't know what packages do. We can see who's using it, and the contributors, the people that are writing code. We have the languages, so you can see this is mostly a Ruby repo, which makes sense because it's Ruby on Rails. Down below we get a preview of our README file, which is in the top-level directory. They have a code of conduct (I imagine that's a markdown file here), as well as the license file and the security policy. So yeah, there's a lot going on in here, and then there's all this stuff up here;
these are features on top of your repo, so lots and lots of stuff. If we want to see our commits, we can go here and click on Commits, and this is kind of like a tree; it doesn't give you the full view, because when you're looking at a tree there are branches and so on, so we're missing that information here, but the idea is you can go to different branches and look at those commits. That's basically all I wanted to do here; we'll get into it a lot deeper later on, but that is your tour of GitHub repos
and commits. Next, we're going to take a look at a series of Git components in this course. The focus isn't really Git skills but more so GitHub, but I want to make sure you still have some Git skills, so after we go through some of these things I'll do a really quick-and-dirty Git command crash course. A Git commit represents incremental changes to a codebase, represented with a Git tree (a graph) at a specific point in time. Here we can see that Git tree; this is in GitHub, so it's not the full representation. When we go into VS Code we can see a better representation of it, and there are external tools for that, but for the most part the idea is that you have Git commits over a historical period of time. Then we have our Git commit; here is one within GitHub that I clicked through and looked at. A Git commit contains additions, modifications, and deletions of files, and additions and deletions of file contents, not the whole files themselves. This is a strategy to make commits efficient, because if you were to store full copies in every single commit, your repo would get really big really fast. I'm going to repeat that again, and we'll talk about the contents
of commit files shortly. Each commit has a SHA hash that acts as an ID; it looks something like this, and we can use it to check out very specific commits, which is very useful. I want to repeat: Git does not duplicate the whole project in each commit; a commit points to a snapshot of content references, and unchanged content is shared between commits (and delta-compressed in packfiles), which greatly reduces the repo size. To the developer it will look like whole files, but that's not how it's stored in your Git tree. So what are the components of a Git commit? We talked a little bit about that, but we have the commit hash, a unique SHA-1 hash identifier for the commit (Git uses SHA-1 by default, with newer support for SHA-256). We have the author information, so we
have name and email often you have to configure git to say what email you're utilizing the name so that's attached to
the commit message U or the commit itself you have the commit message is is the description of uh the the Commit
This is we're going to be spending a lot of time when you are making commits because you want to write good commit
messages you have a timestamp so this is the date and time when the commit was made um you have the parent commit hash
this is a Shaw and hash of the commit this commit is based on I don't really understand that because I've never had
to bother with the parent one but it is something that's in there and the snapshot of the content a snapshot of
the project at the time of the commit not the actual files but references to them and the changes that are occurring
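You can see those components directly with Git's plumbing commands. A quick sketch in a throwaway repo (the identity and message are made up):

```shell
set -e
cd "$(mktemp -d)"                        # throwaway repo
git init -q
git config user.name "Example Dev"       # author name recorded in the commit
git config user.email "dev@example.com"  # author email recorded in the commit
echo "hello" > file.txt
git add file.txt
git commit -q -m "Initial commit"
git rev-parse HEAD        # the 40-character SHA-1 that identifies the commit
git cat-file commit HEAD  # raw commit object: tree, author, committer, timestamp, message
```

The first commit has no `parent` line in the `cat-file` output; every later commit lists the hash of the commit it is based on, which is exactly the parent commit hash described above.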
If you had VS Code open and we were looking at changes that we have staged (they haven't been committed yet), you can see here we have the commit message, "remove old comments"; we have the files changed that we plan on putting in the commit, and you can see some deletions there; on the right-hand side we can have additions. So hopefully it's very clear what a Git commit is. You are going to need to know the basic commands, and we will get a little practice; again, quick and dirty, this is not a full-blown Git course, but it'll be enough to get you by so that you can do the GitHub stuff. We have commands like git add, git rm, and git commit with the -m flag to add a message and -a to automatically stage all tracked changes. If you have a commit and you haven't pushed it up to your remote repo, you can amend it; you can create empty commits; you can specify the author if you need to; and you can check out a very specific commit. But yeah, that is Git commits in a nutshell.
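Those commands can be sketched in a throwaway repo like this (file names, messages, and identities are all made up):

```shell
set -e
cd "$(mktemp -d)" && git init -q
git config user.name "Example Dev" && git config user.email "dev@example.com"
echo "v1" > app.txt
git add app.txt                                  # stage a new file
git commit -q -m "Add app.txt"                   # -m supplies the message inline
echo "v2" > app.txt
git commit -q -am "Update app.txt"               # -a auto-stages tracked changes
git commit -q --amend -m "Update app.txt again"  # rewrite the last (unpushed) commit
git commit -q --allow-empty -m "Trigger CI"      # empty commit, no file changes
git rm -q app.txt                                # stage a deletion
git commit -q -m "Remove app.txt" --author="Other Dev <other@example.com>"
git log --oneline                                # four commits in total (amend replaced one)
git checkout -q HEAD~1                           # check out a specific earlier commit
```

Note that `--amend` rewrites history, which is why it's only safe on commits you haven't pushed yet, exactly as the narration says.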
Taking a look at git branch: a Git branch is a divergence of the state of the repo. There might be better descriptions than that, but that's the way I think of it; you can think of branches as copies of a point in time that have been modified to be different. What I want to do is step you through what it would look like working with Git branches. This is going to be a little messy, and it doesn't matter if you can't remember or make sense of all of it, because it will make more sense when we start working with it, but do your best to follow along. Imagine we have a Git repo, and in that repo we have a main branch; basically all Git repos have a main branch, and that's pretty much the standard name for it now. We're also going to have a production branch. The main branch is where we're going to have code that features and bug fixes get rolled up into, and when we're ready to push it out for production, it goes to the production branch and some CI/CD tool will pick it up and automatically deploy it. Let's imagine we already have a commit in the main branch; maybe there are previous
versions in the production branch, but we're not going to worry about that. You have developer A, who needs to work on a very specific feature; they're going to open up a feature branch, and in there they're going to put some commits as they work along. Meanwhile, somebody in the company has already pushed some stuff into the main branch; it's not ready to go to production, but that commit is now out there. What's important to note is that feature branch 1 is not aware of that new commit, because things are happening in an async manner in different branches, and this is the challenge with Git: you have to deal with all this async stuff, make sure you bring those changes into your branch, deal with conflicts, things like that. Now let's say we have developer B, who is working on feature branch 2, and when they started on their feature they decided to branch from this point in time. They start working on it, they get their feature done, they talk to their director of engineering, and they make a pull request; the pull request gets accepted and merged back into main. And so developer A, who's working on
feature branch 1, still doesn't have all those changes; let's remember that they're going to have to deal with that at some point. Anyway, that feature got merged into main and it looks like it's ready to go to production, so it gets merged into production, and this particular commit (let me get my pen out here) contains all of this information, all packed into here, if that makes sense. I'm just going to erase that. So that gets pushed to production and it gets tagged; a lot of CI/CD systems will trigger when a tag is applied, and that's definitely how I do it. Now, coming back to developer A: they're on feature branch 1, and main has all the stuff they need to get their branch up to date, so what they'll do is merge in the other direction; I know the diagram doesn't show a merge, but they'll merge that information into feature branch 1, so they are now up to date. They finish their feature by doing a bit of extra work, merge back into the main branch, and now their stuff is ready to roll out to production, so it gets merged into production and that gets tagged. So
hopefully that gives you an idea of this kind of workflow. This actually has a very particular name: it's called the GitHub flow. Now, there are some variations of this, so that's why I say it's very close to it; in case I'm wrong I want that buffer to say, well, I didn't say this is exactly the GitHub flow, but this is more or less the GitHub flow, where you are creating feature branches, merging them back into some other branch, and then you have a branch for production. You can have branches for all sorts of things: specific environment branches like staging, development, and production; branches specific to developers, based on their names; branches per feature; branches per bug. It's going to be based on what your team wants to do. All right, there are definitely git branch commands you
bugs it's going to be based on what your team wants to do all right uh there are definitely get Branch commands you
should absolutely know and we will do again a quick quick and dirty crash course so you are familiar with it this
is a extremely common pattern that you're going to find that you'll be doing which is is you'll be creating a
new Branch for a feature you can be adding changes you're going to be pushing it Upstream we might do this via
via's code using the git CLI we might be doing this using GitHub uh creating a branch from an issue but we'll
definitely be doing this because this is something that happens a lot um in professional um uh teams is that they're
creating feature branches so hopefully that makes sense and again if it doesn't wait till we go ahead and do it and then
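That common pattern can be sketched end-to-end with plain Git. Here a local bare repository stands in for the GitHub-hosted repo, all names are made up, and `--initial-branch` needs Git 2.28+ (on GitHub, the merge step would normally happen through a pull request rather than a local merge):

```shell
set -e
work="$(mktemp -d)"
git init -q --bare --initial-branch=main "$work/origin.git"   # stand-in for GitHub
git init -q --initial-branch=main "$work/repo" && cd "$work/repo"
git config user.name "Example Dev" && git config user.email "dev@example.com"
git remote add origin "$work/origin.git"
echo "base" > main.txt && git add . && git commit -q -m "Initial commit"
git push -q -u origin main                  # -u sets the upstream tracking branch
git switch -q -c feature/login              # create and switch to a feature branch
echo "login" > login.txt && git add . && git commit -q -m "Add login page"
git push -q -u origin feature/login         # publish the feature branch upstream
git switch -q main
git merge -q feature/login                  # merge the feature back into main
git push -q origin main
```

Using a bare repo as the remote keeps the whole sketch runnable offline while preserving the exact push/branch/merge flow you'd use against GitHub.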
Next, taking a look at Git remotes: these represent a reference to a remote location where a copy of your repo is hosted. When I say remote, or git remote, I'm saying remote reference or remote ref; you might see those terms used all over the place. You can have multiple remote entries, or remote references, for your Git repo, and the most common one you're going to see is called origin. It's almost always there; everybody seems to use it, and it indicates the central or golden repo everyone is working from, the source of truth. The remote entries or references are stored in your .git/config. We don't really talk about this .git folder in the slides, but the .git folder is how you know your project has a Git repo in it, because initializing a Git repo creates that folder; we'll look at that in the quick-and-dirty crash course. In the config file
you can see we have a remote defined. This file uses an INI-style format that looks a lot like TOML; anytime you see those square braces followed by key-value definitions, it's usually that kind of file. Let me get my pen tool out here. The idea is that we have a remote named origin, and the URL is pointing to our GitHub repo; this fetch part says how it should fetch, which I'm not going to get into right now. Down below we can see some branches that we are tracking; they point to the remote origin and say what we want to merge. So hopefully that is clear; notice remote names can be referenced, so we have origin up here and it's being referenced down there. I'm just going to clear my annotations so this is a little more clear.
And there are a bunch of git remote commands you should know. I don't remember most of them, like git remote add; I never remember that one, because often when you clone, it's going to add the remote anyway, and usually you pull repos down from GitHub. But you should know push, you should know pull, you should know fetch, and when you are creating branches you should know how to push upstream. We'll talk about upstream and downstream next.
All right, let's talk about the concepts of upstream and downstream. Imagine we have GitHub hosting our remote repository, and then we have our local developer environment or cloud developer environment; this is local, basically where we are doing our work. They both have a copy of the repo with a main branch, because we know that Git is decentralized, so we can have repos in more than one place. We might already have some commits on the remote side, but what glues these two together is the remote, and so we set up a remote tracking branch. You'll see that term when you create and push branches up, because the idea is that it's tracking: origin is pointing to main, and this is the way we track them; we saw that stored in the .git/config. When we perform a pull from our local developer environment, we call this downstream; we're pulling downstream, so a repo that pulls or clones from another repo. Just understand this is relative to the direction, or the perspective, of who's pulling: if the remote were pulling, that would be downstream for it; anytime you're pulling, it's always downstream. Now imagine we have commits we've been working on locally and we want to push those up to the remote: we would call that upstream, because that is when you are pushing changes.
GitHub flow is a lightweight workflow for multiple developers working on a single repo. There are a lot of variations on this, so this is not a technically perfect description, but there really isn't one. I want to show you a really old graphic; I don't know how old this is, but I've seen this and older ones going back to around 2008 or so. Git came out in 2005, and it took a few years to gain adoption, so for a few years we just had a mess of stuff, and then somebody came up with this branching model (this particular diagram is commonly known as Gitflow; the simpler variant we'll describe shortly is the GitHub flow). Here we have a bunch of branches, and it looks very similar to the one I showed you in the git branch section, but a little different. The big difference is that, first of all, we don't call it master anymore, we call it main; but also, my "main" was the develop branch here, and I would have called this master branch "production", because I think that makes more sense, and a lot of people do that these days. But when it first came out, this is how we were doing it, so understand this variation. The idea
is that you have this one branch (I'd call it main; here it's develop) that feature branches and hotfixes and everything roll up into, but it's not in production yet. When these things are ready for production, you push them out into a release branch; a release branch could also be called staging, which is normally what we call it today, and this could be where it rolls out. Once you push stuff here, it could execute a CI/CD pipeline and set up a staging environment so that QA, load testing, or stress testing could be done on it. As a developer you would open feature branches off of develop, and as you complete them they would get merged back in; then when things were really ready, you'd take them from the release branch and push them out to your production branch, which they're calling master here. If for whatever reason you had a serious problem you had to fix really quickly, you could create a branch off master into the hotfixes lane and then merge it back in, skipping all the stuff down below. Again, I wouldn't do it this way anymore, and I would be surprised if companies are sticking to this method, but this is the original way; I just wanted to show you that there is variation. A lot of people skip having a release branch and just deploy often into production, so it's going to be really dependent on your team. So here we'll
just kind of Lo Loosely described GitHub flow you create a branch for each new task or feature create a new Branch off
the main branch add commits make changes and commits commit them to your branch open a poll request start a discussion
about your commits reviewing code in the poll request discuss a review share your poll request with teammates for feedback
deploy test your changes in production environment um so yeah oh and the last thing would be merge so once your
changes are verified merge them into the main Ranch so that's the general concepts of it and hopefully that makes
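The steps above can be sketched with plain git commands in a throwaway local repo. The branch and file names here are made up for illustration, and the push and pull-request steps are only shown as comments since they would go through GitHub:

```shell
# Sketch of the GitHub flow using a throwaway local repo.
# Assumes git is installed; names are illustrative only.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git checkout -q -b main
git config user.email "demo@example.com"  # throwaway identity for the demo commits
git config user.name "Demo"
echo "v1" > app.txt
git add app.txt
git commit -q -m "initial commit"

# 1. Create a branch off main for the new task or feature
git checkout -q -b feature/new-task
echo "v2" >> app.txt
git add app.txt
git commit -q -m "add new feature"

# 2. In a real workflow you would now push and open a pull request:
#    git push -u origin feature/new-task && gh pr create
# 3. After review, merge the verified changes back into main
git checkout -q main
git merge -q --no-ff -m "merge feature/new-task" feature/new-task
git log --oneline
```

The `--no-ff` merge mirrors what GitHub's merge button does by default: it creates a merge commit so the feature branch stays visible in history.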
Now let's take a look at the GitHub CLI. This is a command line interface for interacting with your GitHub account: you can quickly perform common GitHub actions without leaving your developer environment. If you went through the quick-and-dirty Git crash course, you definitely got some exposure to the GitHub CLI. The idea is that you log in and then you can perform actions, so we have things like creating a repo, creating an issue, or reviewing a PR; there are a lot of CLI commands. The GitHub CLI can be installed on Windows, Linux, or macOS; that's an example of using Brew to install it, if you have it. For dev containers, you can specify it as a feature to get installed, which is a very easy and quick way to have it set up. And just to give an idea of the commands: we have our core commands, a lot of additional commands, and then one specific to GitHub Actions. So we're definitely
good to go. All right, in this follow-along I want to take a look at the GitHub CLI and see if we can do a few different things with it. What I'm going to do is switch over to my other account; this is just my play-around account for GitHub, and we should already have a repo in here called GitHub examples. I want to do some things in the CLI, so I need some kind of environment to work in, and what we'll do is launch up a Codespace. We do have this older one, but I think what I'll do is make a new one; I don't think it really matters whether
we make an old one or a new one. That other one is stopped, so we'll open this one up and see if the GitHub CLI is already pre-installed. It could be, because if I were GitHub I would have it pre-installed so people start using my product right away; but if it's not, we'll definitely take a look at how to install it. We did install it manually, locally, in the quick-and-dirty Git crash course, so if we don't have to show the install I'd rather skip it, but we'll wait for this to spin up. Okay, I have no idea why, but that took a little bit of time to spin up. Our goal isn't really to make anything, just to play around with the CLI. Now it says the Codespace is currently running in recovery mode due to a configuration error; I didn't do anything, and it's a new environment, so why should it be
recovering? Is there something it can't do? I'm not sure about this. It also didn't pick up my settings when I told it to save them earlier; I don't know if that's because it's in recovery mode or not, but for the time being I'm going to change the theme so I'm not upset. The next question is whether, if we had this installed, we could install it through here; sometimes you can install stuff through plugins, so I'm just curious if we could do that there. No? Okay, that's fine, we'll go ahead and install this, and
I know that Brew is installed, or maybe it's not installed on this; let's find out. It is installed on Gitpod, but I'm not sure about GitHub Codespaces. It's not. Okay, so what we'll do is look up the GitHub CLI and install it; we already did this before, but we'll do it again. What we're looking for are those install instructions, and we want them for Linux, so we'll go over into the Linux section and grab this one-line command. It looks like a lot, but it's really just doing the update and the install like down here; it just has to add the repository first so the package manager knows where to install it from. I'm going to grab that big block, paste it in, say allow, and hit enter. And it failed, and I can't see what I'm doing, so
I'm going to bump up this font. I do not like GitHub Codespaces, I'm sorry, I really like Gitpod. We'll go here and set the terminal font; I'll try not to complain too much about it in the course, but I can't promise anything. I'm going to copy that again and we'll try this again: hit enter... no such file or directory. Okay, maybe that's because GPG isn't installed; sometimes you don't have to do a GPG check, but we might have to here. If that's not working, maybe an easier way is to just add it to the Codespaces configuration file, so we can make a dev container, and maybe that would be less of an issue. So what I'll do is add a dev container; we'll go up here, grab the dev container snippet, and maybe we do want to commit this, maybe this is a good idea. So I'll put this
here, and I need a comma there. The idea is that this should install the CLI into this environment, and maybe the reason this thing messed up was that it wasn't specifying any base image; that's why it got upset, because when we spun this up it complained that we didn't specify a base image. So I'm going to look up what the default base image for a dev container on GitHub is, and maybe that will help us. It's probably just going to send me to the documentation, and I don't remember where that is, but it turns out that if you don't specify one it will use the default image. Okay, that's what I would rather it do, so I'm going to leave it alone. What I want to do now is rebuild this environment because we've added this change; before we do, I need to commit it, so I'll just write a message: it should install the GitHub CLI.
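For reference, the working configuration here boils down to a .devcontainer/devcontainer.json containing just the GitHub CLI feature. The feature ID below is the published Dev Container feature for the CLI (assuming the commonly used :1 version tag), and omitting an "image" entry lets Codespaces fall back to its default image:

```json
{
  "features": {
    "ghcr.io/devcontainers/features/github-cli:1": {}
  }
}
```

Rebuilding the container after committing this file is what makes the gh command available in the environment.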
Let's see if we can find the rebuild option. Clicking up here, rebuild... that's no good. We can open up the command palette this way; the command palette has all the commands for VS Code, and there should be something for Codespaces. If I type in "codespaces" there's probably something here to rebuild, and there it is: Rebuild Container. So I'm going to go ahead and hit rebuild, and I'm hoping that will install the CLI and save us from having to install GPG or whatever it wants, because that's a headache I don't want to deal with. See you back here when this finishes building. Okay, we're in, and it says the Codespace is currently running in recovery mode due to a configuration error: please review the creation logs, update your dev container as needed, or rebuild the container. So it clearly doesn't like something about my file; I'm not really sure why, because there's not a
whole lot in it. So we have Ctrl+Shift+P, okay, Ctrl+Shift+P, hey, learning! We want to view the creation logs; that just opens the command palette, then View Creation Log. Somewhere in here it's messing up, and I don't understand how, because we have next to nothing in the file. The log says it failed to create the container: an error occurred, container creation failed. It doesn't even tell us why; it's just like, nope, it doesn't work. Maybe it's up here: no matching entries, unable to find user in dev container. Okay, maybe it's this; let's take this out. That's probably the reason why. So we'll save that, and we'll have to rebuild this again. I'll open up the command palette down below just in case, and let's do a full rebuild; that's fine, let's rebuild and get it going. Okay,
I'll see you back in a moment. Okay, I think that resolved the issue. It took a while to rebuild, but what I want to see is whether the CLI is in here, so I'll type gh, and there it is. Great, so that's the easy way to get it installed, as long as you don't make any mistakes in your devcontainer.json file. Now let's see what kinds of actions we can perform. I'll go search for GitHub CLI and take a look at the documentation... no, that's not what I want, I want the official documentation,
so we can see what kinds of commands we can run. Here it is. Okay, here are all of our commands, and the first question is: are we logged in? That's something we might need to do, so let's type gh auth login to log in the CLI. We have two options; we'll do github.com. It says the value of the GITHUB_TOKEN environment variable is being used for authentication, so it sounds like we're already authenticated, because GitHub Codespaces is loading some kind of temporary token in here. Something we could check is whether there actually is a token being set, so I'm going to type env | grep GH, and I don't see anything there, but apparently the CLI thinks it's ready to go. So maybe what we could do is list out repos; I imagine if we look here there's probably something like list, and there is, so let's try gh repo list. And we have a repo; that's
actually a really nice display. What else can we take a look at? Back over to the docs: I was hoping it would select the current repo by default, but apparently it didn't, so we can do gh repo set-default (I have to type it right). Maybe it'll let us choose the repo so we don't have to goof around, and there it is, we can choose one; apparently we have two, the ExamProCo one and the AndrewWCBrown one. It's curious that it's showing forked ones, but this is the one I want, because this is the account I'm in right now. So let's type gh repo view, and now we can see some stuff in here; not a lot of information, it looks like it's mostly the description and things like that, so I was hoping to see a little more, but that's okay. Let's go down here and see what else we can
pull off. That's not that interesting. Something we might want to do is change our labels, so let's try that out: I'm going to say gh labels list... is it just label or labels? label, oh cool, we can see our labels and their color codes. Can we add a new one? Like that, we've created another label; let's hit the up arrow and list again. Okay, that looks really good, "bug isn't working good" is in there. Now, if we wanted to create a new repo, sometimes what they'll do is give you a wizard, so we'll just do this; I bet it will prompt us (I've got to type it right). I'll just make this one private; I could add a README file, we could add a .gitignore, we could say what we want for the license, we want to do this, we'll say yes. And it seems like it should have created it, but we get a "resource not accessible by
integration" 403 error. I'm not 100% sure why we're getting that; it could be a permissions issue, because everything is based on that token, right? Maybe we have a personal access token that allows us to read but not write, and we got a 403; a 403 error means forbidden, meaning we don't have access to do it. So what I'm thinking is that we can probably create our own personal access token and try to get around that. I'm going to go to settings, all the way down to developer settings, then into personal access tokens, fine-grained tokens, and we'll generate a new token. Again, I don't know if this will work, but I'm just going to try it. I'm going to set the expiry for tomorrow, because I don't need it forever. If it's scoped to all repos, then it's public repos, read-only, so the other question is: is it failing because we tried to make it a private repo? But then why would it offer that as an option if we can't do it? We could also authenticate via SSH, so I'm not sure; I guess that's where I'm not 100% certain. Maybe we'll just say all repos
for this token. Let's see here, what does this one do? Interaction limits... we're trying to figure out what functionality we'd have to grant; managing repository environments, it could be this read permission... oh, repo creation, it's up here. I don't think we need these other ones, but I'm going to leave them on because they're not that big of a deal, and this one says it's mandatory, so sure, we'll have that in there. Got it added. I'm going to generate this token, and now that we have it, I'm going to copy it, go over here, type export GH_TOKEN= and paste it in. The idea is that when we use the CLI, it should pick this up instead of whatever else is set. I'm typing env | grep because I want to make sure this is actually set as our environment variable; if you have other tabs open it might not show up there, so stay on the current tab. We'll go ahead and try this again. Another reason
why it might not have worked is that the name was already taken, but I don't think so, because names are scoped to our user. So we'll try this again from scratch: we'll name it GitHub CLI example, put nothing in the description, and make it private. Interestingly, there's a visibility option of internal; I imagine that's for enterprises. We'll say yes, yes, and it doesn't matter, I'll just choose that one; we'll say yes, doesn't matter, choose that one, and yes. Let's see if it works now. Clone the repo locally? Yeah, sure, let's do that. And now it's working, so whatever personal access token was somehow being loaded in there did not have the correct permissions, and we were able to get around that. I don't really want this repo, though, so I want to go ahead and delete it.
Let's take a look and see what actions we have. There's delete, and I'm hoping it will just give me a wizard so I don't have to pick, so we'll say gh repo delete... and no, I don't want to delete our current one, so I'm going to hit Ctrl+C, because that's totally not what I want. We'll go back over here and take a look at the docs for the repo command; I really wish they would list the repos for you. It's not great, but we'll go to repo, then delete, and we can put the repo name here. So we'll go back over to GitHub, go to our profile (this is going to be AndrewWCBrown), and we should be able to see two repos in here; there's our other one. I'm going to grab that name, go back over here, and pass it in with the owner in front of it. We'll hit enter, and that repo should now be deleted. Back over here... okay, it's deleted, so that's in good shape. I'm really happy with that. I'm going to go
ahead and stop this Codespace; actually, I'm just going to delete it, and I'm going to delete the other one too, to make sure I keep everything nice and clean and not worry about overspend. What I want to do now is merge this into my other account. Again, you don't have to do this, you can just watch, but it's good to watch, because this is something people have to do a lot. Actually, what I should have done is gone over here and done it from here first. So I'm going to create a new PR; is it going in the direction we want? Yes, it is. We'll hit create, title it "fix dev container", and create that pull request. Good. We'll switch back over to our other account, go into the same repo, accept it, confirm it, and we are in good shape. I'll see you in the next one, ciao.

A common strategy for authenticating
to perform Git operations on your remote GitHub repo is by using an SSH key. You're definitely going to want to use this, because it's a great way to work with Git in your local developer environment; it's definitely the way I like to use it, as opposed to a personal access token. The way it works is that you generate your own SSH key using a command like ssh-keygen on Linux (there are probably other ways, but that's the one I know), and specifically an RSA key; I'm sure there are other kinds of keys GitHub will take, but that's the one I know. The idea is that we have our local computer, and then we have the server,
which is GitHub. The key pair, public and private, lives on your local computer, and on GitHub you store only the public key; the private key never leaves your machine. The process of authenticating and authorizing goes roughly like this: the server checks whether it has a public key matching yours, and if so, it sends you a challenge message. Your local computer uses the private key to produce a signature over that challenge and sends the signature back to GitHub, which verifies it against the stored public key. That establishes the connection, allowing you to use SSH keys to git clone, push, and so on. Under your account settings, SSH and GPG keys is where you'll be able to add the
public keys. Here you can see I have a couple of SSH keys in one of my GitHub accounts, under SSH and GPG keys. When you want to clone a repo, you're really going to want to use the SSH-style address; your SSH key will not work with the HTTPS one (the GitHub CLI style technically will work), but the SSH address is what you want.
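As a minimal sketch of generating such a key pair (assuming OpenSSH's ssh-keygen is installed; the comment string and output path here are placeholders):

```shell
# Generate an Ed25519 key pair into a throwaway directory.
# By default ssh-keygen would write to ~/.ssh/id_ed25519 instead.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -C "demo@example.com" -N "" -f "$keydir/id_ed25519" -q

# The .pub file is the public half -- the part you paste into GitHub
# under Settings > SSH and GPG keys. The private key stays on your machine.
cat "$keydir/id_ed25519.pub"
```

The empty -N "" passphrase is just for the demo; in practice you would normally protect the private key with a passphrase.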
Next, in this video I just want to show you quickly how to create SSH keys. We did cover that in the quick-and-dirty Git crash course, but it's good to do things more than once, so we really get good at them. I'm going to open up a new environment here in Codespaces and get that spun up, so I'll see you back here when it's ready. Okay, we have this environment spun up, and it's just not remembering my settings whatsoever, but that's how things go. I'm going to make a new folder in here, I'll just call it SSH keys, and I'll make a new README file. Normally what we'll use is the ssh-keygen command (I realize that text is small, I'll bump it up), and we use -t rsa to make an RSA-style key. I would imagine GitHub can support different kinds, not
that I know much about the different kinds. Let's take a look here and see what we can see. That one's from local, so I'll delete it, because I don't need it right now. If we look at the docs you can see things like Ed25519, as in ssh-keygen -t ed25519, so I want to see if we can generate something a bit more secure; I assume it's more secure, it's a relatively new cryptographic solution that's been around for about five years. So how do we make it? Probably by supplying the type flag like that. Okay, great, so what I'll do is go back over here, change it from rsa to ed25519, and see what happens. I'll hit enter, say allow, yes I want to paste, and it asks about the directory; I suppose the default is fine, so we'll accept that and that, and then we get that key. Then the idea is that we can cat out the contents of the public key; that's actually really nice, I like that. Then we could go over here and add it, labeling it Cloud
developer environment, or CDE, and we can add that key. Now, I want to point out that you can add keys to a repo as well. We're not really going to test this to make sure it works; I just wanted to generate another one and show you. What we'll do is go over to our repo, because I want to show you where that deploy keys thing is: if we're in a repo, we can go to settings, and there should be Deploy keys down below, where we could add a deploy key. It's the same process: you paste it in, you note what it's for, you can say whether you want it to have write access, and boom, there you go. But a lot of these repos only need read-only; especially if you're building, you're just cloning the repo, so you don't need write. That's all I wanted to show you, so I'm going to
just switch into my other account, and I can probably merge the other way too if I try. If we go here to pull requests and create a new pull request, I can probably grab from that other repo; I want to bring changes in from there. Yep, I can do that, and I'm basically allowing myself to pull the changes without the other Andrew having to put them forward. Okay, there we go, we are merged. I will see you in the next
one. The use cases for deploy keys are things like a build server or a third-party CI/CD service that needs to clone the repo so it can perform a build or deploy, or single-repo access: instead of using a shared key pair across multiple repos, you have a single key pair for a single Git repo. Another reason would be to avoid using a personal access token. I'll tell you, I've definitely used deploy keys, especially when you're not using GitHub Actions and you're using third-party CI/CD tools, which is pretty common with GitHub; you will find yourself using deploy keys. I just want to make you aware that it's very similar to a regular SSH key, with some advantages, so you have to decide where you want to use it in your use case. It's as simple as that.
Let's talk about tokens, specifically personal access tokens, or PATs, which are an alternative to using a password for authentication. PATs are not specific to GitHub, but GitHub does utilize them, and the purpose of these tokens is to give you access to the API when you're making direct calls, using the command line, or using an SDK. GitHub no longer supports using a password directly when interacting with the API; it used to, but now you have to use an access token. If you're coming from the AWS world, it's kind of like your secret access key: it's giving you that access. There are two types of PATs on GitHub. You have classic tokens: they're less secure and no longer recommended for use, though customers with legacy systems may still be using them (like some of my apps). Then you have fine-grained personal access tokens: these grant specific permissions, they must have an expiry date, you can restrict them to specific repos (or, if you want all repos, it'll probably be read-only), and they can only access resources owned by a single user or organization. You can find all of this under the developer
settings. There are a few use cases where we'll use PATs: logging in when using git clone over HTTPS, or, if we're using the GitHub CLI, you can set an environment variable called GH_TOKEN for the CLI to pick up. If you're using an SDK, you'll be supplying your token, and I would imagine those SDKs would pick up that environment variable as well. But there you go.
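Those two use cases can be sketched like this; the token value and the OWNER/REPO path are placeholders, not real credentials:

```shell
# Use case 1: a PAT embedded in an HTTPS clone URL (placeholder values only).
TOKEN="github_pat_EXAMPLE_PLACEHOLDER"
echo "git clone https://${TOKEN}@github.com/OWNER/REPO.git"

# Use case 2: exporting GH_TOKEN so the GitHub CLI picks it up for
# commands like 'gh repo list' without an interactive login.
export GH_TOKEN="$TOKEN"
env | grep '^GH_TOKEN='
```

Embedding the token in the URL is convenient but leaves it in shell history; the environment-variable approach is generally the tidier of the two.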
In this follow-along I want to take a look at personal access tokens. Yes, we've already played around with them, but let's play around a little bit more. What I'm going to do is go over to my repo; I might already have a Codespace from before, and it's still active, so I'll open that to save myself some trouble. If you have to launch a new one, you can absolutely do that; just remember to close your Codespaces so you're not using up your free tier usage. You get so many hours per month, I don't know exactly how many; you can look it up if you want to know, or we'll find out when we make it over to the Codespaces section of the course. What I want to do here is just work with personal access tokens. We might have one set from before, so let's take a look: I'll type env | grep GH to see if there's anything set, and there's nothing set here, so that's great. Now I'll go to the top left corner, go to settings, then down below to developer settings, and we have our personal access tokens: fine-grained tokens and tokens
(classic). Now, something I would like to know (I'm going to go ahead and delete this one first) is: can we generate them from the GitHub CLI? You'd think you wouldn't be able to, because then you'd need permissions to grant permissions, but I'm going to take a look and see what we have, because I'm curious whether it's actually there or not. I'm going to search the docs for "token", and there is a token command: it says this command outputs the authentication token for an account on a given GitHub host. Well, that's really interesting; what would happen if we ran that? I'm going to type gh auth token and see what we get, and we actually get a token back. I'm not sure if that's the same token, but let's find out by generating a new one. Another thing I might wonder is what permissions we have; I'm not sure it would tell us, I don't think so. What's refresh? It refreshes stored credentials, expanding or fixing the permission scope; that's kind of interesting, like you can go out and request more permissions, but I'm not sure exactly how that would work. Let's go ahead and generate a new token. I'll just say
"create issues", and I'm going to set this for one day. We'll go down below and select a very specific repo, this one here, then go to permissions, looking for issues; we'll say yes to issues read and write, and we'll generate that token. We can see what this token looks like; we're going to copy it, bring it over here, and set it: I'll say GH_TOKEN= with double quotation marks, paste it in, and hit enter. Now it should be set, so if I type env | grep GH we should see it there. Excellent. Now let's type gh auth token and see if we get a different value; notice that this is the one we have now, which shows we're using our personal access token. The other one was getting loaded in by GitHub Codespaces; how it gets in there I don't know, but this one matches exactly. Now let's try creating an issue: I'll scroll down in the docs and grab an example, so we'll do this and hopefully it'll just know to create it in the current repo. Enter... and we need to set a default repo, so we'll go ahead and do
it. Oh, this repo has issues disabled; that's interesting, so we'd have to enable that. I'm going to go back to our repo, over to settings, down below, and turn on issues; we could do this via the CLI, but it's just easy to check the box. We'll go back to our environment and hit enter again. Now, what would happen if I unset the token? What I'll do is hit the up arrow until we get back to that export and purposely set it blank, and that way this shouldn't work. So now if I do gh issue list, I wonder if I can get a list of them; remember, this repo is public, so listing is going to work because it's public, but the question is: can I delete the issue? I'm going to say delete, and it's expecting some args; typing help tells me how this works: it takes the issue number. Perfect, the number is two, so that's easy, and I
wonder if I can just put a two on this; type 2 to confirm, yes, okay, it deleted it. Now, the thing is, yes, I was able to delete it, but understand that this environment has some base access underneath from that original token, which probably had permission to do that; there are probably some things it can't do, and we learned that before, when we weren't able to create a repo. If I did this on localhost, I would assume it would not have worked, and that's totally fine. I think that satisfies us for learning about personal access tokens. I'm going to go back to personal access tokens, delete this one, and we'll call it done. If you
want to do the same, go ahead. Ciao! Let's quickly talk about README files. These are Markdown files that provide documentation and instructional information in a repo. A GitHub repo that has a readme.md, README, or README.md in the project's root will have it rendered on the homepage. There are actually some other Markdown files that will also be rendered there, but it's really important to remember the README one, as it will probably come up as an exam question; they'll ask you where it should be located, and it's always in the root, that's what's going to render. They might try to trick you there. You do get a nice little table of contents on the right-hand side, so if you have headings it will figure that out for you; I remember that wasn't there before, so that's really nice. But yeah, it's as simple as that.
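A minimal README.md along these lines (contents are illustrative) is all it takes, and the headings are what GitHub uses to build that table of contents:

```markdown
# my-cool-repo

A short description that renders on the repo homepage.

## Installation

## Usage

## License
```

Each `##` heading becomes an entry in the auto-generated table of contents on the rendered page.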
GitHub wants you to know about basic repo navigation; what exactly that means I don't really know, so I'm taking my best guess at showing you what it is. Within a GitHub repo you have a navigation bar with the various features of your repo, and this is how you get to all the cool stuff. The main one is Code: this is where your code is going to live, files, folders, things like that. We have Issues, for tracking problems; it's basically a ticket tracker. We have Pull requests: that's where we manage collaboration with other developers when they want to bring changes into our repo, and it's our opportunity to check that work before it gets merged in. We have Actions, for GitHub Actions; Projects, for GitHub Projects; and Wiki, for the wiki. Security is like a checklist of things you should do; I think it changes depending on context: if you're a user and not the owner of the repo, you'll maybe see the security policy or different information, but if you are the owner, you'll have a checklist of things to do, and it basically mirrors what's on the settings page under security, which is kind of weird, but that's how they do it. Then we have Insights: this provides statistics, mostly in the form of charts and graphs, about the repo; sometimes this information is public, sometimes it's private. Then you have Settings, where you control all the settings for your repo. The other thing is that, at least in the Code section, you can navigate around files; I showed you this in a prior follow-along, but the idea is that you can search, see the contents of a file, and comment on code per line. So there you
right now I have it set as uh my personal account but you could drop that down and you could also choose an
organization that you belong to repo names are scoped based on the account so you can have the same name uh
for different organizations other people can have the same name if they're a different user so just understand that
you can do that. you need to choose an available GitHub name, again based on that scope. your repos can either be public or private, which is pretty self-explanatory. you can quickly add a README file, a .gitignore, and a license. it's very important to remember those three, because you might get an exam question asking about the three
things that you can quickly and easily add. if you're using the CLI you can add them this way, or create one. we did this earlier in the GitHub CLI demonstration, and we found out that repos require special additional permissions, or personal access token permissions, that the GitHub Codespaces token would not allow. but
yeah, there you go, that's how you create a repo. all right, let's go ahead and create
ourselves a repo I know we already know how to do this but it gives us an opportunity to talk a little bit more
about uh some of the things that might appear on the exam you can create one up here in the top right corner and I have
this double click problem so it's getting a bit confused I can go to this new green button that's usually what I
do, and we can have a new repo. I'm going to say my cool repo, right, and we can provide a description: this repo is amazing. we can set it as public or private; I'm going to stick with private for now, and we'll add a README,
we'll drop down a .gitignore (we'll say maybe we're working with Ruby, so we'll get that by default), and we'll add a
license like MIT we'll go ahead and create that repo um I want to point out that you have other things that are
rendered here so that should be very clear um we have releases and packages which we should talk about at some point
I'm going to leave this repo around if we want to play around with it um but uh yeah it's pretty clear how to create a
repo it's not hard let's go take a look at a more popular repo like Ruby on Rails um so I'm just going to type this
up here, type in rails, and we'll take a look and see what they have. so if we go in here, you'll notice there are just more things: we have a code of conduct, right, we have our MIT license, we have our security policy. if we go up here to the security tab, this is what we can see. we have the security policy being rendered out here, and then it's showing possible exposures, probably based on one of the scanners; I'm not exactly sure which one is showing that, but that is there. if we go back to our own repo, we might be able to take a look at security here. hold
on, going back to wherever that one was. we just made that repo, and it can sometimes be a bit of a pain to find your own repos. I'm not sure why they never made that easier, but that's just how it is. if we go down we can find my cool repo here, and there's this security tab, and notice that it's listing out things that you should do for your repo.
it's important to know what these are, because the exam is probably going to ask you what things you can access from here. and this is just from the public repo; if you have a private repo it's going to be different. sorry, this is the private one, and public can be different, so let's just open this up and make a comparison and see what kind of difference there is. so if we go over to the repo here, and we go into this public one over here,
what options do we get? we get a lot more going on here. so notice that this is the private and that is the public, probably because if you had a paid version you'd then get additional code scanning and
secret scanning; for public repos you automatically get that stuff. I just want to point out that all of this is also under your settings, under security options, so they kind of just repeat it there. it's just what they do, but hopefully that makes sense. that's all I wanted to show you, so I'll see you in the next one. ciao!
let's talk about maintaining a repo now. what's unusual about this slide content? well, it's not unusual, but in the outline they have a whole section for GitHub Administration, and for whatever reason I have it over here as opposed to in the other section, because of the way the outline is designed. so understand that I'm not going to cover that stuff in that other section, because it's just repeated. anyway, let's
continue on and look at maintaining a repo. the first thing is the name: you can change the name of the repo if you do not like it, as long as the name is available. and a reminder that repo names are scoped based on personal or organization accounts. you can change the base branch, the default branch; you can rename it. just so you know, main is the unspoken best practice for naming your base branch. everybody does it; the old convention was master, and nobody calls it master anymore. you can opt in and opt out of some features for your GitHub repo. I say some as a catch-all
just in case there's features that do not show up or there's ones that are locked in um but that's pretty
straightforward you just check a box and you might have to do some additional configuration then there's the danger
zone, which contains actions you need to think twice about, because they cannot be undone if you make a big mistake. in the danger zone we have the ability to change the repo visibility. it's important to understand what happens when you change a repo from private to public: the code will be visible to everyone who can visit github.com, anyone can fork your repo, all push rulesets will be disabled, and your changes will be published as activity. now, will this show up in the exam? probably not, I didn't see it, but it is something to consider. you can disable branch protection rules. branch protection rules are strict workflow rules that will do something like disallow someone from pushing to main, and you can temporarily disable them if you have to apply quick fixes and can't work around those rules easily. you can transfer ownership to somebody else and they'll become the owner of the repo.
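the local half of that default-branch rename can be sketched with plain git. the repo path and names below are just illustrative, and on GitHub itself you'd still flip the default branch in Settings:

```shell
#!/usr/bin/env sh
# Sketch: renaming a local branch from master to main, the same
# rename the Settings page performs on the server side.
set -e
tmp=$(mktemp -d)
git init -q -b master "$tmp/demo"
cd "$tmp/demo"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git branch -m master main        # rename the current branch in place
git branch --show-current        # now reports: main
```

after the local rename, the remote side still needs updating, which is why GitHub's own rename button is usually the easier path.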
I want to create a new repo, and I'm going to make it in my other account, just because we're going to want to transfer from one to another. I'm going to go here, under Omen King, and say my cool repo 2, okay. I'm going to make this private and add a README, and I'm going to go ahead and create this repo. so now that this repo is created, we can go over to our settings, and let's say I didn't like the name two; I need it to be three, because we already have another one called two in the other account. I'm going to rename that, and it renames. before, renaming used to take time; now it's instantaneous, which is really great. if we
scroll on down here, we could change the main branch; we could change the name to main-two. there are some limitations there, because it's not showing all of our options, as we don't have other branches. apparently it doesn't rename instantaneously, but it will change in a bit of time. there it is, so I think it has taken effect. if we go here, yeah, it shows that it's been renamed, so that sounds really good. we're going to go back over to settings. notice
that we can checkbox features on and off. let's get rid of issues, let's get rid of projects. if we go over here to our code, notice that they have vanished across the top, so that is great. we'll scroll on down; we have more
abilities somewhere here in the danger zone so we can change our visibility it will give us a warning about it I'm not
sure why it's making it so hard. I'm going to make this public, I'm going to say that I accept the changes, and we're going to make it public. then apparently I have to confirm, so I'm going to get out my phone, and it wants me to enter a number into my phone. okay, so 37, approve, great. I'll wait a moment for it to take effect. there we go, so now it is public. we can disable our branch rules, so I'm going to go ahead and do
that. we can go ahead and transfer this repo; I'm going to send it to a specific person, so this is going over for them to accept. I don't think it's instantaneous, so if we go over here, do I have the repo now? if I go up here and check the email for this account and see if it shows up there... okay, all right, so in my email we can see it over here.
GitHub repo templates are a feature for public repos that allow other GitHub users to make a copy of the contents of the template repo to use as a starting point for their own repo. I believe it's only for public; I haven't checked for private, but I don't know why you'd want to use it for private. we set a repo as a template by checking the Template repository checkbox, and then you'll have the option to use the template. when you click Use this template, it will ask you to choose a new name and what you want to include, and then it'll show that
that was generated from that other template I use these in my boot camps because I will create a starter project
and then you will start from that template. how is this different from cloning or forking? the idea here is that you're starting with a clean repo; you're not carrying all the baggage of stuff that comes with it. it is a really clean repo, and that's how you're going to utilize it. but the use case that I explained is, you know, you want to have something like a project that people are going to start
from. ciao! hey, it's Andrew Brown, and in this follow-along we'll go ahead and make a repo template. so let's go over here to our other repo. trying to find it; sometimes it's a bit hard to navigate repos. there we go, that's a bit better. so we have this my cool repo, and what I want to do is convert this repo into a
public repo for now just so that we can utilize this feature I want to go to the top and see if we can make a template
actually, it looks like we can. I'm really surprised, because what would be the point if it's private? you know what I mean; if we go here, how's anyone going to click that button? it doesn't make any sense, so I really thought that was a public-only feature. but notice I actually checkboxed it on, so I think it lets you make your own from it. so I guess my thought was that it was always about other people using it, but I guess you could make your own default template and use it for yourself. so it's not just a public feature, but they're never going to ask you about that on the exam, so I'm not going to fix the slides because of that. let's go ahead and create a new repository
from this template we'll just say my cool repo 2 and we'll make it private and also we can include other branches
but I don't want anything else, and there we go. so now we have this repo that is making a copy. we'll give it a moment, and here it is; you can see it's generated from that one, and there you go.
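if you prefer the CLI over the Use this template button, gh has a flag for it. the owner and repo names below are hypothetical placeholders, and the command is only printed here because running it for real needs an authenticated gh session:

```shell
#!/usr/bin/env sh
# Sketch: creating a new private repo from a template repo with gh.
# OWNER/my-cool-repo and my-cool-repo-2 are hypothetical names.
TEMPLATE="OWNER/my-cool-repo"
NEW_REPO="my-cool-repo-2"
CMD="gh repo create $NEW_REPO --template $TEMPLATE --private"
# running CMD requires `gh auth login` first, so we just show it:
echo "$CMD"
```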
ciao! so in GitHub you can clone a repo programmatically three different ways. the first is HTTPS: you will have to supply a GitHub username and password on the clone, and you'll need to set git to cache the credentials if you don't want to keep entering them. I didn't write it in here, but when it says password we're talking about the personal access token, because GitHub does not let you use passwords anymore. the documentation kind of suggests that you can use a password because it says password-protected (oh, not here, but under here it would say that). for SSH, you can utilize that method; you'll have to have an SSH key pair, and
you'll have to upload the public key to your GitHub account. then we have the GitHub CLI, so we can do that as well. this is going to use credentials: when you log in with the GitHub CLI, it'll actually use either a personal access token or SSH. we can also clone repos in GitHub Desktop, and we can just download a zip. it's not really cloning, but we can download the contents of the repo, which is pretty straightforward; sometimes you just want the code. I want to point out that we did all this in the quick and dirty Git and GitHub crash course, so if you're wondering how to do these three, make sure you have watched that video and done it all. okay, see you in the next one.
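the three clone styles can be sketched side by side. the OWNER/REPO URLs are placeholders; only the last clone actually runs, against a local path, so the example stays network-free:

```shell
#!/usr/bin/env sh
set -e
# 1) HTTPS: prompts for username + personal access token (not password):
#      git clone https://github.com/OWNER/REPO.git
#      git config credential.helper cache     # cache the PAT for a while
# 2) SSH: needs a key pair whose public key is uploaded to GitHub:
#      git clone git@github.com:OWNER/REPO.git
# 3) GitHub CLI: reuses whatever `gh auth login` configured:
#      gh repo clone OWNER/REPO

# git clone also accepts a plain local path, which we can actually run:
tmp=$(mktemp -d)
git init -q -b main "$tmp/source"
cd "$tmp/source"
echo hello > README.md
git add README.md
git -c user.email=d@e.co -c user.name=d commit -q -m "init"
git clone -q "$tmp/source" "$tmp/copy"   # same mechanics as a remote clone
ls "$tmp/copy"                           # shows README.md
```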
if you drop down Code, or sorry, Add file, you'll have Create new file and Upload files, so it's great for both text and binary files. the idea here is that when you go ahead and add a file, if you need folders you can put a forward slash, and it will end up creating as many folders as you want; then you put the name in and you just create that file there. uploading is handy when adding multiple files that you also need to edit. you can also quickly use
branches in GitHub and Git. something that you should really know how to do is use the single-line command git checkout -b to both create and check out a branch in one go. you should absolutely know how to do git push -u origin staging, where -u is short for --set-upstream (it may look like a single hyphen on the slide, but there are actually two for the long flag). you can create branches from issues, and then the branch and issue will be associated and linked. you can directly create branches in the GitHub UI, you can create branches in GitHub Desktop, and I'm sure you can create branches in the GitHub CLI. so yeah, there you
nothing super difficult to do, but something that will take us just a moment. I'm going to go ahead and use our my cool repo, and what I want to do here is clean these up; I don't want to have a bunch of junk repos hanging around. I just want to keep things clean and keep with our single repo, even if it is public. so I'm going to go ahead and type my cool repo here and clean this up. Andrew WC Brown... or, man, geez, Andrew WC Brown my cool repo; it's that autocomplete that's messing it up. there we go, and I'm just going to go ahead and also get rid of this one, and then we'll
there we go, okay. so now let's just go back to our GitHub examples, and there are a few ways that we can create issues... or sorry, branches. one way is to go to the branches tab, and I'm pretty sure we can just create them right here. so I can go here and just say feature cool-one, okay, and we can say what we're branching from. so that's an example, and there it is. another really useful way, and something I really recommend, is to create an issue (we'll make a test one) and create a branch from the issue. what we can do then is go ahead and create a branch here, under Development in the bottom right corner. notice it's going to put the number in here; that's a very common pattern, to have whatever your issue number is and then a short name for the branch,
okay. then it's asking what we should do next, like opening Codespaces or checking out locally. I don't really want to do anything next, I just want to do nothing, but I'll go ahead and hit create branch. then it gives us a button we can click to go play around with it. that is something I do a lot when I teach boot camps; I show this a lot, it's a very common workflow, and something you should absolutely remember that you can do. let's go ahead and launch this up in Codespaces. I'm going to go here, and we have this one here; sure, I'll launch up this old one, why not. once this is ready we'll be implementing checkout and creating a branch in one go, okay, so I'll be back here when this is ready. all
right, all right, so this is back up and running, and notice that we are currently in the main branch. we can type in git branch to get a list of branches, and what I want to do is create myself a new branch here. actually, there are other branches; it's just not showing them. if we do git pull, it might show us those others. that doesn't mean we can't check them out, but now our program is aware of them. we might also be able to create branches within Git Graph; these were things that we installed earlier, so they might have the option to do that here, I'm not 100% sure. see, Create Branch, so we could do that as well. I'm not going to do that,
but what I really want to show you is something that will show up on the exam. I'm actually really surprised that it did, but it is a really good thing to know. normally, when you create a branch, you'd have to type in git branch my-new-branch, like this, and then do git checkout my-new-branch, okay. so that's something you can do, but what is more efficient (let me check out back to main) is that we can do it in one call: git checkout -b, for branch, my-new-branch-2. that's going to create the branch and check it out. so really make sure you remember this one, because it will absolutely show up on your exam, and it's absolutely something you'll use every single day. I strongly recommend it. the
last thing I want to go over is just pushing to origin. what we can do is git push --set-upstream, but I think the -u shorthand is a lot easier; we say origin, and then my-new-branch-2, and that will push the branch up there. you really want to know how to set that origin. so that's all I really wanted to show you, and we can just stop this environment. we'll go ahead to the command palette, say Stop Current Codespace, and I'll see you in the next one.
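those two exam commands can be sketched offline by pointing origin at a local bare repo; all the names here are illustrative stand-ins for your GitHub remote:

```shell
#!/usr/bin/env sh
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"        # stands in for github.com
git init -q -b main "$tmp/work"
cd "$tmp/work"
git remote add origin "$tmp/origin.git"
git -c user.email=d@e.co -c user.name=d commit -q --allow-empty -m "init"

git checkout -q -b my-new-branch-2          # create AND check out in one go

# -u is short for --set-upstream: push and remember the tracking branch
git push -q -u origin my-new-branch-2

git branch --show-current                   # my-new-branch-2
git rev-parse --abbrev-ref @{u}             # origin/my-new-branch-2
```

once the upstream is set, later pushes and pulls on that branch work with a bare git push or git pull.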
GitHub lets you create releases with release notes and linked assets, such as zipped sources or binaries for specific platforms. so in a GitHub repo you'll see releases, and from there you can read about the release and see all the stuff that has changed, and there
might be the source code or binaries. I'm trying to think of an example. let's say you like to play video games, and there's an emulator for, like, the PlayStation; they'll have builds here for Windows, Mac, and other platforms. or maybe you make a video game in general; you could distribute the binary here. it's anything where you're distributing the binaries or the source code, but you're making very clear
what has changed as a release. often, when I'm having issues with something, I will actually go through and re-read the releases. say, for example, React Router (have you ever used react-router-dom?): when they change versions, I'm trying to understand what compatibilities have changed and what does not work anymore. maybe I was experiencing a bug, and if they do a good job with the documentation, it will be in there. so there you
go. it is a very useful feature to let people know about changes that are happening in your repo. so before we do, let's go take a look at some other project that might have changes. I'm trying to think of something interesting,
like maybe an emulator. so I'm just typing emulator in here, just looking for one. here's a Nintendo Switch emulator, or maybe PS2; I'd rather do PS2, I feel like they might have good information about changes. like PCSX2: on the right-hand side you can see we have releases, and if we open up the releases, they do not tell you much, okay. so that's not a great example; we'll go back to another one and we'll
try another video game emulator. maybe we can just try putting in the word emulator, you know; people can write whatever kind of releases they want. this one doesn't have releases. we'll try the 3DS emulator. releases? no... well, they do, no, just tags, okay. so I guess we're just not going to get really good ones here, but I guess we could just go to Rails, because they seem to always have enough for us. I just wanted something that was written a little bit nicer so you could see a good example of a release, but here is one where, you know, they're talking
about versions. we get better information on a major version; like when Rails 7 first came out, 7.0.0 would probably be a good one. so I'm just scrolling through here. I mean, these are pretty good, but again, I just want to try to find a major release. is this 7.1? I'm going to just type in 7.0.0; might have to scroll a bit to get it. okay, well, 7.1 was pretty good, so we'll go back over to 7.1, and yeah, here it's telling you about all the libraries and things that have changed. this one's a bit better because it's showing examples of things that have changed. so you can put whatever you want in a release; it's just to help communicate what the
differences are, because a lot of times you just don't know. we did do some tagging earlier, and that's a great opportunity for us to create our own release in our own repo. so what I'm going to do is go back over to our home, find this repo that we have, go over to releases, and we're going to create ourselves our own release. and, sorry, I'm in a full snowsuit right now and I've got the heater blowing, so if you hear the heater I apologize, but that's just how it is right now in my office. anyway, you can choose a tag, which is great, and then you can attach the binary; the idea is we would download the zip and then re-upload it again. I'm not going to do that, because it's pretty straightforward. we'll go ahead and do that, and so there's our release, and that's all I really wanted to show you for releases. apparently there's a Compare button; I didn't even know that, so that might be worth a look.
GitHub Packages is for hosting and managing packages, including containers and other dependencies. the things it can host in terms of its package registry are JavaScript packages, Ruby gems, Java Maven and Gradle packages, .NET packages, and Docker images, and the last one, I think, is going to be the most common one that people utilize. they have a free tier and a paid tier, so you can start using this right away, and we will go give it a go. probably the easiest thing we could do would be to create a Docker container and then push it to GitHub Packages, so that's kind of the code we'll have to go through: we can make a simple hello-world Dockerfile, run it, make sure we build it, and GitHub Actions could be used to build and then publish (I wrote the word public, but I mean publish) packages.
in this follow-along, what I want to do is go ahead and create a GitHub package. you're going to need to start up a Codespace; I actually already have mine running, and I think by this point you probably know how to do that. I was working on the video earlier and had to close out and restart, so that's where we are now. anyway, I'm going to make a new folder in here, and this will be for GitHub packages, okay, and you're going to try to spell it right. if you hear a bit of noise in the background, it's because I'm running a little space heater near my feet so my feet don't freeze; I'm in a full snowsuit right now. okay, but anyway, what we'll do is go
into the packages directory, and we'll make a new file and call it Dockerfile, okay. in here we want FROM alpine:latest (this isn't a Docker course, I'm not teaching all of this, so just follow along with me here), and then it just echoes hello world. okay, so I'm going to go ahead and save that file.
next, over in the other part here: now that we have our Docker image built, we want to push it, so I'm going to make a new README file just so I have a little bit of room to work with. there are a few things we need to do. we need to set our username, so I'm going to say username here; I'm just going to set mine as what I am, Andrew WC Brown, since that's the account that I have here, okay. and I'm going to put export in
we'll hit enter. it actually double-pasted it, so I've got to be careful there. I imagine this is a bit hard to read, so I'm just going to bump up the font a bit. so that should be set. the next thing is I'll need a personal access token, so I'm going to go over here (we've done personal access tokens quite a bit in this course), and we'll go down to developer settings and make a new one. this one is going to... well, I guess we've got to confirm with GitHub packages. oh, I'm not registering an app, whoa, whoa, let's go back. personal access tokens, there we go. there's an old one, I'm going to delete that, and I'm going to generate a new one. we'll say GitHub packages. I want this to only be valid for a day, just in case I forget it and someone
finds the token. so, I'm not seeing packages in here, and this makes me think that we should probably use a classic token. I don't normally do this, but because I cannot find it, I'm going to use a classic token; that's what I'm going to do. I'm going to go here and generate a new token (it really wants me to make a new one) and just say GitHub packages. the scopes are really similar, very similar to the others: read packages, write packages, delete packages. I'm not sure where that is in the new fine-grained tokens; maybe it's not there, and we just have to use the classic token. we'll go ahead and generate that token. I now have this token, and I'm going to bring it on over to
here, and I want to set this as an environment variable. I'm going to say GH token, and we're going to paste that in, okay. we'll copy this, good, I'll paste it in and hit enter, so now that's set. I need to set the image name; I'm going to keep putting the GH prefix on it, GH image name, just so it's a bit easier for me to find all the variables later. hello world. we'll go ahead and do that, copy that, paste it in, hit enter. I'm going to put export in front of that just in case that didn't work, and then I'm going to do export my
it's like -7 out; it's because my shed has an oil furnace, or gas... yeah, an oil furnace, and I have to drive out to the reserve to get some fuel, and it's just really bad weather, so I'm trying to get this done so you folks can get this course as quickly as possible. so, we've exported those values, which is really good. I want to make sure they're
set, so I'm going to type in env and grep for GH, and there they are, okay. so now we need to write some code. the first thing is we need to log into Docker; Docker somehow talks to GitHub, I'm not sure exactly how that works, but we'll just go ahead and work with it here. so I'm going to say docker login ghcr.io (that's the GitHub Container Registry, I'm assuming), then -u and the GH username, and then --password-stdin, okay. that will pass the password in; that's what the pipe does. let's see if that works. we'll hit enter... it doesn't know what that flag is. huh, did I type it wrong? I did. we'll copy that, paste it, enter, and now we're logged in. so that's step number one. step number two is we need to tag our Docker
container. so we'll say docker tag, and I want my image name and then my version; this would be GH version, but I mean, we could just write it out if that's a bit simpler for everybody. so we just say hello-world 1.0.0, and on the other side it'd be ghcr.io and then our username. I could just type it in here; it's up to you whether you want to use environment variables or not. I just want it to work, so I'm going to write it out in full, even though we made environment variables up here for this particular use case, but that's okay. 1.0.0. so the idea is we're going to tag the Docker image we built as 1.0.0, mapped to here. we'll go ahead and copy that, hit enter... no such image. I mean, there is no tag called 1.0.0 on it; if we do docker images, it's called latest. so we'll change this to be latest. we could make the other tag latest as well, but I'm going to make it 1.0.0; I'm not sure why, I'm just going to do it that way. we'll go ahead and enter, and now that's tagged... tag does not locally exist? so maybe I tagged it wrong. ghcr.io, I spelled it wrong; yeah, we could have just copied this one here to get it all right. hit enter, and there, it's pushing it. okay, so pretty straightforward.
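the login, tag, and push steps can be sketched as one script. GH_USERNAME and the image names are placeholders, and the docker commands only run when a docker daemon and a real GH_TOKEN are present, so the sketch is safe to dry-run:

```shell
#!/usr/bin/env sh
set -e
GH_USERNAME="OWNER"                 # hypothetical GitHub account
GH_IMAGE_NAME="hello-world"
GH_VERSION="1.0.0"
TARGET="ghcr.io/$GH_USERNAME/$GH_IMAGE_NAME:$GH_VERSION"
echo "would push: $TARGET"

if command -v docker >/dev/null 2>&1 && [ -n "${GH_TOKEN:-}" ]; then
  # piping the PAT into --password-stdin keeps it off the command line:
  echo "$GH_TOKEN" | docker login ghcr.io -u "$GH_USERNAME" --password-stdin
  docker tag "$GH_IMAGE_NAME:latest" "$TARGET"
  docker push "$TARGET"
fi
```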
we could have done it with the variables; I'm just cleaning it up, doing a post-refactor here, and here we could use the version, so you get the idea. so that's pretty much it; let's go take a look and see where this actually is. I'm going to revoke all personal access tokens (classic), just in case. I'm going to say Andrew WC Brown... I think they're all revoked, but just in case, or not; let's do that anyway. and we'll go over here and take a look at our profile, because it might show up under packages here. I'm not exactly sure how you set public packages. oh, there it is, it's private, all right. and I'm not sure if it's specific to a repo: link this package to a repository,
so I guess if we wanted to link it, we would have had to apply a label. and I guess labels are what you think they are; they're labels, and that way we could associate it. I'm not going to do that here today; I think that's fine. we'll go ahead and save our changes, okay. and this really doesn't want to push here today, I'm not sure why... oh, is my token in here? wow, that's cool, it actually detected it. because we turned on secret scanning earlier, it would not let me do that. that's awesome; I thought I was showing that earlier.
we don't want to show the token. actually, we can't even just push the old commit; we have to amend the last one. so I'm trying to figure out if there's a way I can just quickly amend. I'm going to get rid of this change first; I'm just going to discard this change, and I'm going to type in git... actually no, that'll just amend the message, sorry, that's not going to do what we want. what we actually need to do here is... are we out of amend mode? I think we are, yeah. I'm going to run git again, and now I can go back and fix that. but it's really good that I had the secret scanner turned on. do not commit your GitHub token! we did also delete it, though, so it's less of a problem as well.
ciao! hey, this is Andrew Brown, and we are taking a look at pull requests, often abbreviated as PR; you'll see that a lot, it's very common. a pull request is a formal process to put forth changes that can be manually or automatically reviewed before they're accepted into your base branch, your main branch. so here are pull requests in GitHub, and the benefits of pull requests: collaborative review, which enhances code quality through team discussions and peer feedback; change tracking, which provides a record of code changes and related discussions; automated testing, which enables integration with tools for automated checks and tests; controlled integration, which manages safe and reviewed merging of code changes; and open-source friendliness, which simplifies contributions and collaboration on open-source projects. I want to point out that a pull request is not necessarily a Git feature but a workflow, and GitHub has built a bunch of features around pull requests. pull requests aren't unique to GitHub; it's just part of the workflow, and whatever a platform wants to build around it, it can, and that goes for other tools like Jira and Bitbucket, or GitLab and so on.
A really cool way of doing it, the other way, is using the GitHub website: you go to the pull request tab in your repo, you create a new pull request, and there it is. I'm sure you know by now how to create a pull request, because it was almost impossible not to show you that a hundred times over before we got to pull requests, but that's how you create them. One thing I want to point out is that we have to set a base, which is who we're going to merge into, and a head, which is the changes we're going to pull in. Notice it says compare; we'll talk about that in the next slide about the direction of the merge for a pull request. The idea is we have the base, which is who you want to merge into; this is usually the main branch or an environment-specific branch. It doesn't have to be main, but usually main is the base. Then we have compare: this is what will be merged into base. Compare is choosing the head reference, so notice before I called it head, and if you look very closely it says "choose the head reference", so that's what compare is. This is usually a bug or feature branch. Another thing I need to point out is that you can compare across forks. This is useful if you're trying to contribute to an open-source project, so you have the option to choose the base repository and the head repository from different repos with different owners. So there you go.
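The same base/head choice can be made from the command line with the GitHub CLI. A hedged sketch, where the branch names, title, and fork owner are placeholders:

```sh
# base = the branch you merge INTO, head = the branch with your changes
gh pr create --base main --head feature/my-fix \
  --title "Fix the thing" --body "Short description"

# comparing across forks: head can be written as owner:branch
gh pr create --base main --head some-contributor:feature/my-fix
```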
A draft pull request lets you open a pull request but mark it as a work in progress. The use cases for draft pull requests would be: indicating work in progress, so it communicates the pull request is not ready for review or merging; preventing premature merging, which ensures incomplete work is not accidentally merged; facilitating early feedback and collaboration, so people can talk about it; continuous integration testing, so maybe you just want to run tests against the code, because when you have a pull request it's automatically going to start doing that; transitioning to a ready state, so you can easily switch from a draft to ready for final review and merging; and organizing work and priorities, which helps in managing and tracking ongoing work in large projects. Draft pull requests are a feature only for GitHub organization teams; I believe you can use it in free organizations that are public, so we'll take a look and see if that's even possible. If I don't make a video, then you know that it wasn't. There are two things you need to remember for the exam: draft pull requests cannot be merged, and code owners are not automatically requested to review draft pull requests. Okay, remember those two things, because they will show up on your exam. Ciao.
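With the GitHub CLI the draft lifecycle looks roughly like this; branch names and title are placeholders:

```sh
# open as a draft (cannot be merged; code owners are not auto-requested)
gh pr create --draft --base main --head dev --title "WIP: readme tweaks"

# later, flip it to "ready for review"
gh pr ready
```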
Hey, this is Andrew Brown, and in this follow-along I want to take a look and see if we are able to open a draft pull request in the GitHub organization free account that we created. So what I'm going to do is make a new repo, because we're going to need to have one in our organization here. I don't want to make it too complicated, so I'm just going to make a new repo, we'll say fun repo. Okay, and this can be public; we're going to make it public just in case we need that, because a lot of times public allows us to have paid functionality for free. So what I want to do is take a look here and create a new pull request and see if we have the option. We don't have any code, so we'll need to create ourselves a new branch. I'm going to do that here in the UI, and we'll just say dev, and we'll create that branch. Then from here I want to switch over to dev, edit the readme, and just put in some exclamation marks. We're going to commit that change, looks good to me, and then we're going to make our way over to pull requests and make a new pull request, and what I want to know is if I can make a draft pull request. So we'll say create new pull request, drop this down, and there it is. Okay, so that's all it takes to create a draft pull request. If there are code owners assigned, they can't touch it, and notice I cannot merge it, even though I'm clicking.
Let's talk about linking activity within a pull request. You can link issues to a pull request so that the state of the pull request will automatically close the issue. We actually already did this before, when we looked at issues, and we're looking at it again. The idea is that you go to the Development section in the sidebar and you choose the issue, or sorry, the pull request, and then they're linked, and that will close it. The other way is via the supported keywords: if you put any of those words in the description of the pull request, then that will close the linked issue. It says the pull request must be on the default branch for it to work. I don't know if that's true; I mean, it worked for me when we did it, but I guess we were on the default branch anyway. I don't think there's any point in us showing this again, because we've done it so many times, but hopefully you know that you can do that, and the way it happens is you're putting it in the body, the description, of the pull request.
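The supported closing keywords look like this inside a pull request description; the issue numbers and repo name here are made up:

```text
Fixes #12
Closes #34
Resolves my-org/other-repo#56
```

GitHub recognizes close/closes/closed, fix/fixes/fixed, and resolve/resolves/resolved, and the linked issues are closed when the pull request is merged into the default branch.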
Let's go over the different statuses a pull request can be in. Surprisingly this isn't an exam question, but it's something you should know, so let's go through them all. The first is open, the default status when a pull request is created; it's open for discussion and review. Draft indicates the pull request is a work in progress and not ready for review. Closed means the pull request has been closed without being merged; this status is used when the proposed changes are no longer needed or if the branch has been rejected. Merged means the pull request's changes have been merged into the target branch; this status indicates a successful conclusion of the pull request process. Changes requested is used during the review process when a reviewer requests changes before the pull request can be merged. Review required indicates that the pull request requires a review before it can be merged; this status is common in repos where reviews are a mandatory part of the workflow. Approved means the pull request has been reviewed and approved for merging by the required number of reviewers. Conflict indicates that there are conflicts between the pull request branch and the target branch that need to be resolved before merging. Ready for review: a pull request initially marked as draft can be changed to this status once it's ready for review. So that gives you an idea of the functionality of it.
So, the CODEOWNERS file is a GitHub-specific file that defines individuals or teams that are responsible for specific code in a repo. The idea is that it has a syntax similar to .gitignore, and when a pull request is opened that modifies any files matching a pattern in the CODEOWNERS file, GitHub automatically requests a review from the specified code owner. The CODEOWNERS file goes in either the project root, the .github folder, or the docs directory. I think this might be a paid feature; it could possibly be available for free if we're talking about free organizations that are public, we'll have to take a look. But yeah, it is a very useful feature.
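A minimal CODEOWNERS file might look like this; the usernames, team name, and paths are made up for illustration:

```text
# default owner for everything in the repo
*            @omenking

# docs reviewed by the docs team
/docs/       @my-org/docs-team

# all TypeScript files
*.ts         @andrew-brown
```

As in .gitignore, later rules take precedence over earlier ones when a file matches multiple patterns.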
When you merge a pull request there are a few options, so we have this dropdown where we can either create a merge commit, which will bring all the commits over into the repo; we have squash and merge, which will add a single commit; and we have rebase and merge, where the commits will be added and then rebased. The third one is the more complex one, so if you're not familiar with rebase you're probably going to be looking towards squash, which is a good one. The use case depends on your team's workflow; they may prefer only a single commit added, and that's why you would want squash or rebase, but you do have a lot of options there for how you want to bring code into your base branch.
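What those merge buttons do maps onto plain git. Here's a minimal, self-contained sketch of squash-and-merge in a throwaway repo, showing how several branch commits land on main as one commit:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -qb main
echo base > f.txt; git add .; git commit -qm "initial"

# a feature branch with three commits
git checkout -qb dev
for i in 1 2 3; do echo "$i" >> f.txt; git commit -qam "change $i"; done

git checkout -q main
# squash-and-merge: stages all of dev's changes as ONE commit on main
git merge --squash dev
git commit -qm "squash dev (changes 1-3)"

git log --oneline
```

After this, `git log` on main shows just the initial commit plus the single squash commit, while the dev branch still carries its full history.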
You can also require multiple reviewers before code gets into a repo, and a lot of times you can say it can't be the same person that submitted the code. It's not uncommon to have four or five people that can be required reviewers, and two or three of them have to look it over before it gets accepted. So here I would be assigning myself and saying, okay, Omenking, this user has to review the code, and then as that user I would have to approve the changes, and from there it would change to say Omenking, or Andrew Brown, approved the changes.
Hey, this is Andrew Brown. In this video I want to do a little bit more with pull requests in terms of code reviews, and maybe we can try to get that CODEOWNERS file to work, things we didn't get a chance to really get hands-on with, as we have made pull requests quite a bit, but not from a proper reviewing perspective. So what I want to do is make my way over to our pretend organization, and we're going to go ahead and create ourselves a new repo, unless we already have one, like the fun repo. Does that still exist? It does, so what we'll do is go ahead and use this one. And what I want to do before anything else is make sure that the collaborator is added to this repo, so I'm going to go ahead and add Omenking. Okay, and there I am, and I'm going to say this person can maintain this repo. All right, and now they're part of this repo. The reason I was able to add them very quickly was because they were already part of our organization, so I didn't have to send an invite and confirm; they're not external.
The next thing I want to do is go through and create a pull request. It looks like we already have one here, so if we already have one I'm not going to make a new one; if you have to, make a new one. But here it says this pull request, oh, this is that draft one, right, so we can't do anything with this one right now. I'm just going to close that pull request, go over to pull requests, and create a new one. Okay, and so we'll create that one, and I want to say that this has to be reviewed by Omenking. As soon as I do that, notice we cannot proceed forward until that review is done. There's another banner here that says ready for review, so this pull request is still a work in progress, which suggests it's a draft. I guess I didn't notice, but it probably was still set to draft, so I'll mark it ready for review. Okay, so now Omenking should know that they need to approve it for this to work. Another thing we could do is require other things for approval, so we could go ahead and add a rule, and rulesets allow us to do that. There might be some things in here we could set, like require status checks to pass, but we don't have any status checks, and that's the reason why we can't add that there. Having status checks on a pull request means having some kind of automation in here, but I suppose we'd have to have GitHub Actions or something else, so maybe we'll leave that for when we get to GitHub Actions and integrate it there.
Now I'm in this repo as the reviewer, saying, oh, I need to add a review, so I can go here, and let's say I don't like this change. I'm going to say request changes: "I don't like it, make it better." And if I do this, okay, now it's showing in red, hey, you've got to do something; I like that it's not allowed to go through unless I've accepted it. So maybe there's a setting we can change; I thought it would have been that ruleset, but maybe there's something else we can do to protect it. There's "always suggest updating the pull request branch", but no option for what I want. I might not be able to do it because I'm not the admin, so I'm going to go back and switch over to the admin account. I'll go back to settings, and what I'm looking for is the option that says the pull request can't be merged until reviews are approved. Maybe it's under rules; it could be branch protection as well. I really thought there would have been a rule for this, and the reason I think that is because with Jira and Bitbucket they'll let you do that. So maybe I can't set that, I'm not sure. Restrict creation... yeah, I'm really surprised I can't find it. Maybe it's there, but I can't find it, and that's totally fine.
Let's go ahead and pretend we're going to fix this issue. So we go here and it says "I don't like it, make it better." So how can we submit that fix? The way we're going to do that is to actually go to the branch and change something, so we'll go here, make sure we're on the correct branch, and in here I'm just going to change the file again and commit the change. So now if we go back to the pull request, we should be able to update it, and I can see the file's changed. Normally what happens, and again I'm thinking of Jira and Atlassian, is that you'd open another pull request and it would show up in the same one, the same spot, so I'm not sure if we can do that. If I go to dev here, because it's the same branch, right, if I do this it says view pull request. I keep expecting it to act like another piece of software, and it's not doing what I think it's doing. Review changes... I mean, I could review my own, but that makes no sense, and there's review in codespace. So I guess the way it's going to work, I really thought there'd be a back and forth there, but I guess there's not, is that I would just go here as the reviewer and approve it: looks great. Okay, so yeah, I guess there's less process there than I expected.
So the next question is: can we make a CODEOWNERS file and see how that works? Because I think that would be kind of unique to do. I believe that file is called CODEOWNERS, and what I'm going to do is put an asterisk in here and just say omenking, and the idea is that any time anything changes it should assign it to Omenking; that's what I'm thinking, anyway. So what we'll do is go ahead and commit these changes. Then, from that other person, they send a pull request, and I'm hoping it's going to auto-assign. Now, this might not work, because it might be a paid-only feature, I don't know, but we're going to try anyway. So I'm going to go to the dev branch, and before I do anything I'm actually going to merge the other way. I'm going to do that by opening up codespaces on dev and just merging main back into dev, because I'm not going to make a pull request that goes in the opposite direction; I'm not going to do that, that doesn't make sense, pull requests are supposed to go into your main branch, right? So I'm just giving this a moment here, and what I want to do is get everything up to date. I'm not convinced, so I'm going to double-check; it's not showing our tree here in codespaces. I'll go back over to here... do I have this now? And it shows that main is ahead of dev, so it's totally lying to me; they're definitely not in sync, I knew this was not the case. Oh, you know what, I merged when I was on main, so that's probably why. So: git checkout dev, git merge main. It's always great to have this graph open so we can see. I'm going to sync those changes, and just in case that didn't work, a git push. Great, that is in good shape. I want to stop this environment... good, we're stopping it. Okay, I'll go ahead and close that tab out. So now I have confidence that the CODEOWNERS file is in both branches; I just had a feeling I needed to do that for some reason. I'm going to go ahead and just change this file, and we'll create a new pull request, and I want to see whether it auto-assigns. We go here, dev, create a pull request, and look, it auto-assigned it. It says awaiting review from Omenking; Omenking is a code owner, and Omenking will be requested when this pull request is created. So that means the CODEOWNERS file is working; I don't even need to create this pull request to know that, and I think that's sufficient. The only last thing I would say is that people can comment directly on specific lines; just remember that, because they might ask you about it on the exam. And that's about it, we'll see you in the next one.
In this follow-along I want to show you some of the more advanced options for pull requests, in terms of the merge options and why you'd use one over the other. So what we're going to do is go into our GitHub examples repo, and in this case we do need to open up some kind of editor. I think we could get away with using github.dev, hitting period on my keyboard, and that's going to open up the editor. We're still going to need the repo open, so I'm going to go here, wait for this to load, and go to the repo. I'm going to want to go over here for a moment and get rid of this pull request so we're not getting mixed up, and I really want to be on dev, and I actually don't even know if I'm on that, so I'm just going to close this stuff out; I might have mucked this up. And we'll close this out, because I need to be on examples... actually, I'm going to do this in our other repo, sorry, I know I'm all over the place, but I want to go back to our home and go to our cool repo. So I'm going to drop down our option here, whoops, and I'm not sure why they never show those on the right-hand side, they really should. We'll go here, and I'm going to open pull requests in a new tab, and hit period. And actually, you know what, we can't use github.dev, sorry; we're going to have to open up codespaces, because what I need is that visualization tool so we can see what we're doing. So I'm going to go ahead and open up GitHub Codespaces, and we're going to have to wait a little while for this to start up, unless it's already running, that'd be really nice. I'll be back here in just a second
when it loads. Okay, all right, so now that is ready. We want to make this a little bit easier to look at, so I'm going to switch my theme as per usual, we'll go to a darker theme, there we go. The other thing I want to do here is switch over to the dev branch, so we say git checkout dev. Okay, and another thing we're going to need is a tree, so I'm going to go to extensions here and search Git Graph, because we need to make it really clear what's going on, so I'm going to go here and install it. Come on, Git Graph, install... there we go, we want you here, Git Graph. So the idea is that I want to put a bunch of commits here, and then we're going to merge them and see what it looks like when we do that, and then we'll try a different merge option and see how much cleaner it is doing it another way. So what I want to do is modify this file, save it, and run git commit -am "change 1", with -a to grab all of the changes in one go, and then git push. Okay, and then we're going to do this again: save, and I'm going to chain the commands with a semicolon so I can get through this a lot quicker. So we'll do that again, and we'll do that again. Okay, you're getting the idea of what's going on here; we're kind of crazy, people are not going to be happy with all of our commits. We've got a bunch of changes; I guess we should have incremented the messages, it just says "1" every time. We could amend those, there's a way to rename them, but I don't really want to deal with that, so let's just assume we didn't name them all dumb like that, that we made 1, 2, 3, 4, 5, 6, 7; for the next round we'll name it 2. So the question is: what will happen when we merge this in? Will we end up with one commit over here, or all of these over here? And that's what we're going to find out. So we'll create a pull request, we'll make a new pull request, we will merge dev into main, we'll create that pull request. I'm going to switch back to the normal view, create that pull request, ignore our reviewers, just review it ourselves, and notice we're on create a merge commit. So we do this, and what do we get? We go back over to Git Graph, and I'm going to run git fetch, because
the commits are up here now. Okay, so that is there: it didn't squash them all into one, we still have all of the history. Okay, so just to make this clear, I'm going to do git checkout main and refresh, okay, and I'll just do a git push for dev, sorry. But what I want to show you is that all of this history is still here, it's not gone, right, it's all still here. So hopefully that is clear; I know it's not the best visual, but I'm hoping that makes sense. What I want to do now is a squash, okay, for round two. I'm going to do git checkout -b dev2, and then, because we want to push this branch, git push origin dev2, so now it's being remotely tracked. And I want to do something really similar to what we were doing before, so I'm going to go here, and for each change I'm going to remove a line, with the commit message "2" this time, and save, and repeat that a few times. All right, let's go back to our Git Graph; we have a bunch of this stuff here. Let's go over here and make a new pull request, and we're going to say create pull request, and this time, instead of merge pull request, we're going to say squash and merge. Squash and merge, it's bringing all those commits in, and now if we go back here, we don't see anything yet, so git fetch, and we'll refresh. And notice now it doesn't show a merge line coming into here; what it's done is taken all of this stuff, squashed it into a new commit, and put it over here. Okay, so it really depends on what you want: do you want to keep all this history, or do you want to have it squashed into a completely separate commit over here? It's really up to you how you want to do it. I think this looks a lot cleaner, but the other way is totally an option as well; it's really what you want to do. And I will see you in the next one. Oh, before I do that, I'm just going to stop my codespace.
Pull request templates are similar to issue templates: they will populate the pull request text area with the specified markdown template you want to use. Technically there is a folder called PULL_REQUEST_TEMPLATE that you can use, but I found that you really couldn't leverage it, because there was no UI to select from it, and the only way you could do that was via a URL you generated with a query string. So I would suggest that you use a single pull_request_template.md file and not use the folder. That's where it kind of feels like the older version of issue templates; pull request templates still feel like that old kind of version, if that makes sense. So there you go.
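A minimal pull_request_template.md, placed in the repo root, the .github folder, or docs; the section names here are just one common convention:

```text
## What changed

## Why

## How to test

- [ ] Unit tests pass
- [ ] Docs updated
```

Whatever is in this file pre-fills the description box every time someone opens a new pull request.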
The next topic we'll be covering is authentication methods. Credentials are specific to a user's identity, for example their individual username and password, PIN, or biometric information; every user, including IT administrators, staff, and students, has credentials. An authentication method is the way a user proves their identity to a system, for example a user inputs their credentials on a sign-in screen or via the Microsoft Authenticator app in which they have set up their account. Authentication methods can also be broken down into categories or types. Authentication methods vary widely, from traditional to advanced. Common types include: passwords and PINs, common but potentially risky for security; picture passwords and pattern locks, which offer memorability and simplicity; biometric authentication, facial, fingerprint, retinal, which provides secure, unique user identification; and passwordless authentication, which emphasizes security and convenience by eliminating traditional passwords, removing the hassle of memorization and mitigating threats like phishing. Some of Microsoft's methods include Windows Hello for Business, which uses biometrics or a PIN for secure sign-ins and SSO; the Microsoft Authenticator app, which enables phone verification with notifications; and security keys. Self-service password reset (SSPR) with Microsoft Entra ID lets users change their own passwords without help desk assistance, cutting down on support costs and improving security and efficiency. Some of the key features include: self-service, users can change or reset their passwords without administrator or help desk assistance; security enhancement, SSPR improves organizational security by allowing users to promptly address account lockouts or compromises; and compliance with password policies, SSPR enforces Microsoft Entra password policies regarding complexity, length, expiration, and character use, ensuring standardized security measures across the organization.
Multi-factor authentication (MFA) requires more than one piece of evidence to confirm your identity when logging into an account, like a code from your phone in addition to your password. Some MFA methods include: SMS text message, a code sent to the user's phone; phone voice call, answering a call to confirm identity; the Microsoft Authenticator app, a code or biometric verification through the app; and OATH hardware token, using a physical token for authentication. Set up OAuth for external services: OAuth is a protocol for authorization that lets users give third-party services like GitHub or Jenkins permission to use their information without sharing their login details; it ensures secure connections, making authentication and permissions straightforward. Use personal access tokens: personal access tokens provide a way for users to create special tokens to access DevOps tools; they are especially handy for command-line interactions or scripts needing direct access to these services.
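Using a personal access token from the command line looks roughly like this; the organization, project, repo names, and the token value are all placeholders:

```sh
# clone an Azure Repos repository non-interactively with a PAT
git clone https://anything:MY_PAT@dev.azure.com/my-org/my-project/_git/my-repo

# or feed the PAT to the Azure DevOps CLI extension via its environment variable
export AZURE_DEVOPS_EXT_PAT=MY_PAT
az devops project list --organization https://dev.azure.com/my-org
```

Treat the token like a password: scope it narrowly, give it an expiry, and keep it out of committed files.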
Apply role-based access control: role-based access control (RBAC) sets up detailed access rules based on users' roles and what they're allowed to do. It makes sure people have just the right access they need for their work, keeping sensitive information secure. So that's an overview of the authentication and credential strategies. The next topic we'll be covering is Git LFS, which stands for Git
Large File Storage, along with git-fat. Developers sometimes struggle with handling big files in a Git repository, because they can make the repository slower and use up too much space. There are two tools that help with this: Git LFS and git-fat. Git LFS is an open-source extension for Git that helps handle large files more efficiently. It does this by using small text pointers in your Git repository to represent the large files, while keeping the real file content stored elsewhere; this method keeps your repository from getting too large and slowing down. To get started with Git LFS, you'll need to follow these steps. Install Git LFS: go to the Git LFS website, download and install the version for your operating system, then use the command git lfs install to set it up on your system. Set up Git LFS in your repository: track large files by deciding which file types to manage as large files, for example git lfs track "*.mp4" for MP4 files; add the .gitattributes file to your repo with git add .gitattributes; then commit and push, saving the changes with git commit -m "configure git lfs" and updating the remote repository using git push. Managing large files: run git add . or git add <filename> to stage large files, then use git commit -m "add large file" and git push to commit and send the files to the remote repo.
git-fat is another tool for managing large files in Git repositories. It's a Python script that keeps large files separate from your Git repository while maintaining references to those files within the repository. Let's take a look at how to get started with git-fat. Setting up git-fat: make sure Python is installed on your system, then install git-fat using pip with pip install git-fat. Initializing git-fat in your repository: run git fat init in the root of your repository; track files by defining large file types in a .gitfat file, for example to track MP4 files you might add *.mp4 to the file; then add the .gitfat file to your repo with git add .gitfat and commit it using git commit -m "initialize git fat and track large files". Managing large files with git-fat: use git fat add <file> to stage large files for git-fat, commit with git commit -m "add large file", and upload with git push. To pull large files on a different machine after cloning, run git fat pull to download them. So that's an overview of how to get started with git-fat.
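Likewise, the git-fat steps collapse into a short sheet; this is a sketch mirroring the lesson's steps, assuming Python and git-fat are installed, with a placeholder file name:

```sh
pip install git-fat
git fat init                      # initialize git-fat in the repo root
echo '*.mp4' >> .gitfat           # track MP4 files, per the steps above
git add .gitfat
git commit -m "initialize git fat and track large files"

git fat add big-video.mp4
git commit -m "add large file"
git push
git fat pull                      # on another machine, after cloning
```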
The next tool we'll be covering is Git Scalar, which helps with large repositories, addressing the slowness and space issues associated with downloading a repository's entire history and files. Git Scalar solves this by allowing you to download only the files you need, and it works well with Git LFS. To use Git Scalar, you'll first need to set up your repository with Git LFS and then enable it for the specific file paths you're interested in. Configuring for a specific path: install Git LFS by following the steps provided in its official documentation; initialize Git LFS in your repository by running git lfs install in your repository's directory; then create and configure a .gitattributes file in the root directory of your repository to enable it for files under the my/large/files directory, adding the following line to .gitattributes: my/large/files/* filter=lfs diff=lfs merge=lfs -text. This configuration tells Git LFS to manage files in my/large/files for efficient handling. Commit and push the .gitattributes file: commit it to your repository with git add .gitattributes, then git commit -m "configure git lfs and git scalar for specific paths", and push the changes to your remote repository with git push. So that's an overview.
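The path-scoped rule described above, as a .gitattributes fragment; the my/large/files path is the example from the lesson:

```text
# manage only files under this directory with Git LFS
my/large/files/* filter=lfs diff=lfs merge=lfs -text
```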
Next is cross-repository sharing with Git. Sharing code across different repositories is common for reusing code, modularization, or separating components of an application, and Git facilitates this with the following methods. Git submodules let you integrate a separate Git repository within another repository's directory structure. It's especially handy for incorporating a particular version of an external library or for sharing common libraries. The command is git submodule add <repository-url> <path>, where the repository URL is the Git URL of the repository you want to add, and the path is the directory within your main project where the submodule will be placed. Here's an example: git submodule add https://github.com/... This allows referencing specific commits and keeps the main project separate from external dependencies. Note that using Git submodules requires managing updates individually and maintaining consistency across projects that share them. Overall, by using submodules you can efficiently manage cross-repository code sharing in Git while maintaining clear boundaries between different projects or components.
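A runnable sketch of the submodule flow using two throwaway local repos; the protocol.file.allow override is only needed because recent Git versions block local-path submodules by default:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"

# a "library" repo to share
git init -q lib
(cd lib && git config user.email d@e.c && git config user.name D \
  && echo "lib code" > lib.txt && git add . && git commit -qm "lib v1")

# the main project, embedding lib as a submodule under vendor/lib
git init -q app
cd app
git config user.email d@e.c
git config user.name D
git -c protocol.file.allow=always submodule add ../lib vendor/lib
git commit -qm "add lib submodule"

cat .gitmodules   # records the submodule's URL and path
```

In a real project the `../lib` path would be the library's remote URL, and collaborators would run `git submodule update --init` after cloning.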
boundaries between different projects or components the next tool we'll be covering is get subtree git subtree is a
tool that helps you include code from one repository into a specific folder of another repository it's a simpler
alternative to subm modules which is another way to incorporate external code but can be a bit complex to handle with
get subtree you can both bring an external code and send updates back to the original code Source if needed to
add a subtree you use a command that looks like this get sub tree add-- prefix equals folder name repository URL
commit or Branch D- prefix equals folder name is where you specify the folder in your main project where you want to add
the external code Repository URL is the web address of the external code you're adding and commit or branch is the
specific version of the external code you ought to use which can be a commit ID or Branch name get subt Tre
streamlines project workflow by integrating external code directly ensuring it's immediately accessible
upon clothing without additional steps overall get subtree makes it easy to work with code share between different
projects while it simplifies some of the issues found with sub modules updating the code from the original repository
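As a self-contained sketch of that command (assuming the `git subtree` command is installed, as it is in most Git distributions; names and paths are illustrative, with a local repository standing in for the remote URL):

```shell
set -e
work=/tmp/demo-subtree; rm -rf "$work"; mkdir -p "$work"

# Local stand-in for the external repository whose code we want to vendor in
git init -q -b main "$work/lib"
echo "shared helper" > "$work/lib/helper.txt"
git -C "$work/lib" add helper.txt
git -C "$work/lib" -c user.email=ci@example.com -c user.name=ci commit -qm "library code"

# Main project
git init -q -b main "$work/app"
git -C "$work/app" -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "initial commit"

# git subtree add --prefix=<folder-name> <repository-URL> <commit-or-branch>
cd "$work/app"
git -c user.email=ci@example.com -c user.name=ci \
    subtree add --prefix=vendor/lib "$work/lib" main --squash

# The external files are now plain files inside the main repo:
# unlike submodules, no extra init step is needed after cloning
ls vendor/lib
```

The `--squash` flag collapses the external history into a single commit, which keeps the main project's log tidy.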
The next topic we'll be covering is workflow hooks. Workflow hooks are essential tools in the Microsoft DevOps ecosystem, designed to automate and refine development workflows, leading to better efficiency and productivity. Workflow hooks act as triggers for executing actions or scripts at specific points in a DevOps workflow, which is crucial for maintaining code quality, automated testing, deployment, and integrating external services into the process. In the context of build and release cycles, workflow hooks are particularly valuable: they enable developers to automate tasks like unit testing, documentation compilation, or deployment to testing environments with each new build or release. In streamlining these processes, Azure DevOps stands out, offering comprehensive tools and services for managing the DevOps life cycle, including implementing workflow hooks through service hooks. These service hooks allow you to connect your DevOps pipeline with external services, or to initiate custom actions in response to various events such as new build completions, work item updates, or pull requests. Besides Azure DevOps, Microsoft offers other tools and services for implementing workflow hooks, including GitHub Actions, Azure Logic Apps, and Azure Functions. The key to leveraging workflow hooks effectively is to identify the crucial events and actions within your workflow and use the appropriate tools for implementation.

Here's a simplified step-by-step guide to creating a service hook in Azure DevOps for automating actions such as notifications after a successful build:
1. Access project settings: open your project in Azure DevOps and navigate to Project settings at the bottom of the project sidebar.
2. Open service hooks: in the General section, find and click on Service hooks.
3. Create a subscription: initiate the creation process by clicking the + Create subscription button.
4. Select a notification service: pick the service for notifications, like Microsoft Teams or Slack, and set the event trigger to Build completed.
5. Set trigger filters: customize the trigger filters by setting the build status to Succeeded.
6. Configure action details: specify the notification message and destination, such as the recipient channel in Slack or an email address.
7. Finalize and test: save the service hook with the Finish button and conduct a test to confirm it operates as expected after a successful build.
So that's an overview of workflow hooks.
The next topic we'll be covering is the different types of branch strategies, starting with trunk-based development (TBD). TBD employs a single central branch, known as the trunk or master, focusing on frequent small updates for continuous integration and stability. Steps to implement TBD:
1. Establish the trunk: define a single branch, also known as the trunk, as the central code path.
2. Direct commits: encourage team members to commit small changes directly to the trunk frequently.
3. Continuous integration: perform builds and tests on the trunk often to catch issues early.
4. Automate deployment: set up automatic deployment to streamline updates.
Here's an example of how you can create a trunk branch using Git: `git branch trunk`, then `git checkout trunk`. Another branch strategy you can use is feature branches, which enable
developers to work independently on new features or fixes, keeping changes separate from the main code until they're ready to merge. This approach allows for focused development and testing of specific functionalities without disruption. Steps to use feature branches:
1. Create a feature branch: initiate a new branch for each feature or fix.
2. Naming conventions: assign descriptive names reflecting each branch's purpose.
3. Stay updated: merge updates from the main branch periodically.
4. Thorough testing: conduct extensive tests before merging back to the main branch.
Here's an example of how you can create and switch to a feature branch using Git: `git branch feature/new-feature`, then `git checkout feature/new-feature`. The last branch strategy we'll
be covering is release branches, which help prepare and stabilize a codebase for a new release, focusing on bug fixes and final adjustments. They are created from the main branch, enabling ongoing development while ensuring the upcoming release is thoroughly tested and polished. Steps for managing release branches:
1. Create a release branch: start a branch from the main branch for new releases.
2. Focused adjustments: make all necessary tweaks and bug fixes on this branch.
3. Ensure stability: test thoroughly and maintain continuous integration.
4. Final merge: merge the release branch back into the main branch once ready.
Here's an example of how you can create a release branch using Git:
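As a self-contained sketch of those steps, with an illustrative branch name:

```shell
set -e
work=/tmp/demo-release; rm -rf "$work"; mkdir -p "$work"
git init -q -b main "$work/repo"
cd "$work/repo"
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "ongoing development on main"

# Cut a release branch from main; "release/1.0" is an illustrative name
git checkout -q -b release/1.0 main

# Bug fixes and final adjustments now happen on this branch, while main
# stays open for ongoing development; once stable, merge release/1.0 back
git branch --show-current
```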
These are the key branching strategies that will be covered on the Azure DevOps exam.

The next topic is branch policies, which are critical for maintaining code quality and ensuring that changes meet certain standards before they are merged. Branch policies in Microsoft DevOps solutions are rules that govern how code is contributed to a repository. They enforce certain conditions that must be met for pull requests to be merged, ensuring that code is reviewed, tested, and linked to relevant project tasks. Key branch policies to implement:
- Require approving reviews: mandate that each pull request receives at least one approving review from designated reviewers before it can be completed. This guarantees that all code changes are scrutinized by another developer, promoting better code quality and reducing the risk of errors.
- Link work items: ensure that every pull request is associated with a corresponding work item. This linkage provides traceability and accountability, making it easier to track why changes were made and ensuring they align with the project's goals.
- Build validation: configure this policy to require that changes in a pull request successfully pass automated builds and tests. This helps to identify any compilation issues or test failures early, preventing problematic code from reaching the production environment.
Additional branch policies for an enhanced workflow:
- Enforce minimum review time: set a minimum period that pull requests must remain open before they can be merged. This policy prevents rushed reviews and ensures thorough evaluation.
- Require task completion: mandate the completion of specific tasks before merging, such as addressing all code comments or updating necessary documentation. This ensures that all critical aspects are handled before integration.
- Automate code formatting and style checks: implement tools like linters to automatically enforce coding standards. This minimizes manual review effort and maintains consistent code quality.
Benefits of using branch policies:
- Improved code quality: automated checks and enforced review standards significantly reduce the risk of introducing bugs and errors.
- Better team collaboration: requiring reviews and linking work items promotes effective collaboration and keeps team members aligned with project goals.
- Efficient workflow management: automating parts of the review process accelerates the development cycle while upholding high quality standards.

The next topic we'll be covering is branch protections in Azure DevOps. Branch protections provide an additional layer of security by
enforcing rules on branch manipulation, preventing accidental modifications or unauthorized changes. Here are some of the key branch protections to implement:
- Require a minimum number of reviewers: set branches to require a specific number of reviewers for all changes. This ensures multiple evaluations of the code, which facilitates collaboration and reduces the risk of defects.
- Restrict who can push to the branch: limit direct push access to protected branches, allowing only authorized individuals or teams to make changes. This control helps prevent unauthorized modifications and maintains code integrity.
- Enforce merge checks: specify criteria that must be met before merging a pull request. These include build validation, work item linking, and branch permissions compliance, to ensure only approved changes merge.
Here's how you can structure a typical pull request workflow using both branch policies and protections:
1. Create a feature branch: developers branch off the main branch to work on new features or fixes.
2. Implement changes and create a pull request: developers commit changes to their branch and open a pull request to merge them into the main branch.
3. Assign reviewers and await feedback: reviewers inspect the code, provide feedback, and approve. Branch policies ensure pull requests get the required approvals to proceed.
4. Address feedback and iterate: developers respond to feedback, update their code, and trigger the build validation process; reviewers reassess the updated changes.
5. Complete the pull request: after securing approvals and passing merge checks like work item linkage and build validation, the pull request is completed and the changes are merged.
So that's an overview of branch policies and protections.
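The pull request itself is created and completed in Azure DevOps, but the underlying branch-and-merge flow can be sketched locally (branch, file, and message names are illustrative):

```shell
set -e
work=/tmp/demo-pr; rm -rf "$work"; mkdir -p "$work"
git init -q -b main "$work/repo"
cd "$work/repo"
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "initial commit"

# 1. Create a feature branch off main
git checkout -q -b feature/new-login main

# 2. Implement changes and commit them to the feature branch
echo "login form" > login.txt
git add login.txt
git -c user.email=dev@example.com -c user.name=dev commit -qm "add login form"

# 3-5. In Azure DevOps you would now open a pull request, and branch
# policies (required reviewers, build validation) gate the merge.
# Locally, the completed pull request corresponds to a merge into main:
git checkout -q main
git -c user.email=dev@example.com -c user.name=dev \
    merge -q --no-ff feature/new-login -m "Merged PR: add login form"
git log --oneline
```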
Hey, this is Andrew Brown from ExamPro, and in this section we'll be covering Azure Pipelines. Azure Pipelines is a cloud service that automates the CI/CD pipeline for software development, offering support for multiple languages, platforms, and cloud environments, and integrating with a wide range of tools and services. Here are some of the key features of Azure Pipelines:
- Automation for CI/CD: Azure Pipelines provides a fully featured continuous integration and continuous delivery service for applications.
- Platform and language agnostic: supports any language, platform, and cloud, integrating with Azure, AWS, and GCP.
- Extensibility: offers integration with popular tools and services in the software development ecosystem.
- Supports open-source and private projects: available for projects hosted on GitHub and other platforms.
- Rich integration: integrates with GitHub Checks and offers extensive reporting capabilities.
- Parallel jobs and environments: allows running multiple jobs in parallel and deploying to multiple environments, including Kubernetes, VMs, and Azure services.
Next we'll take a look at defining your pipeline. Azure Pipelines uses YAML syntax to define build, test, and deployment tasks, and the documentation guides you through the process of setting up your first pipeline, including initiating builds, packaging applications, and deploying. Some of the key concepts include: pipelines, a complete CI/CD pipeline defined by stages, jobs, steps, and tasks; stages, a way to organize jobs, typically used to separate build, test, and deploy processes; jobs and steps, where jobs group steps, which are individual tasks like scripts or Azure Pipelines tasks; and tasks, prepackaged scripts that perform common operations. Azure Pipelines works with any language, including .NET, Java, JavaScript, Node.js, Python, PHP, Ruby, C, C++, and more.
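The concepts above (pipelines, stages, jobs, steps, tasks) map directly onto the YAML schema. As an illustrative sketch (stage, job, and step names here are made up, not from the course), a minimal `azure-pipelines.yml` with separate build and deploy stages might look like this:

```yaml
trigger:
  branches:
    include: [ main ]    # CI: run the pipeline on every commit to main

stages:
- stage: Build                      # stages group jobs
  jobs:
  - job: BuildAndTest               # jobs group steps
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "compile and run unit tests"   # a step running a script
      displayName: 'Build and test'

- stage: Deploy
  dependsOn: Build                  # runs only after the Build stage succeeds
  jobs:
  - job: DeployToStaging
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "deploy to staging"
      displayName: 'Deploy'
```

Real pipelines would replace the `script` steps with prepackaged tasks for compiling, testing, and deploying.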
Framework and platform support: supports Windows, Linux, and macOS, and builds can deploy to various platforms, including Azure Kubernetes Service, VMs, and on-premises servers. Extensibility: a rich marketplace of extensions is available to extend the functionality of Azure Pipelines, and developers can create custom tasks to meet unique requirements. Pricing: free tiers are available, offering free CI/CD minutes to projects, with additional minutes available for purchase; pricing varies based on parallel job needs and cloud providers. So that's an overview of Azure
Pipelines. The next topic we'll be covering is GitHub repos with Azure Pipelines. Integrating GitHub repositories with Azure Pipelines can significantly enhance the automation of build and release processes within a DevOps workflow. This integration facilitates continuous integration and delivery, promoting faster and more reliable application deployments. Here's how to structure the setup process:
Step 1 — create a new project in Azure DevOps: click the New project button, provide the necessary details for your project, such as name and description, and then click on the Create button to finalize the project creation.
Step 2 — connect your GitHub repository to Azure Pipelines: in your new Azure DevOps project, navigate to the Pipelines section and click on the New pipeline button. Choose GitHub as the source for your pipeline; you will need to authenticate and authorize Azure Pipelines to interact with your GitHub account. Select the specific branch or repository to build and deploy, and customize the pipeline settings based on your application's requirements. After configuring, click on the Save and run button; this action saves your pipeline configuration and triggers the initial build.
Step 3 — build and deploy your application: Azure Pipelines will execute the build of your application according to the directives specified in your pipeline configuration file, which is usually a YAML file. This may include tasks like compiling code, running tests, and packaging artifacts. Post-build, configure the pipeline to deploy your application to various environments, such as staging or production; this can involve deploying to services like Azure App Service or Azure Kubernetes Service. Utilize features like environment variables, secrets, and approvals to tailor and secure your deployments.
Key benefits of this integration:
- Continuous integration: ensures that any changes in the connected GitHub repository trigger automatic builds, keeping your application updated and validated after every commit.
- Code visibility: enhances traceability by linking GitHub pull requests and commits directly to their respective build and release pipelines.
- Artifact management: facilitates management and storage of build artifacts in Azure Pipelines or with external services such as Azure Artifacts and Docker registries.
- Continuous delivery: automates the deployment process across different environments, minimizing manual intervention and promoting consistent releases.
- Release approvals: implements controls and checks through configurable approvals before promoting builds to production environments.
So that's an overview of GitHub repos with Azure Pipelines. The next thing we'll be covering is configuring the permissions
in a source control repo. Managing access and security through permissions is critical in a DevOps environment to ensure that team members have appropriate access for their roles. Configuring repository-level permissions:
1. Access the project in Azure DevOps: navigate to your project and go to the Repos section.
2. Repository settings: select your repository and click on the Settings tab, then Repositories in the submenu.
3. Modify security: choose the repository you want to adjust and click on the Security tab on the right.
4. Manage access: add users or groups to the predefined security groups, or create new ones.
5. Set permissions: use the Add button to grant the appropriate permissions, like Read, Contribute, and Administer, to selected users or groups.
Branch-level permissions configuration:
1. Open the repository: open your repository in Azure DevOps.
2. Branches: go to the Branches tab and choose the branch you need to configure.
3. Security settings: click on Security to manage permissions for that branch.
4. Define permissions: assign permissions to groups or users, overriding repository-level permissions if necessary.
Configuring file-level permissions:
1. Locate the file within the repository: navigate to the specific file you want to manage.
2. File permissions: click the three-dot icon next to the file and choose Manage permissions.
3. Control access: add groups or users and set the desired access levels, just as you would at the repository or branch level.
The next topic we'll be going over is tags in source control repos. Tags in Git serve as reference points to specific commits, making it easier to manage and track different versions of code in a repository.
Step 1 — repository access: log in to Azure DevOps or another Microsoft DevOps solutions platform, and verify you have the required repository permissions.
Step 2 — navigate to the repository: after logging in, locate the Repos or source control tab on your dashboard. This section contains all the repositories you have access to within the platform.
Step 3 — create a tag: identify the commit in the repository that you wish to tag; tags are useful for marking releases or important points in the project's history. Select the desired commit from the commit history and look for an option labeled Create tag or Add tag, usually available in the commit's context menu.
Step 4 — provide tag details: in the tag creation dialogue, enter the tag name (a unique identifier for the tag), provide a description (a brief note about what this tag represents), and add any annotations or other metadata if necessary.
Step 5 — save the tag: confirm the details and save the tag. This action attaches the tag to your specified commit, marking it for future reference.
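The portal steps above also have a command-line equivalent; as an illustrative sketch (the tag name and message are made up):

```shell
set -e
work=/tmp/demo-tag; rm -rf "$work"; mkdir -p "$work"
git init -q -b main "$work/repo"
cd "$work/repo"
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "release candidate"

# Create an annotated tag on the current commit: a name plus a description
git -c user.email=dev@example.com -c user.name=dev tag -a v1.0.0 -m "First stable release"

git tag -n1   # list tags along with their messages
```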
Step 6 — view and manage tags: after creating a tag, you can view and manage it through the repository interface. Access the tags section, where all configured tags are listed; here you can rename, delete, or reassign tags to different commits if required. So that's an overview of tags in source control repos. The next thing we'll be covering
is recovering data using Git commands. Git is vital for version control and teamwork, but sometimes mistakes occur, leading to data loss or overwrites, so it's important to know how to restore data using Git commands in these situations.
Examining commit history: use the command `git log` to see a list of recent commits made in the repository. This log includes the commit hash, author, date, and commit message. To find a specific commit, or to filter the log by author, date, or content, you can use `git log --author=<username>`, `git log --since=2023-04-01`, or `git log --grep=<keyword>`. This helps in locating the exact commit hash of the changes you wish to restore.
Reverting a specific commit: use `git revert <commit-hash>` to undo the changes made in a specific commit while preserving the history of changes. This command creates a new commit that reverses the changes introduced by the specified commit; it's a safe way to undo changes, as it doesn't alter the existing history. For example: `git revert 1a23c4`.
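Here's a self-contained demonstration of that behavior (file names and messages are illustrative, and `HEAD` stands in for the commit hash of the bad change):

```shell
set -e
work=/tmp/demo-revert; rm -rf "$work"; mkdir -p "$work"
git init -q -b main "$work/repo"
cd "$work/repo"
echo "v1" > config.txt
git add config.txt
git -c user.email=dev@example.com -c user.name=dev commit -qm "good change"
echo "bad value" > config.txt
git -c user.email=dev@example.com -c user.name=dev commit -qam "bad change"

# Undo the bad commit while preserving history
git -c user.email=dev@example.com -c user.name=dev revert --no-edit HEAD

cat config.txt       # the file is back to its good content
git log --oneline    # history keeps both the bad commit and its revert
```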
Recovering deleted commits: use reflog to recover lost commits. If you've accidentally deleted or lost commits, `git reflog` can be a lifesaver. It shows a log of where your HEAD and branch references have been, which includes deleted or orphaned commits. You can find the commit hash of a lost commit and recover it by creating a new branch from it: `git reflog`, then `git branch restore-branch <commit>`. This restores the deleted commit in a new branch called restore-branch, allowing you to access the previously lost changes.
Restoring deleted files: if you've deleted a file and want to recover it from history, use `git checkout <commit-hash> -- <file-path>`. This command restores the file as it existed at the specified commit; it's useful for quickly recovering lost work without affecting other files. So that's an overview of recovering data using Git commands.

The next topic we'll be going over is purging data from source control. To optimize your source control system
within Microsoft DevOps environments, regular purging of unnecessary data is crucial; this helps in improving system performance and reducing clutter.
Prerequisite checks: communicate with the team to ensure all members are informed of the purge to avoid any disruption; confirm that backups are in place for all critical data; and check that essential data is replicated to alternative repositories or backup systems.
Azure DevOps REST API for data purging: use the Azure DevOps REST API for automated data deletion tasks; to delete files, folders, or branches, modify the delete API call as needed to target and remove specific items from the repository.
Manual Git garbage collection: run `git gc` in your local repository to start manual garbage collection, and perform this operation at a non-production time, when it won't interfere with development activities.
Git history compression: in Azure DevOps, go to the repository settings to find the Git configuration options and enable the option for compressing Git history to optimize storage.
Removing unnecessary branches: use `git branch -d <branch-name>` to remove branches from your local machine, and use the Azure DevOps web interface to locate and delete old or unused branches.
Implementing data retention policies: define rules for how long data should be retained in Azure DevOps, and configure automatic removal of aged data to streamline repository maintenance.
So that's a brief summary of purging data from source control.
The next topic we'll be covering is integrating pipelines with external tools, starting with dependency scanning. Dependency scanning helps you track and manage the libraries and packages your codebase depends on. This ensures that your applications use the appropriate versions of these dependencies, minimizing the risk of compatibility problems. Select from various tools, like OWASP Dependency-Check or Retire.js, which help identify vulnerabilities and outdated libraries in your project's dependencies, and choose the one that best fits your project's requirements. Configure the pipeline: integrate your selected dependency scanning tool into Azure Pipelines by setting up a task to run the scan before the build or deployment phases, to identify vulnerabilities at the earliest opportunity. Analyze the results: post-scan, analyze the results to identify and prioritize the vulnerabilities or outdated dependencies highlighted in the scan report, and address these issues to maintain the health and security of your application.
Detection process: each change in the dependency graph, or each code build, initiates a new snapshot of your components. Vulnerable components are logged and displayed as alerts in the Advanced Security tab, based on advisories from the GitHub Advisory Database; these logs detail the severity, component, vulnerability title, and CVE. Managing alerts: the Advanced Security tab serves as a central hub for viewing and managing dependency alerts. It allows filtering by branch, pipeline, and severity, and provides remediation steps. Alerts for dependency scans on PR branches are shown, and any name changes to pipelines or branches might take up to a day to be reflected in the results. So that's an overview of dependency
scanning. The next topic we'll be covering is security scanning. Integrating security scanning tools into your pipelines is a critical step to identify and address security vulnerabilities in your code. There are many security scanning tools available, like SonarQube and Microsoft Defender for Cloud. These tools analyze your codebase for security flaws, coding standards violations, and potential vulnerabilities; choose the tool that meets your project's requirements. SonarQube is an open-source platform for continuous inspection of code quality; it performs automatic reviews to detect bugs, vulnerabilities, and code smells in your code. Microsoft Defender for Cloud (formerly known as Azure Security Center) offers security management and advanced threat protection services across hybrid cloud environments. You integrate these tools by adding the required tasks to your pipeline configuration; for example, in Azure Pipelines you can add a task that triggers the security scan during the build phase, automating the security audit. Evaluate scan results: after the security scan is finished, review the tool's report. It will highlight security gaps and code quality concerns; address these items promptly, giving priority to the most critical issues to fortify your codebase's security posture. So that's a
quick overview of security scanning.

The next topic we'll be going over is code coverage. Code coverage measures the percentage of your codebase exercised by automated tests, revealing how much code is executed during testing, to ensure quality and detect uncovered areas. Select a code coverage tool, like JaCoCo for Java or Cobertura for .NET, and ensure it integrates well with your tech stack and test frameworks. Configure the pipeline: incorporate your chosen tool into the pipeline to collect code coverage metrics during tests; for example, using Azure Pipelines you can publish coverage results from JaCoCo. Analyze code coverage results: review the coverage report post-analysis to identify and improve areas with low test coverage, enhancing your code's robustness. Accessing coverage artifacts: published code coverage artifacts can be viewed in the pipeline run summary under the Summary tab, offering a snapshot of test coverage for each build. Quality metrics enforcement: leverage code coverage metrics to continuously elevate your project's quality and verify the extent of testing for new code. Pull request integration: surface coverage data within pull requests to ensure thorough testing and fill testing gaps before integration. Setting code coverage policies — full versus diff coverage: full coverage measures the total codebase's test coverage, ensuring overall quality, while diff coverage focuses on the code changes in pull requests, ensuring new or altered lines are tested. So that's an overview of code coverage.

The next topic is quality gates. A quality gate acts as a benchmark for code quality that must be met prior to release, and ideally before
the code is committed to source control. It ensures that only code that meets established standards progresses through the development pipeline. Features of quality gates:
- Automated code analysis: tools like SonarQube are integrated into Azure Pipelines to perform static code analysis, identifying potential issues such as code smells, vulnerabilities, and bugs.
- Performance metrics: code quality metrics, including code coverage, complexity, and maintainability index, are assessed.
- Compliance checks: gates ensure the code complies with security standards and quality requirements.
- Customization: customize gate criteria to align with project demands and application type.
- Threshold setting: set clear thresholds for code coverage to define pass/fail conditions for the gate, such as a minimum code coverage percentage.
- Feedback loop: establish immediate feedback systems for developers upon gate failure, for prompt issue resolution.
Integrating quality gates with Azure Pipelines:
- Pipeline configuration: integrate quality checks in your CI/CD flow to control code progression using established metrics.
- Action on failure: define actions for when code fails to meet the quality gate's criteria, which may include halting the pipeline, triggering alerts, or creating tasks for remediation.
- Visibility and reporting: increase transparency through dashboards or reports showing gate outcomes, for ongoing codebase health monitoring.
So that's an overview of quality
gates. The next type of gates we'll be covering are security and governance gates. Security gates are established to verify that code complies with security protocols and is free of vulnerabilities before being pushed to production.
- Static application security testing: integrate a SAST tool, like SonarQube or WhiteSource Bolt, into the build process to scan for security flaws. This proactive approach detects issues early, reducing the risk of vulnerabilities reaching the production environment.
- Dynamic application security testing: conduct DAST regularly on live applications to find security weaknesses, and use tools like Azure Web Application Firewall to guard against common threats like XSS and SQL injection.
Governance gates are checkpoints to confirm that both code and deployment procedures are in line with company policies and industry regulations.
- Policy definition and enforcement: identify and set the governance policies required by your organization, and apply them using tools such as Azure Policy to automatically enforce them throughout the development cycle.
- Automated compliance verification: build automated compliance checks into your CI/CD pipeline with Azure DevOps compliance tooling or similar tools; automating these checks ensures ongoing adherence to governance standards without manual oversight.
So that's an overview of security and governance gates.
What are pipelines in DevOps? A pipeline is a key framework that structures the software delivery process through automated steps. It ensures each phase of the software life cycle, from integration to deployment, is optimized for quick development, reliable releases, and scalability, and it encompasses several components and features. Components of DevOps pipelines:
1. Source code repository: the starting point, where code is stored in version control.
2. Build server: automates the compilation, building, and preliminary testing of code.
3. Test server: runs various tests, such as unit or integration, to ensure code quality.
4. Deployment server: manages the deployment of code to various environments, such as staging or production.
5. Feedback and monitoring: tools that provide feedback on deployment success and monitor application performance in production.
Features of DevOps pipelines: automation, where every stage from code commit to production is automated, minimizing manual tasks and errors; continuous integration and deployment, which ensures that changes to software are automatically tested and deployed, improving speed and quality; and modularity, where each component functions independently but collaboratively, allowing for easier troubleshooting and updates. Benefits of DevOps pipelines: increased efficiency, as automation reduces delivery cycles, enabling faster releases; improved reliability, as continuous integration and testing diminish the chances of defects in production; and better scalability, as pipelines support scalable operations and management practices as organizational needs grow. So that's an overview of pipelines. The next topic we'll be
covering is integrating automated testing into pipelines. Automated testing plays a vital role in ensuring the quality and reliability of software products. By incorporating automated tests into the pipeline, you can detect issues early in the development cycle, streamline the release process, and achieve faster time to market. Key steps for integration:
1. Define a test strategy: outline which types of tests (such as unit, integration, and UI) will be automated, and set the coverage criteria.
2. Create test infrastructure: use Azure DevOps for provisioning resources like VMs or containers, or utilize Azure Test Plans for executing tests.
3. Choose a test framework: depending on your tech stack, select an appropriate framework like MSTest, NUnit, or xUnit.
4. Write automated tests: develop tests that address the various functional and integration aspects of your application. For example, in a shopping cart application you could write a test to ensure items are added correctly; a unit test in C# using MSTest could verify that a product, when added to a shopping cart, is correctly included. Such a test checks the functionality of the add-to-cart feature, essential for e-commerce applications, by asserting that the cart contains the added product.
5. Version control: manage your test scripts and codebase in a Git repository, using Azure DevOps for integrations like pull requests and reviews.
6. Configure a CI pipeline: set up a CI pipeline in Azure Pipelines to automatically run tests upon commits, helping identify issues early.
7. Incorporate test reporting: utilize Azure DevOps for detailed test reporting and tracking over time.
8. Implement a CD pipeline: after tests pass in CI, deploy your application across different environments using a CD pipeline.
So that's an overview of integrating automated tests into pipelines.
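As a sketch, the CI portion described above — running the test suite on each commit and publishing coverage results — might look like this in YAML (the task names are standard Azure Pipelines tasks, but the project glob and paths are illustrative):

```yaml
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
# Run the automated test suite on every commit to main
- task: DotNetCoreCLI@2
  displayName: 'Run unit and integration tests'
  inputs:
    command: 'test'
    projects: '**/*Tests.csproj'
    arguments: '--collect:"XPlat Code Coverage"'

# Publish coverage so Azure DevOps can track it per build
- task: PublishCodeCoverageResults@1
  displayName: 'Publish code coverage'
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'
```

The `DotNetCoreCLI@2` test task also publishes test results automatically, giving the detailed test reporting mentioned in step 7.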
Local tests are written by developers to test individual components or modules before they are integrated into the larger system; they ensure that each piece of code functions correctly in isolation. There are various tools and frameworks available for writing local tests, such as MSTest, NUnit, or xUnit. For example, using MSTest you might test a method in a class that calculates the sum of two numbers; the test checks whether the method returns the correct result when adding two integers. Unit tests focus on validating the functionality of individual methods or classes in isolation, identifying bugs and ensuring code behaves as intended; Microsoft DevOps solutions support MSTest and NUnit being executed for these tests. Unit tests are narrowly focused, while local tests can be broader or refer to the environment in which a variety of tests are performed. Using NUnit, consider a test for a service that retrieves a list of items: the test verifies that the list is not empty and contains the expected number of items. Integration tests assess the interaction between two or more components of the application to ensure they work together as expected; these are important for catching issues that unit tests might miss. Using NUnit, an integration test could check the interaction between two services, where one service uses data provided by another. Load tests evaluate the performance of the system under a significant load, which could be simulated users or transactions; they help to identify the capacity limits and scalability of the application. In the load test scenario in Azure DevOps, you could simulate multiple users accessing a service to test its performance and capacity. So that's an overview of the
main types of testing strategies. Another type of testing is UI testing. UI testing is critical in Microsoft DevOps for ensuring that the user interface of applications functions correctly and meets desired requirements. This testing confirms the UI functionality and behavior, identifying early issues with user interactions, layout, responsiveness, and data management. Bug detection: regular UI testing identifies bugs and errors early, improving user experience. Quality verification: this testing confirms that the UI meets functional requirements and performs as expected under various conditions. Microsoft offers several tools that streamline UI testing in DevOps environments. First, let's go over Microsoft Test Manager. Microsoft Test Manager (MTM) supports extensive UI testing with capabilities tailored for managing test cases and tracking their execution. Setting up: begin by creating a new test plan and suite in MTM. Test cases: add UI test cases to your suite. Execution and analysis: use MTM's test runner to perform the tests and review results, including screenshots and videos. Another one is Visual Studio Coded UI tests. Visual Studio Coded UI tests provide a code-centric approach to UI testing, suitable for automation using C# or Visual Basic .NET. Create a test project: start a new project and add a Coded UI test. Record and enhance: interact with your application's UI to record actions, then add validation statements. Execution: run your tests locally or integrate them into your DevOps pipeline for continuous testing. Selenium WebDriver with C#: Selenium WebDriver is an open-source framework ideal for automating web browsers and conducting cross-platform UI tests. Set up: install Selenium WebDriver via the NuGet package in your Visual Studio solution. Create and configure: start a new C# test project and set up Selenium WebDriver. Develop and run tests: write test methods to interact with the UI and run them.
GitHub Actions is directly integrated with GitHub repos. GitHub Actions allows you to automate running test suites, building images (specifically Docker images), compiling static sites, deploying code to servers, and more, and we can access this all through the Actions tab in your GitHub repo. When you first use GitHub Actions there are some templates that you can utilize, and all these files are stored in your workflows directory in your .github folder. As you can see, there's this kind of YAML file that we're going to utilize; it's going to be important that we remember some of the structure. I remember on the exam they wanted you to know that there were jobs, on, and steps. You can have multiple workflows in a repo triggered by different events. When you run GitHub Actions you'll get a history of workflow runs, where it will indicate whether each run was a success or a failure and how long it took to run; this one is probably from one of the boot camps that we ran, as we used GitHub Actions to build the site. If you want to find the example repos behind those little getting-started templates, it's the same repo here, the starter workflows, and you can grab the YAML files and get started really quickly. There are different types of triggers that you can use with GitHub Actions; those go into the `on` area. In GitHub there are about 35 trigger events (I say "plus" in case there are more that I'm not aware of; I'm covering my bases there). Examples of common GitHub Actions triggers could be pushes, pull requests, issues, releases, schedule events, and manual triggers. The exam wants you to know that you can trigger based on these things, so make sure you remember this short little list here.
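As a sketch of how those trigger events appear in a workflow file's `on` block (the branch names, event types, and cron schedule here are illustrative):

```yaml
# Illustrative `on` block showing several common GitHub Actions trigger events.
on:
  push:
    branches: [main]
  pull_request:
  issues:
    types: [opened]
  release:
    types: [published]
  schedule:
    - cron: '0 0 * * 1'    # scheduled event: every Monday at midnight UTC
  workflow_dispatch:        # manual trigger from the Actions tab
```

Any one of these events occurring in the repo will start the workflow; a repo can hold several workflow files, each listening for different events.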
GitHub Actions hands-on: so what I want to do is go over to our organization. I'm going to go to our fun repo, and here we have a tab called Actions where we can set up some GitHub Actions. Now, I don't use GitHub Actions a whole lot; maybe I will if I go ahead and do that GitHub Actions certification course, but we'll just have to kind of work our way through it; it shouldn't be too hard. There's a lot of different things that we can utilize here: I can set up Terraform, which is kind of cool; we have deployment to AWS, to Azure, to Alibaba Cloud; we have some security things that we can do here; we have continuous integration automation, a whole host of stuff; and you can even do static compilation, which is pretty cool as well. So we're going to have to make a decision in terms of something that we want to use. I don't think it's going to be deployment, because that seems like a lot of work, so I'm trying to make a decision here on what we could utilize. "Build, lint, and test a Rails application" seems pretty small or easy for me to do, but maybe we should try to use one of these things down below: "Labels pull requests based on files changed". Let's go ahead and see if we can utilize that. So I hit the Configure button and it's going to bring us into this action, and I guess there's the Marketplace where we can get more stuff; I didn't even realize there was a Marketplace for this. Looks like we got some documentation here on customizing when workflows run based on triggers; here it says on: push: branches, so it runs when we push.
Take a look here and see if we can expand this and see what we're looking at. We have name: labeler, on: pull_request_target, jobs, and then we have the labels job: runs-on: ubuntu-latest, so it'll probably start a container as Ubuntu, with permissions of contents: read, and it can write pull requests. The step that we're going to have here is going to use actions/labeler@v4, and with the repo-token input it's going to bring in that token so it has authorization to do so; we know about GitHub tokens, and I think we showed this for GitHub Actions, but I'm not 100% sure. Carefully reading here, the whole point of this is to apply labels, and basically the way it's going to do that is through these steps. I'm assuming that this is kind of like a built-in step, and if we go look at the action, all right, it's coming from this one, so I'm just going to go in here and take a look at how it works. It automatically labels new pull requests based on the paths of the files being changed on the branch, with the ability to apply labels based on names of branches and things like that, like a bug label related to the issue. So: create a GitHub labeler.yml file; this will list labels and config options to match and apply the label. All right, this is a bit complicated, but at least down here it's showing us the flow. It's interesting that this one is showing version 4 when clearly there is a version 5; there might have been a warning up here saying that we should use version 5. But let's see if we can figure this out. I'm going to go ahead and go down below and just look at the workflow a little bit more here, and it looks pretty much the same; the only difference is that this one's using version 5 and it's not passing the `with` for the token, so I'm not 100% sure if we actually really need to do that. But I'm going to go ahead and commit changes, and we'll commit that there.
Okay, so now if we go to Actions, we can see that we have labeler, and it's only going to run... what's going to trigger it? Let's take a look here: pull_request_target, so activity types like assigned and labeled; it runs your workflow when that pull-request activity occurs. So it's going to check based on a lot of stuff; that's pretty broad, but that seems fine, and it looks like we could even narrow it down to very specific types. Okay, so that is how we could play with triggers. Basically, when that triggers, it's going to go ahead and then start up Ubuntu; maybe that's what it has to use to run this code that we saw over in the workflow file. So we need to make sense of what this is. The base match object is defined as this any-glob-to-any-file. Okay, so what does this thing do? It automatically labels new pull requests based on the paths of the files being changed, or the branch name. Well, I'd rather just do the files one. Give me a moment just to try to make sense of this, and then I'll save you the trouble of me struggling through it. Okay, scrolling down, we're getting better examples; this is starting to make more sense. It says: add an "AnyChange" label to any changes within the entire repository; okay, add a "Documentation" label to any changes within the docs folder. So maybe what we can do is give this one a go, and we need to create a file. What does this file need to be called? .github/labeler.yml. So we'll go here and I'm going to add a new labeler.yml; we'll double-check to make sure that is correct, since I've been known to make mistakes often. And I'm going to paste that in here, and we're going to commit that change.
So now we have our labeler.yml. I'm going to go take a look here and see if the action got triggered. It did, and it failed; I mean, there's nothing for it to check right now. "No event triggers defined in `on`": not exactly sure what it's saying there, but that's totally fine for now. What I need to do is create a pull request; I mean, it shouldn't trigger unless we have a pull request, right? But it did just happen now, so I'm curious what would trigger it unless it's a pull request. We'll go back to Actions, and it ran again, so I'm really confused: why is this running? It should only happen on a pull_request_target. We'll open it up again here: "no event triggers defined in `on`". Okay, maybe there's something wrong with our workflow file. Let's see what they say: you can trigger all branches just by removing the hyphen; your workflow file seems fine; have you checked all indentation? I mean, we didn't make that file, right? It was generated for us. Okay, so this was the one that we wrote, and everything looks fine here. I'm going to open this up in the editor. I don't know if we need this bit, so I'm just going to take that out, because the other one didn't have it. And I'm not sure why this little red line is here; maybe it's just superficial and it's confused, but this seems fine. I'm going to go ahead and update this and say "update action". The other question is: do I have this in the right place? labeler.yml isn't supposed to be in the workflows folder, so I think what's happening here is that it thinks it's a workflow. Yeah, I think we just put that in the wrong folder. So I'm going to go back and open it up, and we're just going to move that back into the correct location. Okay, so we're going to expand that, and this labeler.yml is going to go into the .github folder. There we go; going to go ahead and add this all in here, all right. And if we go back over to Actions... oh, this is not the repo; we'll go back over to our organization into the fun repo. It might have triggered one more time... it has not, so we are in good shape.
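For reference, a `.github/labeler.yml` along the lines of the docs example we just read might look like this (the label name and glob are just examples, using the v5-style match object):

```yaml
# .github/labeler.yml -- example config for actions/labeler (v5 syntax).
# Apply the "Documentation" label to any pull request that changes
# files under the docs folder. Label name and glob are illustrative.
Documentation:
  - changed-files:
      - any-glob-to-any-file: 'docs/**'
```

Note that this config file lives directly in `.github/`, while the workflow that runs the labeler action lives in `.github/workflows/`, which is exactly the mix-up we just fixed.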
Can we delete this run? Yep, we can; let's just delete this to clean up so we can see what we're doing. What I want to do is trigger that pull request to get automatically labeled, so we're going to need a label called Documentation for this to work. We're going to go here to Labels and we'll make a new label. There actually already is one called documentation; I don't know if this is case-sensitive, so I'm just going to change it to a capital D so it just works for us. And we're going to go over here to Code, and I need a new docs directory, so I'm going to make a new folder here, and what I'll do is make a readme. Okay, we'll save that and we'll go ahead and commit this. And we're in a branch, right? Yeah, we're in a branch, so commit: "make new docs directory", and I know I spelled that wrong; it's okay, nobody's watching here today, there's no grading going on. If I ever grade you I'll poke you for that, but right now it doesn't matter. I think that we made that in the dev branch, so we'll go over here, and I want to go ahead and create a new pull request. I want to make sure that folder is there: Pull requests, New pull request, we'll drop down dev, we'll compare that over, and we'll say Create pull request. The idea is that when we create this, it should label it, if this worked as expected. So what I want to do is go over to Actions and see if it triggered. And it's running; it's queued, it's going to think about what to do, it has to spin up compute, so this isn't instantaneous. In progress... good, we're watching it... label complete, success.
We'll go over to our checks, and it shows that it passed. So, you know, before we talked about branch rules; we could maybe tell it that it has to pass that check before it could proceed. This isn't a really good example for it, but we could try, and it's just an opportunity to show off this branch protection rule stuff. So I'm just looking here carefully for where that was... it was like checks, okay, and I'm going to drop that down... and it still doesn't show up here, so maybe that doesn't work as expected, but I think that check is something that shows up after you run it. Anyway, that's GitHub Actions in a nutshell. It is very important that we understand how those files work, so before I go I just want to pull up a link, because there was something that really explained the structure of these files well: it was "Understanding GitHub Actions". Okay, so let's find this here: Learn GitHub Actions, Understanding GitHub Actions; I think it was this one, yeah. This one really helps explain this workflow file, so let's go through it really quickly and make sure we
understand it. So the first thing is the name; we're going to name it, and that's optional. We have run-name, the name generated for each run of the workflow. We have on, which specifies the trigger of this workflow, so on triggers off events. jobs groups together all the jobs that run, and then we have steps, which groups together all the steps that run in the check-bats-version job (notice we didn't have these before). runs-on configures the job to run on the latest version of an Ubuntu Linux runner; I imagine you could change this to other things. The uses keyword specifies the action a step will run, and of course we found out those are remote repos, so that makes sense. And that's pretty much it. Remember those three: on, jobs, and steps.
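That documentation page's example puts all of those keys together; reproduced here from memory, so treat details like the action versions as approximate:

```yaml
# Sketch of the workflow anatomy discussed above; action versions are
# approximate, reproduced from memory of the GitHub docs example.
name: learn-github-actions                  # optional workflow name
run-name: ${{ github.actor }} is learning GitHub Actions  # per-run name
on: [push]                                  # the trigger event(s)
jobs:                                       # groups all jobs that run
  check-bats-version:
    runs-on: ubuntu-latest                  # the runner image for this job
    steps:                                  # groups all steps in this job
      - uses: actions/checkout@v4           # `uses` runs a reusable action
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install -g bats            # `run` executes a shell command
      - run: bats -v
```

Reading top to bottom you can see the three keys to remember (on, jobs, steps) framing everything else.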
Now let's move over to package management. Package management refers to a systematic approach to handling the installation, upgrade, configuration, and removal of software packages within a computer system. It simplifies the process of managing software applications and their dependencies, ensuring consistency and efficiency across the development life cycle and system maintenance. Core functions and benefits: Automated handling: automates the management of software applications, reducing time and effort for installation, upgrades, and removal. Consistency across environments: ensures uniform software management across various environments, boosting efficiency and reliability. Dependency and configuration management: automates management of dependencies and configurations, ensuring compatibility and availability for stable performance. Scalability: facilitates software management across multiple systems, easing updates and rollbacks. Key components: Package: a bundle containing software or libraries along with metadata that includes information like version, dependencies, and configuration details. Repository: a centralized storage location where packages are hosted, allowing users to search, download, and install packages. Package manager: the tool that interfaces with repositories to manage the installation, upgrading, and removal of packages based on dependencies and version requirements. Some tools and examples include Linux package managers such as dpkg (Debian and Ubuntu) and RPM (Red Hat and Fedora), and language-specific managers: npm for JavaScript, pip for Python, and Maven for Java. So that's an overview of package management. The next topic we'll be covering is package feeds. A package feed is a repository hosting software
packages, such as libraries, frameworks, and modules, along with associated metadata. It supports dependency management and various application scenarios through package versioning and organization. Types of package feeds: Public feeds: hosted by third-party providers like nuget.org and npmjs.com, and accessible to the broader development community. Private feeds: internal repositories managed by organizations to store proprietary packages and control team access. Designing a package feed, key considerations: Storage: select from local file systems, network-attached storage, or cloud-based services. Organizational structure: categorize packages by type, purpose, or technology. Versioning: implement a versioning strategy, typically semantic versioning. Access control: set up authentication and authorization mechanisms for private feed security. Implementing a package feed, tools and platforms: Azure Artifacts: a fully managed Microsoft Azure service supporting NuGet, npm, Maven, and Python packages; it simplifies the creation, publication, and management of package feeds. GitHub Packages: supports various package formats, integrates with GitHub repositories, and allows direct package publishing; ideal for open-source projects. Package management tools: tools like NuGet, npm, and Maven that offer capabilities to create and host feeds with command-line or IDE integration. Using upstream sources: Functionality: upstream sources extend the range of available packages by linking additional feeds, whether public, private, or both. In Azure Artifacts, upstream sources can include other Azure feeds, package registries, or public feeds; this ensures that the latest package versions are always available. Configuration: configurable via the Azure DevOps portal or Azure CLI, allowing Azure Artifacts to connect upstream sources for
feeds. Let's take a look at dependency management. Dependency management automates the handling of software dependencies to ensure projects run smoothly with all necessary external libraries, frameworks, and modules. Key components: Dependencies: required external software components. Version specification: defines compatible dependency versions. Dependency graph: shows relationships among dependencies. Package repository: central hub for dependencies. Dependency resolver: automates dependency resolution. Benefits: Streamlined development: automates environment setup by managing dependencies. Consistent builds: ensures uniform dependency versions across all development stages. Reduced conflicts: manages compatibility to prevent software component conflicts. Efficient workflows: specify dependencies using files like package.json or pom.xml. Version locking: uses lock files to maintain consistent dependency versions. Automated tooling: tools like npm, Maven, and pip automate dependency management tasks. Let's take a look at a comparison between dependency and package management. Purpose: dependency management ensures compatibility and resolves dependencies, while package management handles the package life cycle. Focus: dependency management targets component compatibility; package management manages package storage and life cycle. Main function: dependency management automates resolution; package management oversees handling and distribution. Tools: both use npm and Maven; dependency management also uses pip, while package management incorporates NuGet and others. Outcome: dependency management maintains functionality; package management streamlines processes. Usage scenario: dependency management is essential for external library reliance; package management is crucial for consistent management. So that's an overview of dependency management. The next thing we'll be covering is an overview of Azure
Artifacts. Azure Artifacts is a component of Azure DevOps services focused on package management and collaboration. It supports sharing, versioning, and integrating packages into CI/CD workflows. Some of the features that Azure Artifacts provides: Package management: supports multiple package formats, managing NuGet, npm, Maven, Python, and Universal packages all in one place. Version control: offers tools for managing package versions and dependencies effectively. Integration and collaboration: CI/CD integration: integrates with Azure DevOps Pipelines for streamlined package creation and deployment. Shared feeds: allows package sharing within teams or the entire organization. Access control and security: Access control: offers settings to control package access, maintaining security within projects. Secure hosting: provides a secure environment for hosting and accessing packages. Getting started with Azure Artifacts: an Azure subscription is required to use Azure Artifacts. Creating a feed: navigate to your Azure DevOps organization or project, select Artifacts from the menu, then Create Feed; enter the required information, such as name and visibility options, and create the feed. After creating a feed you can publish packages like NuGet, npm, and Maven, but we'll need to do a few things before that. First, ensure you have the latest version of the Azure Artifacts credential provider installed, from the Get the Tools menu. NuGet configuration file: you need to add a nuget.config file to your project, in the same folder as your .csproj or .sln file; the configuration should specify the package source for Azure Artifacts, like the code shown here (this will be provided for you depending on the package type you pick). Publishing command: to publish a package, provide the package path, an API key, and the feed URL: `nuget.exe push -Source "ExamProFeed" -ApiKey az <packagePath>`, replacing `<packagePath>` with the path to your package artifacts. The next topic we'll be covering is NuGet and npm. NuGet is a popular package manager for .NET
applications; it allows you to easily manage dependencies and distribute packages within your projects. Creating a NuGet package: use the nuget.exe or dotnet command-line tool. Example using dotnet: navigate to your project directory and run `dotnet pack --configuration Release`; this generates a NuGet package from your project in the Release configuration. Publishing NuGet packages: you can publish to Azure Artifacts or GitHub Packages, or host your own NuGet feed using platforms like Azure DevOps Server, MyGet, or ProGet for private distribution. npm is the default package manager for Node.js and JavaScript applications; it provides a vast ecosystem of packages that developers can use within their projects. Creating an npm package: initialize a new project with `npm init` and follow the prompts to generate a package.json file that includes your package metadata. Publishing npm packages: to publish to the npm registry, ensuring your package is available globally, run `npm publish`; this command uploads your package to the npm registry for public access. So that's a short and simplified overview of NuGet and npm. The next thing we'll be covering are the main types of versioning. Versioning in DevOps refers to the practice of
assigning unique versions to code or artifacts to track changes and maintain historical records. Versioning strategy options: Semantic versioning: a detailed structure indicating the nature of changes. Date-based versioning: uses dates, offering a chronological timeline of updates. Sequential versioning: simple incrementation providing clear change order. Semantic versioning is a widely used method that helps manage changes in software libraries through three version components: major, minor, and patch. Major version increases indicate backward-incompatible changes; minor version increases are for backward-compatible additions; patch version increases apply to backward-compatible bug fixes. Guidelines for semantic versioning: Initial release: start with 1.0.0 for stable software. Major changes: increment the major version for breaking changes. Minor additions: increment the minor version for new features that maintain compatibility. Bug fixes: increment the patch version for stability improvements without new features or breaking changes. Date-based versioning uses a format like YYYY.MM.DD to reflect release dates, offering a clear timeline of updates. Consistency: maintain a standardized date format. Major changes: mark significant updates in the version name, e.g. 2022.01.14-alpha. Combination with SemVer: enhance detail by combining with semantic versioning, e.g. 2022.01.14-alpha.1.2.3. Sequential versioning assigns a unique sequential number to each version of a pipeline artifact; starting with version 1, each update increments the version number progressively. It's straightforward and clearly indicates the chronological order of releases.
Hey, this is Andrew Brown from ExamPro, and in this section we'll be covering the key considerations when implementing an agent infrastructure. Designing and implementing an agent infrastructure is crucial for a successful DevOps solution; key considerations include cost, tool selection, licenses, connectivity, and maintainability. Cost considerations: Azure Pipelines agents: hosted by Microsoft; cost-effective, billed based on parallel pipelines. Self-hosted agents: more control, but additional costs for hardware, maintenance, and scalability. Tool selection: Azure Pipelines: automates build, test, and deployment across platforms; supports various languages and containerization with Docker. Visual Studio Team Services: the predecessor of Azure DevOps; integrates with Azure Pipelines and features source control, work item management, and project planning. Licenses: Azure Pipelines: a free tier with limited concurrent pipelines and duration; a paid tier for more scalability. Self-hosted agents: require licenses for the underlying operating system, such as Windows Server, or Azure VMs. Connectivity: Azure Pipelines agents: secure internet communication using HTTPS; they fetch source code, execute tasks, and report results. Self-hosted agents: need network access to resources like source control repositories, artifact feeds, and target environments within your infrastructure. Maintainability: Azure Pipelines agents: automatically updated by Microsoft, no manual effort required. Self-hosted agents: regular updates needed for compatibility, with guidance provided by Microsoft. So that's an overview of implementing an agent infrastructure. The next thing we'll be going through are pipeline trigger rules. Pipeline trigger rules define conditions
under which a pipeline is automatically triggered, optimizing resource usage and reducing unnecessary builds and deployments. Microsoft DevOps solutions offer flexibility in designing custom trigger rules; here are key scenarios. Branch-based trigger: trigger a pipeline only when changes are made to a specific branch; this triggers the pipeline for changes in the main branch. Path-based trigger: trigger a pipeline when changes occur within specific file paths; this triggers the pipeline for changes in the src directory. Schedule-based trigger: run pipelines at scheduled intervals regardless of code changes; this triggers the pipeline every day at midnight. Implementing pipeline trigger rules:
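The three scenarios above might be combined in a pipeline's YAML like this (the branch, path, and cron values are illustrative):

```yaml
# Branch- and path-based triggers: run only for main, only for src/ changes.
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - src/*

# Schedule-based trigger: run every day at midnight (UTC), even with no changes.
schedules:
  - cron: '0 0 * * *'
    displayName: 'Nightly run'
    branches:
      include:
        - main
    always: true   # run even if there are no code changes since the last run
```

Without `always: true`, a scheduled run is skipped when nothing has changed, which is itself a resource-saving default.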
Navigate to your project in Azure DevOps; open the YAML file that defines your pipeline; locate the trigger section within the YAML file; define the desired trigger rules based on the scenarios above or any custom rule; save the YAML file. Benefits of pipeline trigger rules: Reduced resource consumption: minimizes unnecessary usage, leading to cost savings and efficient resource utilization. Improved efficiency: ensures actions are performed at the right time, streamlining development and deployment processes. Enhanced control: provides developers with control over pipeline execution, improving management and coordination of development efforts. So that's an overview of pipeline trigger rules. The next topic we'll be covering are the types of pipelines. Classic pipelines provide a graphical interface
for creating and configuring pipelines using a drag-and-drop approach; this simplifies defining the stages of your pipeline, such as build, test, and deployment. Steps to create a classic pipeline: 1. Select Pipelines from the left-side menu. 2. Click New pipeline to start. 3. Choose the repository where your source code is located. 4. Select a pipeline template based on your application type, such as ASP.NET, Node.js, or Java. 5. Customize the pipeline stages, tasks, and configurations; add tasks like building the code, running tests, and deploying the application. 6. Save and run the pipeline. YAML pipelines offer a flexible, code-centric approach to defining pipelines. Pipeline configurations are defined as code in YAML files, which can be version controlled along with your source code for easier collaboration and consistency across environments. Creating a YAML pipeline: 1. Select the Pipelines option from the left-side menu. 2. Click on the New pipeline button. 3. Choose the repository where your source code is located. 4. Select the YAML option when prompted to choose the pipeline configuration style. 5. Create a YAML file in your repository to define your pipeline configuration; the YAML file should contain stages, jobs, and tasks for your requirements, for example as shown in the image on the right. 6. Save the YAML file in your repository and commit the changes. 7. Azure DevOps will automatically detect the YAML file and create the pipeline based on the configuration defined in it. 8. Save and run the pipeline to see it in action. So that's an overview of the types of pipelines.
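A minimal skeleton of such a YAML file, showing the stages, jobs, and steps hierarchy (the stage names and script contents here are placeholder assumptions):

```yaml
# azure-pipelines.yml -- illustrative skeleton showing stages, jobs, and steps.
# Stage names and script bodies are placeholders, not a real build.
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - script: echo "build the code here"
  - stage: Test
    jobs:
      - job: TestJob
        steps:
          - script: echo "run the tests here"
  - stage: Deploy
    jobs:
      - job: DeployJob
        steps:
          - script: echo "deploy the application here"
```

Committing a file like this to the repo is what lets Azure DevOps auto-detect and create the pipeline described in step 7.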
The failure rate indicates the number of failed builds or deployments over a specific period; monitoring the failure rate helps you identify potential bottlenecks or issues in your pipeline, and Azure Monitor can be used for this purpose. Azure Monitor provides comprehensive monitoring for Azure resources, including pipelines; it collects data on pipeline failures and allows you to set up alerts based on specific failure thresholds. These alerts help you address issues proactively, maintaining pipeline health. Configuration: enable metrics and diagnostic logs; metrics include success rate, average duration, and failure rate, while diagnostic logs capture detailed information about pipeline runs, including errors and warnings. Here's an example of how you can enable Azure Monitor for a pipeline. Duration monitoring: monitoring the duration of your pipeline is crucial, as longer durations indicate performance issues that can impact overall efficiency, and Azure DevOps provides built-in capabilities for this. Azure Pipelines allows tracking the duration of individual pipeline runs, identifying outliers, and analyzing performance trends over time. Use the Azure Pipelines REST API to retrieve the duration of pipeline runs; custom scripts or PowerShell can automate monitoring and reporting. Here's an example of how you can retrieve the duration of pipeline runs. Flaky tests are tests that fail intermittently, not producing consistent results and leading to false positives or false negatives; monitoring and addressing flaky tests is important, and Azure DevOps provides tools to detect and monitor them. Azure Test Plans can manage test cases, track executions, and identify flaky tests: group test cases into test suites, schedule test runs and capture results, then analyze results to identify and mark flaky tests. Built-in reporting visualizes trends and tracks improvements over time. Here is an example of how you can mark a test case as flaky using Azure Test Plans. So that's an overview of monitoring pipeline health.
Hey, this is Andrew Brown from ExamPro, and in this section we'll be covering Azure Container Instances. Azure Container Instances allow you to launch
containers without the need to worry about configuring or managing the underlying virtual machine Azure
Container Instances is designed for isolated containers; they are tailored for simple applications, task automation, and tasks like build jobs. Containers can be provisioned within seconds, whereas VMs can take several minutes. Containers are billed per second, whereas VMs are billed per hour, providing potential cost savings. Containers have granular and custom sizing of vCPUs, memory, and GPUs, whereas VM sizes are predetermined. ACI can deploy both Windows and Linux containers. You can persist storage with Azure Files for your ACI containers. Once deployed, ACIs are accessible via a fully qualified domain name like customlabel.azureregion.azurecontainer.io. Azure provides quick-start images to start launching example applications, but you can also source containers from Azure Container Registry
or Docker Hub. A container group is a collection of containers that get scheduled on the same host machine. The containers in a container group share a lifecycle, resources, a local network, and storage volumes; container groups are similar to a Kubernetes pod. Multi-container groups currently support only Linux containers. There are two ways to deploy a multi-container group: you can use either a Resource Manager template, if deploying additional Azure service resources, or a YAML file, for deployments involving only container instances. Overall, Azure Container Instances simplify container deployment and scaling, removing the complexities of infrastructure management. The next topic we'll be going
over is container restart policies. A container restart policy specifies what a container should do when its process has completed. These policies ensure that container instances can handle different scenarios effectively based on the specific requirements of the application or task. Azure Container Instances has three restart policy options. Always: this policy ensures that the containers restart continuously, regardless of whether they exit successfully or not; it's useful for applications that need to be constantly available, such as web servers. Never: with this policy, containers do not restart once they've completed their execution; this is ideal for tasks that are designed to run once and then terminate, such as batch jobs or scheduled tasks. OnFailure: containers will only restart if they stop due to an error or unexpected termination; this ensures that if a container crashes or faces an unexpected error, it will try to restart and continue its operations. Overall, choosing the appropriate restart policy is vital.
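As a hypothetical sketch (not from the course slides), here is roughly how a YAML deployment for a container instance might set its restart policy, along with the environment variables covered next; the group name, image, API version, and values are all placeholders to verify against the ACI docs:

```yaml
# Hypothetical ACI YAML deployment -- names, image, and values are placeholders.
apiVersion: 2021-10-01
location: eastus2
name: banana-group
properties:
  restartPolicy: OnFailure          # Always | Never | OnFailure
  osType: Linux
  containers:
    - name: web
      properties:
        image: mcr.microsoft.com/azuredocs/aci-helloworld
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
        environmentVariables:
          - name: APP_ENV
            value: production       # plain-text environment variable
          - name: DB_PASSWORD
            secureValue: s3cret     # secure variable, not exposed in plain text
```

Choosing `OnFailure` here means a crashed container is retried, while a batch job that should run exactly once would use `Never`.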
The next topic we'll be covering is container environment variables. Environment variables are key-value pairs that can be used to configure and manage the behavior of applications running inside containers. Environment variables allow you to pass configuration details to your containers, which can be critical in guiding applications on how to connect to databases, where to find certain resources, or how to adjust their behavior based on the environment they're running in. In Azure you can easily set up these environment variables. By default, environment variables are stored in plain text; to address this, Azure offers the option to secure your environment variables. Instead of storing them in plain text, which could expose sensitive information if breached, you can leverage the secure environment variables flag. So that's a quick overview of container environment
variables. The next topic we'll be covering is container troubleshooting. Troubleshooting containers in Azure involves a series of commands that help diagnose and resolve issues. az container logs: this command lets you fetch logs from your container; these logs can provide insights into application behavior and possible errors. az container attach: if you need diagnostic data during container startup, use this command; it helps in understanding issues that might arise during the initialization phase of a container. az container exec: for a deeper dive into the container, this command starts an interactive session; this is useful for live debugging and to inspect the container's current state. az monitor metrics list: this command gives you metrics related to your container, such as CPU usage, which can be essential for performance tuning or identifying bottlenecks. So these are the commonly used
commands. Hey, this is Andrew Brown from ExamPro, and we're going to take a look at Azure Container Instances. So here it is. All we got to do is go to
container instances we'll hit add and the nice thing is that Azure provides us with a Hello World one so it's very easy
for us to get started um it's a Linux machine and it looks like it's pretty inexpensive there so we'll stick with
that I'm going to create a new group here we're going to call it banana um and we'll name the container instance
banana and East Us 2 seems fine to me you'll notice we're on a quick start image if we wanted we could use
something from the docker Hub and provide our own link but we'll just stick with the quick uh start image for
today okay we're going to go ahead and hit next to networking just to see what we have as options you can make it
public or private we'll go to Advanced hold on here yep those are just the ports you can expose we'll go to advance
and for the restart policy we can set on failure always or never we can pass in environment variables and I covered this
a lot more in detail in the lecture content so we don't need to really dive deep into this um and we'll go ahead and
create this instance and so we'll have to wait a little while here and I'll see you back
in a moment okay and so after a short wait our container instance is ready we'll go to that resource there and take
a look around so on the left hand side we can go to our containers and there we can see it running we can see the events
down below of what's going on so you can see that it's pulled the image it successfully pulled it and it started
to see stuff and if we wanted to connect to the instance we could also go here and hit connect which is kind of nice um
I don't have any purpose to do that right now so and it's also not going to work the way we're doing it but I just
wanted to show you that you had those opportunities. You can do identity, so that means managing it with role-based access
controls but what I want to see is actually this uh hello world working I'm assuming that must be a a hello page
I've never looked at it before so we're going to go here grab the public IP address and paste it on in the top and
there we go. So we have deployed an instance onto Azure Container Instances, or a container I should say. So nothing
super exciting to talk about here um but we do need to know the basics uh there um if we wanted to deploy other
containers it's just the one there so that's all you really need to do um but yeah so yeah hopefully that uh gives you
an idea there I'll just go back to the list here so we can see it and we'll go ahead and just uh delete that probably
do it via the resources on the left-hand side like I always like to do, and we will go into banana. Self-hosted agents allow you to customize the agent environment to meet specific needs. Unlike Microsoft-hosted agents,
self-hosted agents run on your infrastructure, giving you more control over the environment and the tools installed on the agents. Use cases: custom environments, for specialized software or configurations not available in Microsoft-hosted agents; sensitive data, suitable for projects with stringent data security requirements, keeping data within your network; resource-intensive builds, useful when builds require significant computational resources or specific hardware setups. Benefits: customization, tailoring the environment to specific project needs; cost efficiency, reducing costs by utilizing existing infrastructure; consistent configurations, ensuring consistent setups across agents through templates or containerization; scalability, scaling based on workload by provisioning new agents. Configuring self-hosted agents with VM templates: create a virtual machine with the
necessary agent software, save it as a template, and use the template to provision additional agents for consistent and simplified scaling. Create and configure a VM: set up a virtual machine with the necessary operating system and dependencies, then install and configure the Azure DevOps agent software. Create a VM template: capture the VM as a template, including the agent software and its configuration. Provision new agents: use the template to quickly provision new agents, ensuring each new VM has the same configuration and is ready to connect to Azure DevOps. Managing self-hosted agents with containerization: containerization involves creating Docker images that include the necessary agent software and dependencies, offering a flexible and scalable solution for managing self-hosted agents. Create a container image: develop a Docker image with the required agent software and dependencies, including configuration details. Push to a container registry: store the container image in a registry like Azure Container Registry or Docker Hub. Deploy and scale containers: use container orchestration tools to deploy and manage agent containers, scaling up or down based on workload demands. So that's an overview of self-hosted agents.
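As a rough illustration of the containerization approach, a Dockerfile for a Linux agent image might look something like this; the base image, packages, and the start.sh script are assumptions to adapt from Microsoft's documented pattern, not an official recipe:

```dockerfile
# Hypothetical sketch of a self-hosted Azure DevOps agent image.
FROM ubuntu:22.04

# Dependencies the agent software typically needs.
RUN apt-get update && apt-get install -y curl git jq libicu70 && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /azp

# start.sh (a script you would write) downloads the agent package, runs
# ./config.sh with your organization URL, PAT, and pool name, then ./run.sh.
COPY start.sh /azp/start.sh
RUN chmod +x /azp/start.sh

ENTRYPOINT ["/azp/start.sh"]
```

Once the image is in a registry, an orchestrator can start more replicas of it to add agents when the build queue grows.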
In this section we'll be covering the types of deployment strategies. A successful deployment strategy is crucial for efficient software delivery and minimizing user impact. Microsoft DevOps solutions offer various deployment patterns and strategies for continuous delivery and high availability. Blue-green deployment is a technique that reduces risk and downtime by running two identical environments. These environments are called blue and green; only one of the environments is live, with the live environment serving all production traffic. In blue-green deployment, blue is live and green is set up for the new release. Deploy and test the new version in green without affecting blue; after testing, switch traffic to green, making it live. This minimizes downtime and allows for quick rollback to blue if issues arise, enhancing reliability and user experience. So that's an overview of blue-green deployment.
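As a toy illustration of the blue-green idea (a conceptual model, not any specific Azure feature), the swap can be thought of as flipping which of two environments receives traffic:

```python
# Toy model of blue-green deployment: two environments, one live at a time.
class BlueGreen:
    def __init__(self, live_version: str):
        self.envs = {"blue": live_version, "green": None}
        self.live = "blue"

    @property
    def idle(self) -> str:
        """The environment not currently serving production traffic."""
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version: str) -> None:
        """Deploy and test the new version in the idle environment."""
        self.envs[self.idle] = version

    def swap(self) -> None:
        """Switch traffic to the other environment (also used to roll back)."""
        self.live = self.idle

    def serving(self) -> str:
        return self.envs[self.live]
```

For example, deploying "v2" to green and swapping makes "v2" live; swapping again rolls straight back to "v1", which is the quick-rollback property the transcript describes.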
Canary release deploys a new version to a small subset of users or infrastructure first. This helps identify unforeseen issues under real conditions with limited impact. By monitoring the initial deployment's performance, teams can decide whether to expand, halt, or roll back the update based on real-world feedback. This phased approach ensures a safer and more controlled rollout, minimizing risks associated with new releases. To execute a canary release you can use feature toggles (selective feature activation), traffic routing (manage user access to the new version), and deployment slots (version management). These techniques enable a gradual and monitored rollout, allowing for adjustments based on user feedback and system performance. This method enhances application stability and user experience by addressing potential issues before a full-scale deployment. So that's an overview of canary release. Ring deployment is a strategy
where users or infrastructure components are divided into multiple groups called rings. Each ring receives updates sequentially, starting with the smallest group and gradually expanding to the entire user base. This allows for iterative releases, starting with an internal ring and progressively reaching a wider audience. This controlled rollout facilitates early feedback from different user groups or components. For example, in a three-group production setup: canaries voluntarily test new features as soon as they are available; early adopters voluntarily preview releases, which are more refined than canary versions; users receive the final product after it has passed through canaries and early adopters. This approach ensures a safer, step-by-step rollout of new features or updates. So that's an overview of ring deployment.
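Ring membership and gradual exposure are often decided by deterministically bucketing users, for example by hashing a user ID; here is a minimal sketch of that common pattern (not a specific Azure API):

```python
# Deterministically assign a user to a rollout bucket from 0-99.
import hashlib

def bucket(user_id: str) -> int:
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100

def in_rollout(user_id: str, percent: int) -> bool:
    """True if this user falls inside the first `percent` buckets."""
    return bucket(user_id) < percent

# Because the same user always lands in the same bucket, raising the
# percentage only ever adds users; it never flips anyone back out.
```

Expanding a ring is then just raising `percent` for that cohort, say from 1 (canaries) to 10 (early adopters) to 100 (everyone).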
Progressive exposure is a technique that gradually increases the number of users exposed to a new release. Initially, a small percentage of user traffic is directed to the new release while its performance and behavior are monitored. If the new release performs well without issues, the traffic is gradually increased. This approach helps identify potential issues early and minimizes user impact if problems arise. Continuous integration: code check-in, where developers check in code; auto trigger, where the CI pipeline triggers automatically; build artifact, where code is built, generating an artifact. Continuous deployment: build version one is approved and deployed to ring one for initial testing; build version two is approved and deployed to ring two for early adopters; build version three is approved and deployed to ring three, the production environment. Ring one: canaries test new features. Ring two: early adopters preview refined releases. Ring three: all users receive the final product. So that's the progressive exposure deployment strategy. Feature flags are conditional
statements that let you control which features or functions are visible and available in your application. Using feature flags, you can enable or disable features dynamically without redeploying the application. This allows for testing new features, gradual rollouts, or quick deactivation if issues arise. Feature flags also offer the flexibility to target specific user groups for testing or rolling out features. New feature: a new feature is introduced. Feature flags: toggle switches control the visibility of the feature; enabled, the feature is on for specific customer groups; disabled, the feature remains off for other customer groups. Customers: different customer groups experience the feature based on the feature flag settings. This allows controlled, targeted rollouts and quick adjustments of feature availability without redeployment. So that's an overview of the feature flag deployment strategy.
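A feature flag is ultimately just a conditional wrapped around a feature. Here is a minimal, library-free sketch of a flag store with group targeting; the flag names and groups are made up for illustration:

```python
# Minimal feature-flag sketch: flags can be toggled and targeted at groups.
FLAGS = {
    # An empty "groups" set means the flag applies to everyone when enabled.
    "new-checkout": {"enabled": True, "groups": {"beta"}},
    "dark-mode":    {"enabled": True, "groups": set()},
}

def is_enabled(flag_name: str, user_group: str) -> bool:
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False                     # unknown or disabled flags are off
    return not flag["groups"] or user_group in flag["groups"]
```

With this shape, turning a feature off for everyone is a single data change (`enabled: False`) rather than a redeployment, which is the core benefit the transcript describes.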
A/B testing compares versions of a feature to see which one performs better. Users are divided into two or more groups, each group experiencing a different version, and data on user behavior, engagement, and other metrics is collected. This approach helps make data-driven decisions and optimize feature usability before a full rollout to all users. User groups: users are divided into two groups, version A and version B; each group is shown a different version of a feature or user interface for performance comparison. User behavior and performance metrics are collected and compared. Results: the graph shows which version performs better based on the collected data; this helps in making informed decisions about which version to implement for all users. So that's an overview of A/B testing. Azure Traffic Manager operates
at the DNS layer to quickly and efficiently direct incoming DNS requests based on the routing method of your choice. Routing methods: performance, route traffic to nearby servers to reduce latency; weighted, distribute traffic based on predefined weights; priority, route to a primary endpoint with failover to backups if needed; geographic, route traffic based on the user's location; multivalue, return multiple healthy endpoints in DNS responses; subnet, route based on the user's IP address. Use cases: latency reduction, route to nearby servers to minimize latency; failover, switch traffic to backups if the primary system fails; A/B testing, route traffic to random VMs for testing. In this scenario we have an example of weighted routing, where traffic is split 80% to production and 20% to beta.
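Conceptually, weighted routing picks an endpoint in proportion to its weight. Here is a small sketch of that selection logic; the endpoint names are made up, and this is just the idea, not how Traffic Manager itself is configured:

```python
# Pick an endpoint with probability proportional to its weight.
import random

def pick_endpoint(weights: dict, rng=random.random) -> str:
    total = sum(weights.values())
    r = rng() * total                  # a point somewhere along the total weight
    for endpoint, weight in weights.items():
        r -= weight
        if r <= 0:
            return endpoint
    return endpoint                    # guard against floating-point edge cases

# An 80/20 production/beta split, as in the scenario above.
weights = {"production": 80, "beta": 20}
```

Over many DNS requests, roughly 80% of lookups would resolve to the production endpoint and 20% to beta.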
So that's an overview of Azure Traffic Manager. Hey, this is Andrew Brown from ExamPro, and in this section we'll be covering Azure App Service. Azure App Service is an HTTP-based platform for web apps, RESTful APIs, and mobile back ends. You can choose your programming language, Python, Java, or any other language, and run it in either a Windows or Linux environment. It is a platform as a service, so it's the Heroku equivalent for Azure. Azure App Service takes care of the following: underlying
infrastructure, OS and language security patches, load balancing, autoscaling, and infrastructure management. Azure App Service makes it easy to implement common integrations and features such as Azure DevOps for deployments, GitHub and Docker Hub, package management systems, easy-to-set-up staging environments, custom domains, and attaching TLS or SSL certificates. You pay based on an Azure App Service plan: the shared tier includes the Free and Shared options (Linux isn't supported here); the dedicated tier includes Basic, Standard, Premium, Premium V2, and Premium V3; and there's the Isolated tier. Azure App Service is versatile: you can deploy single or multi-container Docker applications. When you create your app you have to choose a unique name, since it becomes a fully qualified domain. Overall, Azure App Service simplifies the heavy
lifting. Let's delve into runtimes in Azure App Service. So what is a runtime environment? A runtime environment refers to the software and settings needed for a program to run in a defined way at runtime. A runtime generally means what programming language, libraries, and framework you are using. A runtime for Azure App Service will be a predefined container that has your programming language and commonly used libraries for that language installed. With Azure App Service you're presented with a range of runtimes to choose from, including .NET, .NET Core, Java, Ruby, Node.js, PHP, and Python. Moreover, Azure App Service generally supports multiple versions of each programming language; for example, for Ruby you might find versions 2.6 and 2.7. It's worth noting that cloud providers, including Azure, may phase out support for older versions over time. This not only ensures that they're offering the latest and most efficient tools but also promotes better security practices among users, pushing them to keep up with the latest patches. So that's an overview of runtimes in Azure App Service. The next thing we'll be covering
is custom containers in Azure App Service. Azure App Service gives you the flexibility to use custom containers for both Windows and Linux. The primary reason you might opt for a custom container is to use a distinct runtime that isn't natively supported, or to incorporate specific packages and software. Here's a straightforward process to get started with custom containers in Azure App Service. Design your container: begin by creating a Docker container tailored to your needs on your local machine. Push to Azure: once your container is ready, push it to the Azure Container Registry; this centralized repository ensures that your container is easily accessible within Azure. Deploy and go live: finally, deploy your container image directly to Azure App Service; once deployed, Azure takes care of scaling, maintenance, and updates. Another advantage of custom containers in Azure App Service is that they offer more granular control over your environment: you can fine-tune performance, security, and other aspects. The next topic we'll be covering is deployment slots in Azure App Service. Deployment slots allow you to create different environments of your
web application, each associated with a different host name. This is useful when you require a testing, staging, or QA environment alongside your production setup. Deployment slots let you swiftly replicate your production settings for various purposes, ensuring consistent testing environments. You can also swap environments; this is useful for executing blue-green deployments. By using swap you can promote your staging environment to production, and if something goes wrong, you can swap them back. This capability ensures minimal downtime and enhances the user experience, since you can introduce changes in a controlled manner, rolling them back if necessary. In addition, Azure ensures that when swapping, the instances are warmed up before traffic is routed, resulting in zero downtime. So that's a quick overview of deployment
slots. The next topic we'll be covering is the App Service Environment in Azure App Service. The App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale. This allows you to host Windows and Linux web apps, Docker containers, mobile apps, and functions. App Service Environments are appropriate for application workloads that require very high scale, isolation and secure network access, and high memory utilization. Customers can create multiple ASEs within a single Azure region or across multiple Azure regions, making ASEs ideal for horizontally scaling stateless application tiers in support of high requests-per-second workloads. ASEs come with their own pricing tier, called the Isolated tier. ASEs can be used to configure security architecture; apps running on ASEs can have their access gated by upstream devices such as web application firewalls. App Service Environments can be deployed into availability zones using zone pinning. There are two deployment types for an App Service Environment: external ASE and ILB ASE. An external ASE exposes the ASE-hosted apps on an internet-accessible IP address. If the VNet is connected to your on-premises network, apps in your ASE also have access to resources there without additional configuration; because the ASE is within the VNet, it can also access resources within the VNet without any additional configuration. An ILB ASE exposes the ASE-hosted apps on an IP address inside your VNet; the internal endpoint is an internal load balancer. The next thing we'll be going over is deployment in Azure App Service. So what is deployment? Well, it's the
action of pushing changes or updates from a local environment or repository into a remote environment. Azure App Service provides many ways to deploy your applications, including: run from package; deploy ZIP or WAR; deploy via FTP; deploy via cloud sync such as Dropbox or OneDrive; deploy continuously with GitHub, Bitbucket, and Azure Repos, which use Kudu and Azure Pipelines; deploy using a custom container CI/CD pipeline; deploy from local Git; deploy using GitHub Actions; deploy using GitHub Actions containers; and deploy with a template. Run from package is when the files in the package are not copied to the wwwroot directory; instead, the ZIP package itself gets mounted directly as the read-only wwwroot directory. All other deployment methods in App Service deploy to the following directory: for Windows we use backslashes, \home\site\wwwroot; for Linux we use forward slashes, /home/site/wwwroot. Since the same directory is used by your app at runtime, it's possible for deployment to fail because of file lock conflicts, and for the app to behave unpredictably because some of the files are not yet updated. ZIP and WAR file deployment uses
the same Kudu service that powers continuous-integration-based deployments. Kudu is the engine behind Git deployments in Azure App Service; it's an open-source project that can also run outside of Azure. Kudu supports the following functionality for ZIP file deployment: deletion of files left over from a previous deployment; an option to turn off the default build process, which includes package restore; deployment customization, including running deployment scripts; deployment logs; and a file size limit of 2,048 megabytes. You can deploy using the Azure CLI, the Azure API via REST, and the Azure portal. You can use File Transfer Protocol to upload files; you will need your own FTP client, and you just drag and upload your files. Go to the Deployment Center and get the FTP credentials for your FTP client. You can use Dropbox or OneDrive to deploy using a cloud sync. Dropbox is a third-party cloud storage service; OneDrive is Microsoft's cloud storage service. You go to the Deployment Center and configure it for Dropbox or OneDrive. When you turn on sync it will create a folder in your cloud drive, for OneDrive: Apps\Azure Web Apps, for Dropbox: Apps\Azure. This will sync with your /home/site/wwwroot, so you just update files in that folder. In summary, this makes deployment easy for
developers. The next thing we'll be covering is the Azure App Service plan. The Azure App Service plan determines the region of the physical server where your web application will be hosted and defines the amount of storage, RAM, and CPU your application will use. It offers several pricing tiers. Shared tiers: there are two shared tiers, Free and Shared. The Free tier offers 1 GB of disk space, supports up to 10 apps on a single shared instance, provides no availability SLA, and allows each app a compute quota of 60 minutes per day. The Shared tier provides hosting for multiple apps, up to 100, on a single shared instance; no availability SLA is offered, and each app gets a compute quota of 240 minutes per day. It's worth noting that Linux-based instances are supported in this tier. Dedicated tiers: Basic, Standard, Premium, Premium V2, and Premium V3. Basic offers more disk space and unlimited apps, with three levels in this tier that offer varying amounts of compute power, memory, and disk storage. Standard allows scaling out to three dedicated instances, guarantees 99.95% availability, and also has three levels with varying resources. Premium provides the ability to scale up to 10 dedicated instances, ensures 99.95% availability, and includes multiple hardware-level options. Isolated tier: a dedicated Azure virtual network, full network and compute isolation, scaling to 100 instances, and an availability SLA of 99.95%. So the Azure App Service plan
you choose depends on your application's needs. Hey, this is Andrew Brown from ExamPro, and we are going to be learning about Azure App Service in this follow-along. It's a service that's supposed to make it easy for you to deploy web applications. I say supposed to because it really depends on your stack; Azure has more synergies with some stacks than others. So if you're like me and you like Ruby on Rails, you're going to find a lot of friction with Rails and Linux, but if you're using something like Windows servers or Python or .NET, you're going to have a much easier time. Still a really great service, just wish they'd make it a bit more broad there, but let's hop into it. So before we can go use that service,
let's make sure that it's activated and so we'll go over here and we'll go to Azure subscription and then down below
we're going to go to Resource provider now you think what you could do is just type in app Services uh and you'd be
wrong because the the service is under a particular provider if you want to figure out what provider it is we can go
so if I search for Azure app Services it's under web and domain registration so we're going to make sure
this is registered if we're using a custom domain which we are not today we need this one activated so going back
here I will type in web and you can see it's registered so if yours is not registered go ahead and hit that I
believe this by default is generally registered with new Azure accounts so I don't think that is an issue for you but
add um and so I'm going to give it a new name I just made it a moment ago but I'm going to try again and try to use the
same name so we're going to call this Voyager Great and then I'm going to go ahead and name this Voyager and I
already know that that is taken so I'm going to type in Delta Flyer and these are fully qualified
domains so they are unique with Azure app Services you can run a Docker container we're doing code this time
around and what I like to use is a ruby um but again you know if I want to use the cicd I'm not going to be able to the
deployment center with Ruby so that is not possible um and so we're going to go with python and run either a flask or
a Django app, I haven't decided yet. I am in Canada so let's go to Canada East, and uh down below here we have the plans.
generally the plans will tell you the cost underneath look you'll notice that it's loading but I just want to show you
that there are some discrepancies in terms of pricing so if I was to go to Azure app Services
pricing and we were to pull this up here we can kind of see the pricing here okay and if we scroll on down right
now we're looking at a premium V2 uh and oh no I don't need help I'm okay you'll notice that it's 20 cents per hour so if
I go here and do that times 730, because there's roughly 730 hours in a month, that's $146 a month. I believe this is showing me in USD
dollar yeah and in here it's showing me 103 Canadian which is lower um so it could be that because I'm running in a
Canada east region it's the price is different but you could imagine that if I had this at this cost at uh what did
we say here um at 146 USD to CAD I'd actually be paying $182 so you got to watch out for that
kind of stuff but I'm pretty sure this is what the cost is so just be aware that if you look stuff up in here it's
not necessarily reflective so you got to do a little bit more work to figure that out uh if we wanted to go here uh we
cannot choose the free tier when we're using Linux if we're using Windows I believe we can use it we're working with
Linux today so that's just how it's going to be um for the B1 this is totally fine but we want to utilize
deployment slots deployment slots is an advanced feature of uh the production version and that's the only way we're
going to be able to use it here this is 20 cents per hour again so I don't want to be doing this for too long but I
think what we'll do is before we do that we can just do an upgrade to Dev to prod so we can experience that I'm going to
go and just choose B1 okay so go next um we do not need any application insights for the time being and it will not let
us so it's okay we'll go next review and create and we'll go ahead and create this resource here and I will see you
back when this is done so um our resource Now set up we'll go to Resource and now that we're in here you'll notice
if we hit browse we're not going to see anything because we do not have anything deployed which makes sense right uh so
we're going to actually have to go ahead and deploy something so we are going to make our way over to the deployment
Center and uh it's just going to tell us that we have yet to configure anything and that's totally fine we're going to
go to settings it'll give it a moment and so the thing is is that we're going to need
something to deploy um I did not create an app but the great thing uh is in the Azure documentation they have a bunch of
quick starts here all right and apparently they have one for Ruby as well but today we are looking at python
uh and so they actually have an example repository for us here which is github.com asure samples python docs
hello world and I mean I could go make a repo for you but we might as well just use the one that is already provided to
us so I'm just going to pull this up to show you what's in it it's a very very simple application even if you don't
know anything about building web apps I'm going to walk you through it really easily here okay so we're going to open
up app.py so we are using flask if you've never heard of flask it is a very minimal python framework for creating
web apps uh very uninspiring uh homepage here but it gets the job done it's going to create a default route for us which
uh we have there we're going to call hello here and we're going to have hello world so that's all that's going on here
very very simple and we have our requirements this is our package manager I don't know why python uses txt files
it's very outdated to me but that's what they use and here we have flask all right so we're going to use that
repo and it's a public repo so it should be very easy for us to connect so we'll drop down go to
GitHub and uh the next thing we need to do is authorize GitHub all right so I ran into a bit of trouble there because
I could not uh authenticate my uh GitHub account but you know what I just made another GitHub account so that made it a
lot easier I'm going to go ahead here hit GitHub and we're going to try to authorize it and so now I'm logged into
this new one called exam Pro Dev and we'll go ahead and authorize this application and we're now in good shape
this repository doesn't have anything in it so um if I want to clone something I guess I'll probably have to Fork that
repo so we'll give it a moment to authorize and while that's going I think that's what I'm going to do I'm going to
examples or something samples or examples all right so I found a way around the problem I just made a new uh
GitHub account so that's all I had to do um and I just won't be using my primary account until I get my phone back but um
because it already connected to this new one called Exam Pro Dev you might have to put your credentials in here and it's
going to ask me to select some things it's a new account so there are no organizations there are no repositories
there are no branches totally brand new so what I'm going to need to do is get a repo in there so we'll just go ahead and search for python docs hello
world and if I type that right we're in good shape I'm going to go ahead and Fork this
repository I'll say got it and then I'll move this off screen here this is now cloned you should see it
cloned here and we'll go back here and this probably isn't live so there's no refresh button here so we'll have to hit
discard and we will give this another go here and we will select our organization which is our name there is the
repository uh should be main branch is kind of outdated I'm sorry but it's called Master that's what it is not my
fault that's azure's fault okay um and I think that's it I don't know if we need a workflow configuration file I
don't think so just going to double check here no I don't think so and uh what we'll do is
all right so now that that's all hooked up if we were to go to browse we're
actually still seeing the default page a deployment hasn't been triggered just yet uh so the way it works is it's using
GitHub actions so if we click into our call it main branch I know they got the wrong name but uh we're going to click
into our GitHub workflows and then below here we can see we have a yaml file uh and this is for GitHub actions
integration here and so what it's doing is it's specifying the branch uh what how it's going to uh build going to run
on ubuntu-latest the steps it's going to do it's going to check it out it's going to set up the python version it's
going to build it it's going to do that stuff and so in order for this to um take action we'd actually have to go
ahead and make some kind of manual change which we have yet to do so what we'll do is we'll go back to our
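For reference, the workflow file App Service generates looks roughly like this — a trimmed sketch, where the app name, action versions, and Python version are illustrative rather than the exact generated values:

```yaml
# Trimmed sketch of the generated .github/workflows file
name: Build and deploy Python app to Azure Web App

on:
  push:
    branches:
      - master   # the sample repo's default branch is still "master"

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python version
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Deploy to Azure Web App
        uses: azure/webapps-deploy@v2
        with:
          app-name: '<your-app-name>'
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
```

The real generated file also pins the publish-profile secret name to your app, which is why each slot gets its own workflow later on.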
I'm not sure how it's supposed to know that it's supposed to be doing the hello world oh I guess yeah sorry so this means
it's going to Route over to here um so I'm just going to make any kind of change here doesn't matter what it is
commits we should see that I made that change there it is we'll go back over here and this should be
deploying um so if we go over to logs here you can see one's in progress right now okay and so that's what we're
waiting we're just going to see that finish there we could probably open the logs and get some more information there
and so it just brings you back over to GitHub actions and so here's GitHub actions and it's performing the stuff
here so we're just going to give it time here and I'll see you back in a moment so we didn't have to wait too long it
only took 1 minute and 29 seconds if we go back over here um we might need to do a refresh and so we can see this is
reflected over here and so if we go back to it doesn't really matter if we go to settings or here but I'm going to hit
browse and see if my page is deployed it still is not so we do have a small little problem here and it's really
going to just have to do with how the app is serve so that's what we need to figure out next all right so our app is
not currently working and uh there's a few approaches we can take and the first thing I can think of right away is we should go
in SSH into that instance if you scroll on down here from developer tools you can go to SSH and click this button and
that's going to SSH you right into that machine right away you can also uh access SSH via the
um CLI command so I believe it's something like az webapp ssh it'll do the exact same thing
you do that from the cloud shell but that's not what we're doing today if I give this an ls in here and we're in Linux we
can see we have our app here and uh what I would do is I would see what's running so I would do ps aux | grep python and you can notice that we have a gunicorn that's running so that is where our python
instances are running so you're not looking for flask you're looking for python here and if we wanted to we could
curl port 80 curl just means let's go look at that page and it should return some
HTML like print out the HTML to us and since it doesn't that means the app is not running um so what you could do is run flask run and
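The same diagnosis can be done from a terminal — the az webapp ssh command alluded to above is real, though the resource names below are placeholders and this of course needs a live Azure subscription:

```shell
# Open an SSH session into the App Service container (placeholder names)
az webapp ssh --resource-group <your-rg> --name <your-app>

# Inside the container: see which Python processes are running
ps aux | grep python        # expect gunicorn worker processes

# Ask the local web server for the page; no HTML back means
# the app isn't actually being served
curl http://localhost:80/
```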
do is I can go up uh back to my deployment Center here and I'm going to go get that link
here and just ignore the fact that it's working uh it's it's not working right now I know for certain it's not um but
if we do 5,000 that won't resolve because Port 5,000 isn't open so we can't really just uh put 5,000 in there
here okay it will work uh this is not a great way because of course as soon as you kill it here uh technically the server
should stop running um and so you'll run into that issue uh so what we need to do is provide a configuration to gunicorn
which is a python thing again it's not so important that you know how like what these things are but the idea is that
you understand as administrator you want to make sure you have an app that runs after you do a deploy and so in this
particular one we need a startup.txt uh and interestingly enough there is example code by the same author of the
other one we were looking at here I believe it's the same person or it might not be but uh they
have a startup.txt right and so in here you can see that it binds on 0.0.0.0 it starts up four workers starts up the
will do is I will go back to my GitHub repository that we have here and I can just go ahead and add a
new file so I'm going to say um add a file create a new file here we'll call it
startup.txt I'm going to copy this command here and paste it in there so gunicorn will bind the workers and start up the app
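The startup command being pasted is along these lines — a sketch based on what's described (bind address, worker count, and the module:app target), so the sample's exact flags may differ:

```shell
# startup.txt — tells App Service how to launch the app with gunicorn
gunicorn --bind=0.0.0.0 --workers=4 startup:app
```

Here startup:app means "the app object inside startup.py", which is why the sample repo also ships a startup.py.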
um startup:app is being run by something here so if I go back here I think they have a startup.py here and let
me just see here theirs is slightly different eh so they actually have like a full app going on
up um and I'll just put this tab up here so we'll be back here in 2 seconds and if I give this a nice refresh yeah
you can see it deploys in progress so uh this doesn't take too long we'll just wait close that there we'll just wait a
few minutes we click logs it just opens it back up here and we'll see how that goes all right so uh your deploy may
have finished there but the thing is is that we're not going to really know if uh a change has taken effect unless we
actually go ahead and update our code so what I want you to do is go to your code tab go to your app.py we'll hit edit and
I'm going to go ahead and change this to Vulcan and then we'll scroll on down hit commit
changes and we'll make our way back over to our deployment Center and we'll give it a refresh here and we're just going
to wait until this one is complete and we will double check to make sure that that is changed if it is not we will
take action to fix that okay all right so we just waited a little while there for that deploy to happen and if we go
to our website here it is taking effect so that's all we had to do to get it working so that's pretty good um now in
order to utilize this feature we're going to actually have to upgrade our account because we cannot utilize them
at this uh the basic plan here we got to go to standard or premium so let's go ahead and give that an
upgrade uh so here's the B1 we're going to go to production here um and I think yeah we're going to have to choose
this one here uh very expensive so the thing is we're going to just upgrade it temporarily unless there's more options
down below that are cheaper yeah these are the standard tiers let's go with this one here
because it's only $80 again we're not going to be doing this for long but I want to show you how to do staging slots
and auto scaling okay so we'll go ahead and apply that there and now it says that it's applied
so if I go back to our app here and we click on deployment slots sometimes it doesn't show up right away if it doesn't
that's not a big deal you just wait a bit but today it's super fast so we're going to go ahead and add a new slot
we're going to call it uh staging we're going to deploy from our production Branch here and I'm going to go ahead
okay great so we waited a little bit there and uh our slot is created so I'm going to just hit close there and so now
let's go take a look and see if we can actually see the application here so I just clicked into it I click browse and
we're getting the default page so nothing is actually really deployed to it uh so how are we going to do that
that's the the main question here um so what I'm going to do is I'm going to make my way over to the deployment
Center and you can see that it's not configured for the slot so we are going to have to set it up all over again even
though it copied over configuration settings it didn't copy over the code so we go to GitHub we'll choose our
organization again I'm going to choose the repository we're going to choose that main branch again there we're going
to let it add a workflow and notice that this time it's going to call it staging yaml so there'll be a separate workflow
that gets created we're going to go ahead and save that there and what we can do is again click
onto our Branch name there and if we click into our workflows we'll now notice that we have a staging example
it's the same thing um but it should be able to now deploy so the whole purpose of um these deployment slots is that
we can deploy different versions of our apps but also um it's just a place where we can uh view
things before we actually roll them out so we want to make sure 100% that they are working correctly um I don't think
this will automatically push out let me just go to my actions to see if this is deploying notice that we have two
workflows now we have staging here uh and yeah it looks like it's going to deploy here so we'll just wait a little
bit um but maybe what we can do is try to have a a slightly different version uh for each one here okay uh but we'll
just let that finish and I'll see you back in a moment all right so our deploy finished
there so now if we go back to our website here we go browse we should see that application it says hello Vulcan
and if we go and take out this we still have hello Vulcan so how can we have a uh a variant of this so that we can push
out to that so what I'm going to do is I'm going to go back to my application here I'm going to go to code and I'm
just going to make a minor change um I'll say also is that spelled right startup doesn't look correct to me um so
maybe I'll go ahead and adjust that file but it doesn't seem to be affecting anything which I'm a bit surprised
by so what I'll do is I'm going to go and edit that file and give it the proper name can I rename this file yes I
can so we'll call that startup file I thought we need that for deploy I guess it just works without it which is
nice uh if we go back here I'm going to go and I actually just want to edit my um app here again and I'm going to go
and edit this and we'll say um hello Andor or hello Andorians maybe and so if I go back to my actions
the question what is it deploying is it going to deploy the production or the staging and it looks like it's going
one way we could tell is we can go to our logs here and we can see that um so we did do a deploy so there's one change
here uh if we go back to our main application and our deployment Center here and we go over to our
logs you can see that they're both deploying so it doesn't seem like it's a great thing that that's how it works so
the question is is then how would we um facilitate that deploy right how could we do that I suppose what we could do is
just make a separate staging Branch um so if I go over to code here um I don't think we can just make
branches through here so what I'm going to have to do is go ahead and oh I can create a branch right here so we'll just
type in staging and we'll go create ourselves a new branch and now we are in this branch and
Branch so you think what we could do is go in and just change our settings so that it deploys from that one uh we'll
go back to our deployment slots we'll click into staging here and we need to change our
configuration settings um I think we could just do it from um here hold on here I could have swore
it specified the branch if we go to deployment Center here I think it's set up on that other Branch there I think we
just adjust it here so yeah I think we could just um adjust these settings um we can't discard them but
uh we will go ahead and click into here go into staging and we'll just change what the branch is
that and we'll see if it actually reflects those changes there so we will go here and hit
refresh we'll see if it picks up staging now if we go to settings it's not picking it up so um I'm not
sure I don't think perform a redeploy operation we don't want to redeploy so maybe what we'll do is just we'll have
to do a disconnect here because it's connected to the wrong one here so save workflow file
um okay we'll just go ahead and delete it it's not a big deal we'll just have to make a new one
here we'll go to GitHub we'll choose our uh organization again or repository our staging Branch this time around we'll
let it add one see it says we use an available workflow so we could have kept it there and added it there um and we'll
go ahead and save that so now we'll have two separate branches there and we'll give that some time to deploy because
that will now trigger a deploy off the bat and so I'll see you back here in a moment all right so after a short little
wait here it looks like our app is done deploying so we'll go over here we'll make sure that this is our staging
server is good and we want to see that our production is different perfect so we now have a way to deploy to each one
but imagine that we want to swap our traffic so we're happy with our staging server we want to roll that out to
production and that's where we can uh do some swapping so what we'll do is click the swap button and we're going to say
the source is the staging and this is our Target production and we're going to perform that swap uh right now we can't
do a preview because we don't have a particular setting set that's okay and it's kind of showing if there are any
changes so set of configuration changes we don't have any so that's totally fine as well we'll go ahead and hit
Swap and that's going to swap those two I believe it has zero downtime so we will be in good shape if that happens
and so if I was to hit refresh it should now say Klingons and if I go to my staging server it should be the other way around
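The slot swap just performed in the portal can also be done from the CLI — a sketch with placeholder resource names, requiring a live subscription:

```shell
# Swap the staging slot into production (App Service performs
# this with zero downtime by warming up the slot first)
az webapp deployment slot swap \
  --resource-group <your-rg> \
  --name <your-app> \
  --slot staging \
  --target-slot production
```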
something else that we can do um so notice over here we have these percentages here um not sure why it
won't let me change those so maybe I'll have to look into that so I'll be back in a bit so I'm not
sure why it's not showing us that traffic slot there but what I'm going to do is just maybe try to trigger a deploy
back into our staging and maybe that's what it wants to see um so what I'm going to do is go back to my code here
we'll be in our staging Branch here I'm going to go ahead and uh edit this file here and we will just change this to
Borans and we will hit update and we will let that go ahead and deploy so if we go to actions here we
can see that it is deploying um and we'll just give it some time okay so see you back here in a bit
I mean the other reason could be that we're just not at the main level hold on here uh if we go back here to
deployment slots you know what I think it's just because I was clicked into here and then
I was clicked into deployment slots that they're both grayed out yeah it is so we can actually do that at the top level there
doesn't hurt to do another deploy though so um we'll just wait for I'll wait for that deploy to finish and then we'll
come here and uh adjust that there okay all right so let's take a look at uh doing some traffic switching here so
right now we were to go to our production we have Klingons and if we were to uh go to our staging
we have Borans so imagine that we only want 50% of that traffic to show up so what we can do is put in
50% and what I'm going to do is um do I hit enter here or oh sorry save up here there we go um and so what's going to
happen is this should take effect I think right away yep uh and so now we have 50 50 50% chance of getting
something else here um so I'm just going to keep on hitting enter here if that doesn't work we can try an incognito tab
and there we go we got the opposite there and so this is serving up staging right uh and this is serving up
production but they're both on the production URL so that's a way you can split the traffic so uh that's pretty cool
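The percentage split set in the portal maps to the traffic-routing commands in the CLI — again a sketch with placeholder names:

```shell
# Route 50% of production traffic to the staging slot
az webapp traffic-routing set \
  --resource-group <your-rg> \
  --name <your-app> \
  --distribution staging=50

# Clear the split again when you're done testing
az webapp traffic-routing clear \
  --resource-group <your-rg> \
  --name <your-app>
```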
scaling all right so let's take a look into how we can uh do some scaling with our app service this is only available
if you have standard and beyond so standard and premium and so on so if we just search for
scale we have two options here we have scale up and scale out so scale up is pretty straightforward that just means
to uh make our instance larger and so we already did that when we upgraded from our B1 over to our S1
here right so if I was to go here and I'm not going to do that but if I was to do that an upgrade um that would be
scaling up right and notice that we're talking about scaling so right now we're limited to 10 instances which is totally
fine but now let's take a look at scaling out so if we go to scale out here and go to Custom Auto scale what we
can do is we can Scale based on a metric so we can add or remove servers based on the demand of the current um web
applications traffic so we only paying for servers when we need them and so we have a minimum of one a maximum of two
that seems fine to me but we're going to add a rule here and I want to scale this on um the maximum time we're going to do
it on CPU uh percentage I just want to have a very easy way to trigger this so we can see a scaling event in action
here it has a maximum of 16% I might just lower that down even further will it let me type in there no so 16% is
what it's going to have to be it's not a big deal but I am going to reduce it down to actually I think sorry I don't
know why I was going here the metric threshold to scale an action I'm going to put it at 10%
sorry okay so here's that line and so we have uh and I I like how you can drag it but you can kind of have an idea that we
have a high chance of having this um trigger I just want to do this so that we have a a good chance so if I was to
put it here you can notice that it's very easy for us to spike our traffic and and cause a scaling event now I'm
going to set the duration to 1 minute so we have a much higher chance of uh triggering that there okay set a
duration less than 5 minutes May generate uh transient spikes yeah that's fair but I mean I just want to show you
a trigger happen and we need a cool down time probably um and it's set to 5 minutes that's totally fine we're going
to add one and that looks fine to me I'm going to set this to maximum okay and so now we're very likely to trigger that
there we'll go ahead and hit add that uh instance there and so now that we have that we're going to go ahead and save
that and so now that that's there what we want to do is actually trigger a scaling event and so uh where we're
monitoring and uh what we're going to do is go over to um it should be in sorry I forgot uh the place where we need to go
take a look here is actually in the Run history here so if we go here and check 1 hour we can see how many instances are
running uh and I think if I dial it back here it should show me over time as it changes we do have a scale up event that
has happened which happened I guess four minutes ago um so I guess it gives you kind of an idea of how many instances
are running which right now are two um so maybe our uh maybe our scaling event is not uh in the best uh use case there
because it's happening too frequently so what I'm going to do is go ahead and modify that scaling rule um and so I'm
just going to go back and click here and maybe we'll just make it so it is less aggressive so what I'm going to do is
just change it so it's over the duration of five minutes and I'm going to just put it
that and so now if we go back to our run history here uh it still shows that it has um two as you can see here but I
want to see this drop back down to one so it's going to check every 5 minutes or or within the span of 5 minutes so
what I'm going to do is just uh wait here I'll see you back in a bit until we see a scaling action happens uh here
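The autoscale rules being clicked through here can equivalently be scripted — a sketch with placeholder names, mirroring the min 1 / max 2 setup and the CPU threshold used above:

```shell
# Autoscale setting on the App Service plan
az monitor autoscale create \
  --resource-group <your-rg> \
  --resource <your-plan> \
  --resource-type Microsoft.Web/serverfarms \
  --name demo-autoscale --min-count 1 --max-count 2 --count 1

# Scale out by 1 when average CPU exceeds 10%
az monitor autoscale rule create \
  --resource-group <your-rg> --autoscale-name demo-autoscale \
  --condition "CpuPercentage > 10 avg 1m" --scale out 1

# Without a matching scale-in rule the instance count never
# drops back down, which is the behavior observed here
az monitor autoscale rule create \
  --resource-group <your-rg> --autoscale-name demo-autoscale \
  --condition "CpuPercentage < 10 avg 5m" --scale in 1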
okay yeah I'm just sitting here waiting for it to scale down and I don't see it going so it makes me think that I need
to go ahead and set a scale uh scale down action let's take look at the one that we currently have uh so this one is
set oh you can see it's still spiked we don't even have anything going on here but what I'm going to do is just be
okay and so here we'll go back here and I'll save that and I just want to see if it scales
down I shouldn't have to set a scale down action should just go Um and what I'm actually going to do is be a little
bit more aggressive I know I'm setting a lot of stuff here but I'm going to just set it to duration of 1 minutes so we
can see this a lot sooner and uh we will go back to our run history here and we'll see if we observe
a scale down all right so um it's not scaling down here but uh I think it's probably because I need a scale in
action so what we'll do is go ahead and add a new rule uh this thing if we go here and we just look at
it I don't think it's necessary for us to set one here I think you get the idea um but that's it for scaling so we're all
done here with Azure app Services all we got to do is go ahead and delete it so let's go ahead and delete
our app here okay um so there's a few different ways we can do it I'm going to do it via
resource groups um I believe we called it Voyager here so click into that and I'm going to go ahead and delete the resource
group hey this is Andrew Brown from exam Pro and we are looking at what is infrastructure as code and before we
talk about that we need to talk about the problem with manual configuration so manually configuring your Cloud
infrastructure allows you to easily start using new cloud service offerings to quickly prototype architectures
however it comes with a few downsides so it's easy to misconfigure a service through human error it's hard to manage
the expected state of the configuration for compliance it's hard to transfer configuration knowledge to other team
members and so this is why uh infrastructure as code is going to really help us out so um infrastructure as code
commonly abbreviated to IaC and you'll see that a lot in this course allows you to write a configuration script to
automate creating updating or destroying Cloud infrastructure notice I gave great emphasis on automate or automation
because that is really key to um infrastructure as code IaC can also be thought of as a blueprint of your
infrastructure it allows you to easily share version or inventory your Cloud infrastructure and just to kind of give
you a visualization imagine you write a script and that's going to uh provision uh and uh launch a bunch of cloud
services hey this is Andrew Brown from exam Pro and we'll be going over Azure automation State configuration Azure automation State configuration lets you
define and enforce the desired state of your Azure VMs ensuring consistency across multiple machines you can specify
configuration details such as installed software Windows features registry settings and file contents so let's take
a look at the example we have on the right configuration definition simple config the configuration named targeting
the node MVM file creation ensures a file named hello.txt with the content hello azzure exists in the C drive Ure
is equal to present by applying this configuration you ensure that any VM assigned to the MVM node will have the
hello.txt file with the specified content this helps maintain a consistent and desired State across your Azure VMS
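The configuration described can be sketched in DSC syntax like so — node and file names follow the example as narrated, so treat the details as illustrative:

```powershell
Configuration SimpleConfig {
    # Target node name as described in the example
    Node "MyVM" {
        # Ensure hello.txt exists with the given content
        File FileCreation {
            DestinationPath = "C:\hello.txt"
            Contents        = "Hello Azure"
            Ensure          = "Present"
        }
    }
}
```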
hey this is Andrew Brown from exam Pro and in this section we'll be covering Azure Resource Manager Azure
resource manager is a service that allows you to manage Azure resources Azure resource manager is a collection
of services in the Azure portal so you can't simply type in Azure resource manager in the search tab it is a
management layer that allows you to create update or delete resources apply management features such as access
controls locks or tags and write infrastructure as code using JSON templates we will be examining the
following key components that form the Azure resource manager layer we have subscription management groups resource
groups resource providers resource locks Azure blueprints as well as resource tags Access Control role based access
controls Azure policies and arm templates you can think of azure resource manager as a gatekeeper all of
the requests flow through arm and it decides whether that request can be performed on a resource such as the
creation updating and deletion of a virtual machine and its arm's responsibility to authenticate and
authorize these requests arm uses azure's role-based Access Control to determine whether a user has the
necessary permissions to carry out a request when a request is made arm checks the user's assigned roles and the
permissions associated with those roles if the user has the necessary permissions the request is allowed
otherwise it is denied the next concept we'll go over is the scope for Azure resource manager
we've briefly covered scope in Azure policy and Azure rbac but we'll go into more detail with them in the following
sections you can govern your resources by placing resources within a logical grouping and applying logical restrictions in the
form of rules management groups are a logical grouping of multiple subscriptions subscriptions Grant you
access to Azure Services based on a billing and support agreement resource groups are a logical grouping of
multiple resources and resources can be a specific Azure service such as Azure VMS so that's an overview of azure
and in this segment we'll be diving into arm templates so what exactly is infrastructure as code infrastructure as
code is the process of managing and provisioning computer data centers such as those in Azure using machine readable
definition files like JSON files rather than depending on physical Hardware configuration or interactive
configuration tools you write a script that will set up cloud services for you there are two main approaches to IAC
declarative here you describe your desired outcome and the system figures out how to achieve it imperative here
you provide specific instructions detailing exactly how to reach the desired state arm templates are JSON
files that define Azure resources you want to provision and Azure services you want to configure with arm templates you
can ensure a declarative approach meaning you merely define your intended setup and the system handles the rest
and know exactly what you have defined for a stack moreover
arm templates empower you to establish an architecture baseline for compliance achieve
modularity break up your architecture in multiple files and reuse them ensure extensibility add Powershell and Bash
scripts to your templates test using the arm template toolkit preview changes before you create infrastructure via
template see what it will create built-in validation will only deploy your template if it passes track
deployments keep track of changes to architecture over time policy as code apply Azure policies to ensure you
remain compliant use Azure Blueprints which forge a connection between a resource and its template
integrate with CI CD pipelines utilize exportable code letting you capture the current state of resource groups and
individual resources and benefit from Advanced authoring tools for instance Visual Studio code offers sophisticated
features tailored for crafting arm templates so as you can see arm templates have quite a lot of benefits
so there are two uh primary ways of doing infrastructure as code we have arm templates the arm stands for Azure
resource manager and the other one is Azure Bicep we're going to focus on the first one infrastructure as code is
the concept of um uh defining all your infrastructure as code uh and that might be confusing
because that might sound like the SDK or the CLI and uh it is confusing until you start working with it but the key
difference is that when you use the CLI or the SDK uh to programmatically create resources they don't keep uh track of
the state uh and so that is the key difference whereas if you ran a CLI command to create uh let's say
a virtual machine and ran it again it would attempt to create a second um virtual machine whereas with infrastructure as code if it's already there it's going to
either update it or say hey you can't update it there's already one that exists so the idea is that um uh it's
different in that process or in that sense uh there is a word for it um I believe the word is idempotent I always
have a hard time saying it but um that is the key difference between those programming methods and IaC so what I
want to show you is uh arm templates and um arm again stands for Azure resource manager it's part of resource groups so
if we type in arm here we're not going to really get uh a service or anything like that we say Azure or Azure resource
manager okay you're just not going to uh exactly get that because um it is it is something that's there but it really is
talking about resource groups so when you deploy a resource group it will always create an arm template for you no matter
if you do click Ops um you'll always get an arm template and this is something that's very different from other providers
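That idempotency idea from a moment ago can be sketched in a few lines of plain Python — purely illustrative, no Azure API involved, and all names made up:

```python
# Illustrative-only sketch of why IaC is idempotent while raw CLI/SDK
# calls are not: the engine tracks state and converges toward the
# desired configuration instead of blindly creating a second copy.
resources = {}  # stands in for the state an IaC tool keeps


def ensure_vm(name, size):
    """Converge the named VM toward the desired size."""
    if name not in resources:
        resources[name] = {"size": size}
        return "created"
    if resources[name]["size"] != size:
        resources[name]["size"] = size
        return "updated"
    return "unchanged"  # running the same script twice is a no-op
```

Running ensure_vm twice with the same arguments changes nothing, whereas a raw create call issued twice would give you two VMs.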
so like when you use AWS or GCP um when you launch a resource it won't necessarily produce a
template for you but Azure is very unique in that sense that they will do that so what I want to do is I want to
go ahead and uh explore some things with arm templates so you're very aware of how they work and I believe that uh
there is a way to uh deploy if we type in template here there should be something like deploy a custom template
whoops and that's how you would go about deploying a custom template and they actually already have some common
templates here so maybe we can uh take a look at one as a quick start and try to understand uh what these templates look
like another key difference between other cloud service providers is that it's not very common to write arm
templates by hand um in fact it's very tedious and you would not necessarily want to do it as opposed to AWS where
you have CloudFormation where it's totally normal to do and that's why um having a layer on top of uh arm makes it so much easier
like using again Azure Bicep or Terraform but let's go ahead and create a Linux virtual machine here and notice that I
select the template and it has some options here so um what I want to show you here this looks like the usual
process for setting up a um a virtual machine but if we go here we can edit the template and then it's going to
allow us to see what this template looks like so arm templates this is what it looks like and I believe that uh
they're only JSON I kind of forget let's go ask ChatGPT so are arm templates uh in Azure only JSON or can
they also be yaml the reason why I don't remember is because I work with a lot of cloud service providers usually
they'll provide both options um but generally uh I always remember that arm templates are only JSON so there is no
yaml support uh for it but of course you could use yaml locally and then convert it over back to JSON but anyway so if we
look at this template, we have a few things. We have a schema that describes what the format of this JSON should be. We have some metadata, which probably gets autogenerated or is additional information attached to the template. We have parameters: the input values that make our template reusable. Scrolling down, we have variables, which are generally values derived or modified from parameters, and then our resources down below. Again, if you've seen CloudFormation templates or GCP Deployment Manager configs, these will look very similar: we define the type for the resource, give it a name, set its properties, and it can depend on other resources to control the order in which they're deployed.
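To make that anatomy concrete, here is a minimal hand-written ARM template with all four sections; this is a sketch of my own (a storage account resource and an illustrative parameter name, not the portal's generated VM template):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "variables": {
    "location": "[resourceGroup().location]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[variables('location')]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Note how the bracketed strings are template expressions: parameters() reads an input value and variables() reads a computed one, exactly the relationship described above.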
Anyway, my point is that this is a template, and we could go ahead and actually deploy it, but I'm not that interested in that part because it's not that exciting. What's more exciting is what happens when you do ClickOps. I was going to deploy a virtual machine, but I've changed my mind; we'll do a storage account instead, since I'm still on the free tier and I'm trying to keep things easy for you as well, and virtual machines spin up a lot of supporting resources, so I don't want something that complicated for this example. What I want to do is launch a storage account, look at the template it generates, maybe attempt to re-import that template, and then delete the storage account and recreate it. So I'll go here, hit Create, and we'll create ourselves a new resource group.
Today the portal is really thinking; I don't know if I'm having internet issues or if Azure is just slow, since Azure's UI is not always responsive. We have to give the storage account a name, and it has to be globally unique, so I'll say my storage account, a bunch of numbers, and my initials. It's complaining that the name has too many letters, so I'll trim it; there we go. I'll let it choose whatever region it wants, leave it on Standard (that's totally fine), go to Review where we can see all of our options (looks fine to me), hit Create at the bottom, and give it a moment, so I'll be back in just a second. All right, the deployment is complete, and what I'm interested in is checking out the resource group. Notice that the deployment shows us the inputs we entered, and since we looked at template parameters earlier, you can see it is literally creating an ARM template and feeding in those parameters; it also shows the outputs of that ARM template, and then there's the template itself. Every time you deploy Azure resources, it creates this IaC code for you, and that is one of the greatest advantages of Azure; as much as I complain about Azure, this is one of its really good features. There are a few things we can do here: add the template to our library, download it, or deploy it again. I usually don't fiddle with these too much, but let's import the template. I'll pick my resource group (just remember a group could contain a lot of resources, which makes a saved template much more valuable), choose the same region, set the version to 1.0.0, hit Next (it shows me the template), skip the tagging for now, then Review and create. Now we have our own template, which is kind of cool. I wasn't exactly sure where it went (oh, we had to hit Create first, which wasn't super clear), but the template has now been saved. Where did it save to? It just says it's a template spec, and it's really taking its time to show up. Templates like this should appear under Template specs, but after a refresh it's still not there. We know we created it, and I've said in other videos that Azure doesn't always propagate things right away; you have to wait, and have confidence that things will show up. Even though it's not listed yet, I know I created one, and over here it says the template spec deployment succeeded.
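As an aside, both of those portal actions have CLI equivalents; this is a sketch with made-up names (my-storage-rg and storageSpec are assumptions, not anything we created above):

```shell
# Export the ARM template Azure generated for an existing resource group
az group export --name my-storage-rg > exported-template.json

# Save that template as a versioned, reusable template spec
az ts create \
  --name storageSpec \
  --version "1.0.0" \
  --resource-group my-storage-rg \
  --location eastus \
  --template-file exported-template.json

# List template specs to confirm it was created
az ts list --resource-group my-storage-rg --output table
```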
So I really do think it will show up here eventually, and I want to prove that, so I'm going to take a break and give it a good 10 or 15 minutes, then come back and see if it appears, just to give you validation and confidence that patience always pays off with Azure. All right, I've waited a good chunk of time (I was just talking to Bo) and now I'm back; let's refresh, and look, it's here. So I told you: you've got to be really patient with Azure. It's known for being slow for some particular resources, and when I say some, I mean a lot, so just have that patience. Anyway, what I want to do now is open another tab (every time I open a new tab it wants me to log in again, great), go over to our resource groups, into that new one we created, and delete the storage account resource. Then I want to see what happens if we attempt to redeploy from our template. We don't strictly have to delete it, but I want to remove it completely and try recreating it purely from the ARM template, so we'll give it a moment there.
And oh, notice we have an Upgrade button; that's definitely new, and I've actually never seen it before, so it's interesting. I guess after a while they just poke you about upgrading. Over here we're also getting a prompt: it looks like we're starting to accumulate some spend. I said earlier that Azure is really good about surfacing alerts and things like that, much better than other providers, and here it's showing, in Canadian dollars, that we've already consumed half our free spend. I'm not sure how, because I haven't done a whole lot, so we'll try to figure out where that free spend has been going; maybe it's overestimating, or I left something running, maybe that cluster is still up. I don't think so, but we'll go double-check. In a way I wanted that to happen so I could show you this kind of thing, because all we created was a couple of virtual machines and a few other resources; nothing else should be costing us spend, but we'll go take a look and see what we can figure out. Anyway, back here, I'll give it a refresh, and the storage account resource is gone. What's interesting is that it also got rid of the template. Why? That was something we created separately, so I guess it was linked to the resource group, and that kind of defeats the purpose of us uploading our ARM template, which is a shame. We could re-import the template, but there's not much interest in that, so I'm not going to bother. I think we're pretty much done here; we've proved the point that Azure is really good at producing these ARM templates, and you're not going to want to write them by hand. You could ask ChatGPT to write one, though I'd probably not do that myself. I think that satisfies this video on ARM templates, so we'll see you in the next one.
We've looked at utilizing ARM templates, and I think it would also be really great to look at Azure Bicep, because it's a more productive way to write infrastructure as code, and honestly I really like Azure Bicep; I think it's really cool. So what I want to do is head back over to GitHub. We looked at creating a repo in GitHub back in our SDK video; that was a really messy video, and I'm hoping this one isn't as crazy, but I don't like editing out the challenges, because I want to give you the full picture of what it looks like to work through these things. Go to GitHub and, of course, create yourself a GitHub account; it has a free tier. Once you have your account, we'll make a new repo. I'm going to use the dropdown here and choose ExamPro; you'll likely just have one account name here, whereas I have multiple accounts to create things under. I'll make a new repo called azure-bicep-example; you can call yours whatever you like. I'm making it public so you can see the code (you can make yours private if that's available to you), because if you want to find this repo under exampro, copy it, and work with it, you can. We're going to use Codespaces, which has free credit usage. You could do this locally on your own computer, but you'd have to install a bunch of stuff, and I like using cloud developer environments. Azure doesn't have one built into its portal; most other providers do, but I think that's because Microsoft owns GitHub, so you can just use Codespaces instead, and to me that's effectively using Azure. So we'll create a Codespace on main. Understand that as long as the Codespace is running, you're consuming credits; if you want to stop it, you can go to the command palette, type codespaces, and there's an option to stop the current Codespace. I'm not doing that right now. I want my theme to be dark, so I'll go to the cog down here, Themes, Color Theme, and switch to GitHub Dark; we'll give it a moment, because it's a bit slow to load. I greatly prefer Gitpod, but I'm using Codespaces here because the extensions we need are the official Microsoft ones and it's just easier to show you this way; in most other courses I use Gitpod.
we uh changed our theme if that matters to you uh we're going to want to do some Azure bicep stuff so on the left hand
side I want you to go to extensions and we're going to search Azure bicep I don't use Azure bicep a lot but I
definitely know how to use it when we need to so I know they have a really good extension for it and it's like it
writes code for you it was it's the nicest experience I've ever had with a um an I uh
tool and this is the thing because Microsoft um built Visual Studio code and you know their they own GitHub they
can have really amazing synergies for developers um so a lot of times I find it easier to work in vs code and use
their extensions to interact with Azure than it is to use Azure portal itself and there are specific services like
Azure functions where you really have to use Visual Studio code so it's it's essential that you get used to using uh
Visual Studio code whether it's local or in a cloud developer environment so I'm installing that this Azure bicep um
extension and this thing will help us write a lot of uh code um uh and it has like templates and other stuff like that
so that's what I want to uh take advantage of it um I don't remember the extension for Azure bicep file so we'll
just go to the Azure bicep website I'm sure they'll have like a quick start and we'll work through it here
together so I want to go here and I just want to know what the extension is it's bicep so that's what it is so we're
going to go here and make a new file we're going to call it main. bicep and something we didn't do before
Something we didn't do before, and which I wanted to do in the SDK video but didn't have a use for, is install the Azure Tools extension pack. This is something you all might want to install: see how it bundles a bunch of extensions below, one for databases, resources, functions, and so on; installing Azure Tools installs all of them, and there's even one for CLI tools that will autocomplete Azure commands and do all sorts of fun stuff, including for Azure Storage. The reason we want Azure Tools is the Azure Account piece, which lets us log in to our Azure account quickly. Remember before, we typed az login and then had to grab a device code and plug it in; with this installed, we can just use the command palette and sign in very quickly. So I'll go to the command palette and run Azure: Sign In. To be fair, this may not make much difference down here, but I think it installed the CLI pieces for us, whereas before we had to install them manually. We'll choose the device code flow and sign in to Azure Cloud; let's try that one first and see if it works. Just so you know, Azure has different clouds, like Azure China, Azure Germany, and Azure US Government; generally you always choose plain Azure unless you live in one of those regions and have to use that one. A browser window opens, we sign in, close the window, and I believe we're now logged into Azure. Also notice on the left-hand side there's a little Azure icon that lets us see our resources; we can sign in from there too. We should be signed in; maybe it didn't take, so I'll try one more time. The Azure Resources extension wants to sign in, so we'll say Allow; maybe it has to sign in separately, I don't know.
Under Storage accounts we do have some; we can see them here. This one, I think, is for our Cloud Shell, which is why it starts with cs and has a random number after it. If we had virtual machines and other things, we'd see them here too; when you're working with Azure Functions, you'll often use this view. We could also probably right-click to create a resource, so there are a lot of ways to create things from here, resource groups and some other particular things included. That's interesting. I also wonder about extensions we didn't install: these entries obviously come from what's installed, but there could be others, like Azure Machine Learning, so I'm just scrolling through to see what might not be installed; Azure Container Apps was one of them. Anyway, enough of that; let's stay on track and write some Azure Bicep.
Now that we have the extension installed, we want to start writing; I just need to see a little bit of code to jog my memory. If we start typing, it should start suggesting, and what we want to make is a storage account. So I'll type resource, and notice it's already autocompleting; the next thing should be the storage account snippet, and that should autocomplete too. I remember this being really good at autocompleting, so give me two seconds while I go check. Yeah, looking at the docs, it's supposed to start autocompleting as we type, so I'm not sure why it isn't; sometimes VS Code doesn't do what we want, which is totally fine. I'll type storage account again, save the file for a second, and... there, now it's autocompleting. Resource, storage... am I typing it wrong? No, it's this snippet; it's really interesting that it wasn't completing before, but we can just copy from the docs. I remembered it as more powerful because I was so impressed the first time I saw it writing all the code for me, which is not what's happening here. I guess I was expecting a little more from it; we'd have to work with it a bit more to find out, and again, I don't work with it every single day, but I'm always very excited to use it.
The thing I want to look at next is the actual reference documentation. Go to the resource reference on the left-hand side; if you know the resource type, you can just type it in, and all the resources are listed there. We're trying to create a storage account, and the docs say that to create the resource, add the following to your Bicep file. Getting familiar with it: the first identifier is the logical name, and it doesn't actually have to be called storageAccount; it's just the name we use to reference this resource within the file, so I could call it storageAccountAB and it shouldn't matter. Some of the properties are required and some are not. What's really interesting is the name expression: it grabs the resource group ID and creates a deterministic hash based on that string, so it uses the resource ID to make a stable pseudo-random suffix, and then prefixes it with toylaunch. I don't want toylaunch, so I'll put bicep in front instead. I thought about hyphens, but I don't think storage account names allow them, so we should end up with a storage account named bicep plus some random value. Then the location pulls in the resource group's location. What's interesting is that we haven't created a resource group; the docs say resourceGroup() returns the current resource group scope, so I'm not sure how this will work without an existing resource group. But anyway, we've written our Azure Bicep.
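Putting those pieces together, the whole file is only a few lines. This is a sketch along the lines of the quickstart snippet, with my bicep prefix swapped in (the API version is illustrative; pin whatever the docs currently recommend):

```bicep
// 'storageAccount' is just the logical name we reference within this file
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  // uniqueString() makes a deterministic hash of the resource group ID;
  // storage account names only allow lowercase letters and digits, so no hyphens
  name: 'bicep${uniqueString(resourceGroup().id)}'
  // resourceGroup() returns the scope the deployment targets
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```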
We probably want to deploy this now, so I figured I could just type bicep in the terminal; nope, that's not the command. Let's go back to the docs for a second and see what the CLI commands are. We could use the command palette, since they're probably all in there; if we type bicep in the palette we get all those commands, but I really want to know the CLI commands. I thought it would be bicep, but it's az bicep, of course. It looks like we can do az bicep build and specify the template; main is probably the default, so if we pass nothing it may pick that up. I typed that in, and interestingly, although we installed the extension and were able to log in, we still don't actually have the CLI in this environment, so we do have to install the Azure CLI here, which is fine; it's always great to get more practice. Search for Azure CLI install, go to the Microsoft Learn website, scroll to Linux, and pick Ubuntu/Debian, because that's generally what Gitpod or Codespaces or whatever you use will be running. Look for the one-liner, copy it, go back to Codespaces, paste it in, hit Enter, and that will install the Azure CLI.
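For reference, the Debian/Ubuntu one-liner from that Microsoft Learn page looks like this (it downloads and runs Microsoft's install script, so read it first if that worries you):

```shell
# Install the Azure CLI on Debian/Ubuntu via Microsoft's script
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Confirm it installed
az version
```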
Now, we were already signed in inside VS Code, and it could be storing the credential files somewhere on the local machine, so maybe we don't have to log in twice; it would also be interesting to see where that file lives. Let's ask ChatGPT: when you log into the Azure CLI, what folder and file does it store the credentials in? One annoying thing about ChatGPT is that you can't switch away from the tab while it's generating, and sometimes it gets a bit slow because it goes out to the internet; I liked the previous models that didn't browse, because they could just generate. Let's drop to 3.5 and ask that one; I want it fast. Okay, it says the credentials are in the .azure folder, so that's something to take a quick look at now that the Azure CLI is installed. First, though, it says we are not logged in, so wherever the VS Code sign-in stored its credentials, it's definitely not the same place. Let's go look at that Azure profile directory. I'll bump up the font a bit, since I realize it's really small, and type cd followed by a tilde (that's the little squiggly character above your Tab key; you have to press Shift to type it), then a forward slash and a period for the hidden folder, and start typing azure; the folder's there, so hit Enter. ls lists out the contents, and ls -la gives a nice detailed list. I'm curious where it stores the configuration, probably the config file; I thought config was a folder, but it's a file, and it doesn't hold the credentials, and the JSON file doesn't have them either. To be fair, we haven't logged in from this terminal yet, so maybe the file will appear after we log in. So let's type az login --use-device-code, because I believe that's the flag we need, copy the link, go to the top and paste it into one of our available tabs, provide the code, click Next, click our account, say Continue, come back over here, and give it a moment to think... there we go, we're logged in. Now typing ls shows more files; I counted before and after, and there's definitely more here. The new file is clouds.config, so let's cat it: it at least tells us our default subscription, so we know where that comes from. cat the config again: same as before. cat the JSON file: still nothing in there. It would be really nice to know exactly where the credentials are; normally other CLIs tell you exactly where, and you can literally open the files and see the information. We can cat out the session file too: nothing. So it's stored somewhere; where exactly, I don't know. Does it really matter? No, but it's nice to know in case you ever want to delete it off your computer. Anyway, we should now be logged into Azure.
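To recap the login flow and where the CLI keeps its state (the exact file layout varies by CLI version, so treat this as a sketch):

```shell
# Log in with a device code; you paste the code into a browser on any machine
az login --use-device-code

# The CLI keeps its state under ~/.azure
ls -la ~/.azure

# azureProfile.json holds the account/subscription info after login;
# in recent CLI versions the actual access tokens live in an MSAL token
# cache in this same directory, which is likely why they didn't show up
# in the plain config files we were catting
cat ~/.azure/azureProfile.json
```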
I still feel like az bicep should have listed the subcommands even when we weren't logged in, and I was hoping it would print them out; let's try the help flag and see if it does. Okay, it does, you just have to ask for help. We have a few options: build a Bicep file; decompile, which takes an existing ARM template and turns it back into a Bicep file, which sounds really cool and I like that idea; format a Bicep file, so if something's off it can normalize the formatting; install the Bicep CLI, which I thought was already installed, so let's try that first; and publish. Running install again didn't seem to change anything, so why did it offer it if it was already there? I'm hitting Tab to autocomplete and see what else exists; whatever, we effectively already have it, so that's totally fine. So az bicep build wants a file, totally fine; there's also an option to build a file and print all of its outputs to stdout, though I don't think it matters whether we print them. Before that, I hit Ctrl+C to break out, and we need to cd back to the project folder; I'm so used to Gitpod that I can't remember where this directory is, so I scrolled up to take a look, and it's /workspaces. Getting back in here: oh, I spelled azure wrong in the repo name, which explains that; I'll go fix it later, I'll just rename it.
Now we're back in the project, so I'll type clear and run az bicep build against the file. I typed main.tf out of habit, thinking Terraform; it's main.bicep. Hitting Enter, it complains that something is misspelled; it looks right to me, so let's go back to the documentation and see exactly what it wants. Sometimes these commands want extra file information, but I don't think so; the docs just show main.bicep, so I'll copy that command. Oh, I forgot the word build, that's why. Now it's built, and we have a main.json; taking a look, it's generated us an ARM template, at least that's what it looks like, and yep, that's what it is. That looks pretty good, and I'm noticing we have parameters up top.
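The subcommands we just walked through, as actual commands (the file names here are ours, and format is a newer subcommand, so it may not exist on older CLI versions):

```shell
# Compile Bicep down to an ARM (JSON) template; writes main.json beside main.bicep
az bicep build --file main.bicep

# Or print the compiled JSON to stdout instead of writing a file
az bicep build --file main.bicep --stdout

# Go the other way: turn an existing ARM template back into Bicep
az bicep decompile --file exported-template.json

# Normalize the formatting of a Bicep file
az bicep format --file main.bicep
```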
Looking through the rest of the help: generate-params writes a parameters file, which I'm not interested in; install adds the commands to your CLI; and publish adds a module to a registry, an Azure Container Registry that must already exist. I'm not sure that's useful here; I think it's for reusing a template later, kind of like those template specs we saw. So if az bicep just generates out the files, I'd imagine we deploy the regular way using ARM templates; I'll be back in just a moment. Most other IaC tools let you build and deploy, but maybe Azure Bicep just compiles out templates. I quickly asked ChatGPT, and it confirmed my suspicion: az bicep just renders out the resource template, and it looks like we're going to have to deploy it the regular, old-fashioned way.
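Since Bicep only compiles out templates, the deploy itself goes through the regular deployment commands. A sketch (the resource group name and region are my choices; recent CLI versions accept the .bicep file directly and compile it for you):

```shell
# A resource-group-scoped deployment needs a resource group to target
az group create --name my-bicep-rg --location eastus

# Deploy the template; --template-file takes either main.json or main.bicep
az deployment group create \
  --resource-group my-bicep-rg \
  --template-file main.bicep
```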
Now here's a question: could we actually use VS Code to deploy this? That's what I'm really curious about, so let me go find out. I believe there are a few ways: of course, we could deploy the old-fashioned way with a deployment group, but looking around, I typed bicep in the command palette and noticed a deploy step, so maybe it can do a direct deploy, not through the CLI but through VS Code. There should also be one for ARM templates in here; it's not showing up, although ChatGPT seems to think it exists, so maybe we're missing another extension. Let's type azure: we have Azure Tools, but is there one for Resource Manager? Oh, it's a separate extension; I thought that would be installed already, and it would be a really useful one to have. Azure Resource Manager (ARM) Tools, by Microsoft, with around 1.5 million installs; this seems like a really good plugin, probably something I'd want, so let's install it. I'm surprised it doesn't come with Azure Tools; oh, it's in preview, so I imagine once it's out of preview it will be bundled into Azure Tools. Now, I've said previously that if there's a preview tool you should try to avoid it, because it might not be there in the future; this one I think we can get away with using. But if we go into the command palette and type arm deploy, the command ChatGPT mentioned doesn't show up, so, you know, preview feature, and ChatGPT (we're on 3.5) might not be telling us the full truth; I don't see any relevant changes here, so we won't worry about it. I would like to try the Bicep deploy, though, so type deploy, scroll down, and there's Deploy Bicep File; let's see what happens. It asks for a deployment name, then a resource group; remember we didn't specify one, so we'll create it on the fly and call it my-bicep-rg (RG for resource group). It goes ahead and deploys... and says the deploy failed: the provider description does not have the resource type resourceGroups. Let's ask ChatGPT and try to save some time; it says to check your Bicep template file and ensure you have defined the resources correctly, which is not useful. Maybe the resource provider isn't registered; maybe when you use the portal UI it registers providers automatically, but we create resource groups all the time, so it would be surprising if that weren't registered. Over in our subscription I'll double-check; sometimes there's another extra blade (they call these panels blades, by the way), and I was getting a bit confused, but ChatGPT is suggesting maybe the provider isn't registered, which seems crazy to me. And no: it is registered, it's right there, so I'm not exactly sure what it's complaining about.
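Checking provider registration is quicker from the CLI than hunting through portal blades; a sketch:

```shell
# Show the registration state for the storage resource provider
az provider show --namespace Microsoft.Storage \
  --query registrationState --output tsv

# Register it if that prints NotRegistered (can take a few minutes)
az provider register --namespace Microsoft.Storage
```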
that maybe as your BP isn't logged in so I mean that seems like a possible option so maybe we'll just go stick with uh the
usual way which is using the Azure um the this Azure deploy method so I'm going to scroll back up here and we have
this command now we did install in here uh in extensions or I think it it came installed was the Azure CLI and what
that will do is it when we write it out Azure CLI commands I believe it will autocomplete for
us so here's scrapbooks for developing running commands with a CLI create an Azure CLI files and use the following
features oh okay okay so there's actually a thing called an Azure CLI file that's new to
me um but what we'll do is we'll go ahead and we'll just say um new file so we just say
commands and I'll just put that there rename that like that that's what it's saying to do and so you'll say uh deploy
an arm template and we'll go back over here and we'll see if it starts to autocomplete
so we have a uh deploy it's not really completing correctly as just demonstrated we'll go
commands scrapbooks for uh developing running commands in the Azure CLI okay then work properly please maybe
it's not installed oh we have to install it sorry I thought we already had it installed
that's why so we'll go back over here and so now our common is showing up so we'll type a
we'll just say my RG my bicep um deployment they'll probably have to have a location
so let's just say probably should place it in the same place as this one this does not have a
particular location so probably default to wherever the resource Group is yep so we say um it's East us there we go
there's probably something else we need I'm not really sure let's go back and see what chat GPT was asking for uh the
file so we'll go here and say main.json and by the way we could bring these down onto new lines with this backslash
delimiter there uh so I assume that it could handle multi-line but I guess when we're
doing multi-line then uh the Intel sense the auto completion is it can't handle it so we'll go back over to here and I
just forgot what we called that Resource Group we actually did did create one uh but Azure bicep did fail the deploy
because of some kind of permissions or settings so I just want to go quickly find that name again notice that
sometimes in Azure you have to be patient super super super common uh to wait around for Azure um because of
propagations in their UI super common my internet's totally fine it's Azure and back here it says an
error occurred when trying to fetch resources additional details from the underlying API might be help helpful are
we having an issue with service so sometimes that happens so we could like uh Microsoft status
page sometimes that happens and I got to walk away come back to my computer but uh maybe it's not because
my internet seems to be funny so but if that's the case then oh yeah it's me I'll be back in a moment okay I've restored the
connection here to my Codespaces and I'm going to go back over to here and we'll give it a refresh and so I guess
that resource group did not create so we do definitely have to create a resource group first otherwise it's not going to
run so but uh what I'll do here while we're waiting is I'm going to create a new resource group I'm going to call
create and we'll go ahead and create that we'll go back over to our other tab here I would really like it to reopen
here I'm not sure what it's doing to figure this out going to go at the top just type in
it seems real mucked up here so I'm gonna have to go back and we'll just have to type in
bicep here it's under exam Pro so I'll drop that down of course you'll only have one
so we'll go into here and let's see if I can find that that previously working code space environment so it's here it
is active I'm going to go back and say open in the browser notice you can launch this in Visual Studio Code
locally or if you want to use JupyterLab let's say you're doing something with AI or machine learning
great and um so we created that new Resource Group that's called this here so I'm going to go ahead and copy that
and I'm going to paste it into here sometimes it's good to put double quotations around these things I'm not
doing that unless that gives us problems so I think this is everything we need we need to know um the resource Group uh
it's probably recommended to provide it a name we'll have our location and our template file so I'm going to go ahead
and copy this and fingers cross this works uh of course we didn't really look anything up um oh this one says a
deployment group create so maybe we should make sure that's correct but it did autocomplete create
create hold on we can hover over here and get some stuff manage the Azure Resource Manager template deployment
there well what happens if I take this name out then does this still complete starts a deployment at the
subscription scope so maybe both both Works let's just see what happens it's really nice that it shows everything
like that I really like that we'll go ahead and paste that in we'll hit enter and um it's showing we're missing
an argument Resource Group argument it doesn't like that one starts a deployment creates a
deployment the resource Group okay so here we have a create and then we have starts a deployment the
subscription scope creates a deployment at the resource Group from a local file template so it looks like if we already
have one maybe this one would have created a resource Group for us so what we're going to do is take this one out
as I really thought we needed to have it but I'm going to go and see what happens if we do
this the template resource location at line 12 uh line 17 is invalid the template function resourceGroup is not expected at this location so you've
got to be really careful when entering stuff in because it's going to give you some trouble and this one looks a little
bit more normal with the group create so we have template file Resource Group I don't know if it needs a
name so maybe I'll just take the name out and then we'll try this one and we'll hit
this one doesn't okay we'll take the location out we'll try this well I guess we don't have to specify a location
again making a lot of trouble here but it's a good way to learn this is how you should learn and this is this
is what Cloud's like just goofing around till get till we get it to work and then hopefully it's it's the right way um so
this is going to go ahead and run and we'll wait here I'm not going to make you uh watch watch it here I'll be back
uh here in a moment okay all right after a little bit of waiting um our terminal has produced some stuff for us so it's
suggesting it probably created the resource so we're in good shape let's go back over to Azure and we're going to go
into our bicep RG and we now have our resource so we've successfully used Azure Bicep so let's go ahead
and delete this Resource Group we are all done here um we're going to go ahead and commit what we have so that uh any
future folks that are trying to do the same thing as us can just go get that code base uh good stuff here go ahead
hit commit sync the changes we'll say okay that will push the changes excellent and I'm going to want to stop
this workspace we'll open up the command pallet we'll say stop uh so say code spaces stop code
spaces stop current workspace there we go and that will stop the current workspace so that is all good right
there um and I'm going to fix the name here because it really should be named correctly so that you can easily find it
fix them the best you can but yeah that is azure bicep and we'll see you in the next one
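For reference, the command sequence we fumbled toward above looks roughly like this with the Azure CLI (the resource group and deployment names here are just examples, not the exact ones from the video):

```shell
# Create the resource group first -- as we saw, the deployment fails without it.
az group create --name my-bicep-rg --location eastus

# Deploy the Bicep template at resource-group scope; --name labels the
# deployment so it shows up under the resource group's Deployments blade.
az deployment group create \
  --name my-bicep-deployment \
  --resource-group my-bicep-rg \
  --template-file main.bicep
```

Note this is `az deployment group create` (resource-group scope), not `az deployment sub create` (subscription scope), which is the distinction the autocomplete tooltips were showing us.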
next we're diving into Azure Key Vault a pivotal tool to ensure the security of your cloud applications and services Azure Key Vault
helps you safeguard cryptographic keys and other secrets used by cloud apps and services Azure Key Vault focuses on three
things certificate management this feature allows for easy provision management and deployment of both public
and private SSL certificates these certificates can be used with Azure and internally connected resources Key
Management this enables the creation and control of encryption Keys used to encrypt your data Secrets management
here you have a secure space to store and tightly control access to tokens passwords certificates API keys and
other secrets note that certificates contain a key pair which is a combination of a key and a secret this
will come up in other functionalities moving forward let's talk about HSMs or Hardware security modules these are dedicated Hardware
devices specifically designed to securely store encryption Keys when it comes to adhering to standards we
reference the Federal Information Processing Standard or FIPS this is a guideline recognized by the US and
Canadian government that specifies the security requirements for cryptographic modules that protect sensitive
information in line with FIPS we have two levels of compliance for HSMs FIPS 140-2 Level 2 compliant this
compliance level is for multi-tenant HSMs where multiple customers are virtually isolated on a single HSM FIPS
140-2 Level 3 compliant this level on the other hand pertains to single-tenant HSMs where one customer
utilizes a dedicated HSM in essence Azure Key Vault is an indispensable tool for ensuring that your cloud data
remains both accessible and secure whether you're working with certificates encryption keys or various secrets let's look at the
core of Azure Key Vault the vault itself a vault is where your secrets and keys reside safeguarded either by software or
by HSMs validated to the standards of FIPS 140-2 Level 2 Azure Key Vault provides two types of containers
vaults these containers support both software- and HSM-backed keys HSM pools these are specialized containers solely
for HSM-backed keys to activate your HSM you will need to provide a minimum of three RSA key pairs up to a maximum of
10 and specify the minimum number of keys required to decrypt the security domain called a quorum you do not choose
the container on creation you just choose between standard and premium when you choose premium and create enough RSA
key pairs you will begin to use an HSM pool diving a bit into technicalities the Azure Key Vault REST API is used for
programmatically managing Azure key Vault resources allowing you to perform operations such as create a key or
secret import a key or secret revoke a key or secret delete a key or secret authorize user or apps to access its
keys or secrets and monitor and manage key usage Azure key Vault rest API supports three different types of
authentication managed identity an identity managed by Azure AD recommended as best practice service principal and
certificate this method uses a certificate for authentication service principal and secret a combination of a
user identity and a secret key one feature to note is the soft delete functionality soft delete allows you to
recover or permanently delete a key vault in secrets for the duration of the retention period this feature is enabled
by default on creation mandatory retention period prevents the permanent deletion of key vaults or Secrets prior
to the retention period elapses furthermore enabling purge protection safeguards your secrets from being purged before the retention period ends
knowing how you're billed for this service can help you make informed decisions and optimize your costs Azure Key Vault
offers two pricing tiers standard and premium the notable distinction between the two is that while both tiers support
software protected Keys only the premium tier allows for HSM protected Keys here's a closer look at the pricing
tiers first 250 keys regardless of whether you're on the standard or premium tier you'll be billed $5 per key
every month 251 to 1,500 keys the price drops to $2.50 per key monthly again consistent across both
tiers 1,501 to 4,000 keys the cost further reduces to 90 cents for each key every month 4,001-plus keys for larger
key volumes Beyond this point you'll be charged at a rate of 40 cents per key per month Secrets operations both tiers
are priced at three cents for every 10,000 transactions involving Secrets certificate operations exclusive to the
premium tier each certificate renewal request is billed at $3 managed Azure storage account key rotation this
service only available in the premium tier is priced at $1 per renewal HSM protected Keys specifically for HSM
protected Keys the pricing is further broken down based on the key types for RSA 2048-bit Keys the cost is $1 per key
per month along with an additional charge of 3 cents per 10,000 transactions for RSA 3072 bit and
4096-bit keys as well as ECC Keys the first 250 keys are priced at $5 per key per month so that's an overview of the
pricing model for Azure key Vault the next topic we'll be covering is double encryption for Azure key Vault
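Before moving on, the tiered per-key pricing just described can be captured in a small shell function. This is only a sketch using the prices quoted above, working in integer cents to avoid floating point; check the current Azure pricing page before relying on these numbers:

```shell
# Monthly cost in cents for n keys, per the tiers above:
# first 250 @ $5.00, 251-1,500 @ $2.50, 1,501-4,000 @ $0.90, 4,001+ @ $0.40.
key_vault_key_cost() {
  local n=$1 cents=0 t
  t=$(( n < 250 ? n : 250 ))
  cents=$(( cents + t * 500 ))
  t=$(( n > 250 ? n - 250 : 0 ));  t=$(( t < 1250 ? t : 1250 ))
  cents=$(( cents + t * 250 ))
  t=$(( n > 1500 ? n - 1500 : 0 )); t=$(( t < 2500 ? t : 2500 ))
  cents=$(( cents + t * 90 ))
  t=$(( n > 4000 ? n - 4000 : 0 ))
  cents=$(( cents + t * 40 ))
  echo "$cents"
}

key_vault_key_cost 300   # prints 137500, i.e. $1,375.00/month
```

For 300 keys that's 250 keys at $5 plus 50 keys at $2.50, which is where the $1,375 comes from.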
before we dive in let's quickly recap infrastructure encryption for storage accounts by default Azure ensures that
your storage account data is encrypted when it's at rest infrastructure encryption adds a second layer of
encryption to your storage accounts data now let's jump into Azure diss double encryption double encryption is
precisely what it sounds like it's where two or more independent layers of encryption are enabled to protect
against compromises of any one layer of encryption this strategy ensures that even if one encryption layer is
compromised the data remains protected by the other Microsoft has a two-layered approach both for data at rest and data
in transit for data at rest disk encryption this is achieved using customer managed keys and infrastructure
encryption this uses platform managed Keys strengthening the base layer and for data in Transit Transit encryption
using transport layer security 1.2 to safeguard data as it travels through networks and an additional layer of
encryption provided at the infrastructure layer so that's a quick overview of double encryption for Azure Key Vault
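As an aside, the second infrastructure layer described above can be requested for a storage account at creation time. A hedged Azure CLI sketch (the account and resource group names are made up, and the setting can only be chosen at creation):

```shell
# Create a storage account with infrastructure (double) encryption enabled.
# --require-infrastructure-encryption adds the second, platform-managed layer
# on top of the default encryption at rest; it cannot be toggled on later.
az storage account create \
  --name mydoubleencsa \
  --resource-group my-example-rg \
  --location eastus \
  --sku Standard_LRS \
  --require-infrastructure-encryption
```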
when it comes to creating a key in Azure you have three primary choices generate Azure will generate the key for you
import import an existing RSA key that you already possess and restore backup restore a key from backup for Keys
generated by Azure you can use either RSA or EC RSA or Rivest-Shamir-Adleman this supports key sizes of 2048 3072 and
4096 bits EC or elliptic curve cryptography here you can select from P-256 P-384 P-521 or P-256K for keys
generated by Azure you can set an activation and expiration date additionally you're not bound to a
static version of a key you can create new versions of keys you can also download backups of keys but remember
that backups can only be restored within the same Azure subscription and within Azure key Vault when you have a premium
vault you'll have key options for HSM you can generate either an RSA or EC specifically for HSM or import an RSA
key for HSM as shown in the example now let's talk about Key Management types Microsoft managed key
or Keys managed by Microsoft they do not appear in your Vault and in most cases are used by default for many Azure
services customer-managed keys are keys you create in Azure Key Vault you need to select a key from a vault for various
services sometimes customer-managed means that the customer has imported cryptographic material and any generated
or imported keys are considered CMK in Azure in order to use a key an Azure service needs an identity established
with Azure AD for permission to access the key from the vault additionally you have the option to
implement infrastructure encryption while Azure already encrypt storage account data at Rest by default opting
for infrastructure encryption adds a second layer of security fortifying your storage account's data even
further the next topic we'll be covering is secrets in Azure Key Vault Azure Key Vault secrets provide secure storage of
generic secrets such as passwords and database connection strings Key Vault APIs accept and return secret values as
strings internally key Vault stores and manages Secrets as sequences of octets with each secret having a maximum size
of 25 KB the Key Vault service doesn't provide semantics for secrets it accepts the data encrypts it stores it
and returns a secret identifier for highly sensitive data clients should consider additional layers of protection
for data for example encrypting your data using a separate protection key before storing it in Key Vault Key Vault
also supports a content type field for secrets allowing clients to specify the content type of a secret to assist in
interpreting the secret data when it's retrieved note that the maximum length of this field is 255 characters
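Setting and reading back such a secret from the Azure CLI might look like this (the vault name, secret name, and value here are hypothetical):

```shell
# Store a secret -- say a database connection string -- in an existing vault.
az keyvault secret set \
  --vault-name my-example-vault \
  --name db-connection-string \
  --value "Server=tcp:example.database.windows.net;Database=mydb"

# Read it back; --query/-o extract just the plain string value.
az keyvault secret show \
  --vault-name my-example-vault \
  --name db-connection-string \
  --query value -o tsv
```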
every secret stored in your key vault is encrypted key vault encrypt Secrets at rest with a hierarchy of encryption Keys
all keys in that hierarchy are protected by modules that are FIPS 140-2 compliant the leaf encryption key is
unique to each key vault while the root key is unique to the entire security world the protection level may vary
between regions for example China uses FIPS 140-2 Level 1 and all other regions use Level 2 or higher
diving into secret attributes we have exp this is the expiration time after which the secret data should not be retrieved
nbf or not before default value is now this defines the time before which the secret data should not be retrieved enabled this
tells us whether the secret data can be retrieved or not with its default set to true additionally there are read-only
attributes for created and updated in order to access secrets within your application code you would use the
Azure SDK for example we have a .NET example in this image here another option is to use tools like the Azure CLI
hey this is Andrew Brown from ExamPro and in this follow-along we're going to be learning all about Azure Key Vault so
let's get to it so what I want you to do is go on the top here and type in key Vault and here we'll have to go ahead
and create ourselves a new Vault and so from there we're going to create a new Resource Group I'm going to call this
Resource Group my example Vault and then we will make a vault key here so I'll say My Vault example which
is kind of funny because this one's slightly different so you've seen I've done this before so I'm going to do my
example vault as the name here and for the region Us East is fine for pricing we'll keep it at standard soft delete is
enabled um and then there's the option for Purge protection so we are going to enable Purge protection and uh this is
going to play into other follow alongs we'll explain that as it goes but Purge protection does not allow you to uh
Purge things uh easily once it's enabled so what we'll do is go ahead and review and
create and we'll go ahead and go review create and we'll give it a moment here and we'll just wait till it's done
deploying okay all right so after a short little wait our vault is created and so what I want you is go to the
resource and we're going to be using this Vault a little bit in some of the Fall alongs and in some cases not so
along we're going to be doing some things with uh keys with an Azure key Vault so what I want you to do is make
your way over to the Keys blade on the left-hand side here we're going to generate or import a new key we're going to generate a
key and we are going to choose RSA 2048 that seems totally fine to me everything else seems okay so we'll go ahead and
create that key so we'll give it a moment to create doesn't take too long and then what we're going to do is go on
the left-hand side to IAM access control and what we're going to want to do is add a new role assignment so
we can go ahead and start using this uh key so what I want you to do is go and look for key Vault administrator which
is here and we'll go ahead and hit next and then for our uh user we will choose ourself so under user I'm going to
select the members I'm looking for the account I'm using there I am Andrew Brown go ahead and select that there and
so that is all we need to assign it so that we can actually uh work with that key so I think a good idea is to use a
key uh to encrypt a disk so what we'll do is make our way over to disk encryption sets because before you can
encrypt a disk you need to have an encryption set so we'll go ahead and create ourselves a new encryption set
we'll use the uh sorry the same resource group so it's very easy cleanup afterwards we'll call this my
disk encrypt set here and in terms of the encryption type we're going to use double encryption because that's much
better you have two keys that encrypted so that's a lot better we are going to choose our vault so we have my example
Vault there's only one option here and in terms of of the key we'll select my dis key terms of the version uh we'll
select the current version we'll go ahead and hit review create and then we will go and create
that and we'll give it a moment to create that encryption set shouldn't take too long here and after a short
little wait uh our resource should be deployed only it took about a minute for me and if we go here it's going to have
this message up here it's very small but it says to associate a disk image or snapshot with this disk encryption set you must grant
permissions to the key vault so all we have to do is click that alert and it will grant permissions and so
now we are able uh to use that key or rather we're going to have the permissions issue solved so what
we'll do is go to type and create a new disk and so we can apply this key to that encryption so we go ahead and
create we're going to choose the same Resource Group here I'm going to call this my example Vault and um or sorry my
example uh disk so that's a little bit more clear than uh that and for the availability zone doesn't matter for the
source type um it doesn't matter as well in terms of the size we want this to be cheap we're not really using this for
real so we'll use standard HDD and we'll say okay in terms of encryption this is where things get fun we go to double
encryption we choose our key here we'll go ahead review and create and we'll just give it a moment
for that to oh we'll hit create and we'll have to wait a little while here for that create that resource so we'll
just wait until that is created okay and after a very short while the disk is ready so we'll go to that resource we'll
go to the encryption tab to see that encryption is applied so that's all it takes to use a key to encrypt a disk so
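The portal clicks above have rough CLI equivalents. A sketch only, reusing the follow-along's names (exact parameters may differ by CLI version):

```shell
# Generate an RSA 2048 key in the vault (what we did from the Keys blade).
az keyvault key create \
  --vault-name my-example-vault \
  --name my-disk-key \
  --kty RSA \
  --size 2048

# Create the disk encryption set pointing at that key, then create a disk
# that uses it (double encryption = platform + customer-managed keys).
az disk-encryption-set create \
  --resource-group my-example-rg \
  --name my-disk-encrypt-set \
  --key-url "$(az keyvault key show --vault-name my-example-vault \
               --name my-disk-key --query key.kid -o tsv)" \
  --source-vault my-example-vault \
  --encryption-type EncryptionAtRestWithPlatformAndCustomerKeys

az disk create \
  --resource-group my-example-rg \
  --name my-example-disk \
  --size-gb 32 \
  --sku Standard_LRS \
  --disk-encryption-set my-disk-encrypt-set
```

You would still need to grant the encryption set's identity access to the vault, which is the permission alert we clicked through in the portal.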
we are going to still use some of these accounts there's no cleanup yet we go back here and I'll see you in the next one
in this follow-along we're going to backup and restore a key so what I want you to do is go back into the uh resource group that we just recently created and we're going
to make our way over to keys so I'm just or sorry we got to get into the Vault first then we'll go over to keys and the
idea is that we have this key here and so um you can see that we have this current version so you can add
additional versions but what's going to happen if we try to back this up so when you back this up you're going to get
this file here and if you open up this file it's going to look like a bunch of gobbledygook so I'm just going to try to
open it here um I have it up off screen here so I'm just trying to open it up within uh Visual Studio code so I'm just
moment all right and so this is the file um that we encrypted uh and you take a look here and it's it's doesn't look
that and just taking a look at the key name this is what it looks like so it says my example Vault my dis key then
there's this um uh date and that's key backup so just recognize that's the format and the date is very useful to
indicate when you backed it up so let's go ahead and delete this key because the idea is we want to uh restore that
backup and so we have deleted that key there and uh what we're going to do is we're going to attempt a restore so I'm
going to go ahead and oh an error occurred while restoring the key the key you're trying to restore already exists
why would it throw that error we've clearly deleted it and the reason why is that we have Purge protection on we did
that in the um first uh first part when we set up this actual Vault here I'm going to just see if we can find the
settings wherever that Purge protection is I'm trying to remember where it is Purge protection is enabled so we can go
here and once you enable it you cannot turn it off it's going to retain it for a certain amount of days um and so all
you can do is soft delete keys so this key is not actually deleted yet if you go to manage deleted keys you can see the
key is over here and if you try to click on Purge it is disabled because we cannot remove the key because we have
Purge protection on but we can recover the key so we'll go ahead and recover uh and so that will allow us to
recover the key and if we refresh here it's going to take a little bit time for that key to
restore so we'll just have to uh wait a little bit and then it will show up here's one other thing I wanted to
show you was under policies because you know um if you go under where's policies here um or access policies if you look
under our user here and we look at the key permissions um there is an option to purge and we don't actually have that uh
turned on right now but if we were to save this and we were to still go to that Purge option it would still say the
same thing so even if you have Purge permissions it does not matter if Purge protections turned on it still will not
let you purge but you would need a combination of those in order to uh you know be able to do things there so to
really show you how to do that recovery I think what we should do I'm just going to delete our old key here because we
don't care about it but we are going to well I guess we could try to import it into the other ones I'm just going to
undo that for a second but we are going to go ahead and create ourselves another Vault so I'm going to go and type in
Vault at the top here and we're going to be a little bit more careful when we create this Vault so we'll go here and
we will choose um my example vault resource group I'm going to say my vault no protect and the pricing tier will be
standard for the retention period we're going to leave it at well seven days is the lowest and we'll say disable purge protection because we
don't want to have that enabled and we'll see if we can import the key into another Vault I'm not sure
if we can do that worst case we'll make a new key download the key reupload it but I'm just curious what would happen
sure all right so this deployment is successful I'm going to go to this resource I'm going to go ahead to go to
create and we're going to restore from backup and we're going to take this key and see if we can actually import it
here so it looks like we can take a key and it can exist in multiple vaults I'm going to go ahead and delete this key
and we're going to say are you sure you want to delete this key I'm going to say yes and if we go to manage
Keys We refresh it takes a little bit of time here so we'll just wait a moment for this to uh
persist and after a short little wait like about 2 minutes I refresh and the key is here so if I go here you'll notice the
purges option is still not available we can obviously recover um but we don't have Purge um protection on so if we go
to access policies over here and we'll go ahead and scroll down and select Purge and save our changes we can then
go back to Keys we'll give it a moment to save we go back to Keys we'll refresh it we'll manage our keys and we'll go
ahead and purge it and that will permanently purge it there so that's all it takes uh to do that so there you go
secrets are variables that allow you to pass sensitive information to your GitHub Actions workflows secrets are
accessed via the secrets context so that's secrets dot and then whatever you name your secret and it has a few
different levels we have the organizational level the repo level and the environment level what you need to
understand is that the lower the level the more it overrides the ones from the top level so if you have a secret called
hello at the organization level and one at the environment level called hello the one at the environment level value
will overtake secret names can only contain alphanumeric characters and underscores no spaces so that example has an
underscore names must not start with numbers sorry I felt like a rumble in my office and that's why I paused uh I think there
was just a big train that went by anyway sorry about that um H there we go names are case insensitive names must be
unique at the level they are created at people don't know I live right beside a train station and my office is in a shed
uh behind my house and the idea is that I have multiple layers to avoid from the train but sometimes there's nothing you
can do about it uh but anyway so we have passing Secrets as input so you can pass Secrets as inputs by using Secrets
context so that is an example there um but I mean like the point is is that you know you could interpolate whatever you
want there to pass it into a custom action and we'll talk about custom actions another video you can pass
secrets as env vars so that is another way that you could uh do that and why would you do um inputs versus env vars it just
really depends on uh your use case so maybe you have something that is uh a program you're using and it can only use
env vars whereas uh with secrets it's okay to do that on the left-hand side because of the way it works but I just want to
point out here we did this in another video but if you want to make that an env var you'd have to map it like that I
think we did that for something earlier but uh hopefully you know what that is so about how you set a secret so you can
use the uh GitHub CLI so we have a GH secret set which is probably how we're going to do it then we have GH secret
set for a specific environment or at the org level so depending on how you do a flag it's going to be different the
default apparently is repository and you can also specify the repo which we have here you could do that up here as well
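The gh invocations being described might look like this (the repo, org, environment, and secret names are placeholders):

```shell
# Repository-level secret (the default scope).
gh secret set HELLO --body "world"

# Target a specific repo explicitly, or an environment within it.
gh secret set HELLO --repo my-org/my-repo --body "world"
gh secret set HELLO --env production --body "world"

# Organization-level secret, visible only to selected repositories.
gh secret set HELLO --org my-org --visibility selected --repos my-repo
```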
now let's look at the GitHub token secret so at the start of each workflow job GitHub automatically creates a unique GitHub
token secret to use in your workflow you can use the GitHub token to authenticate uh in the workflow job uh sounds a bit
repetitive there but let's take a look here and see what we're talking about so I'm going get my pen tool out so it's
very clear um and what I want you to see is that we can say secrets. GitHub token and we can get that GitHub token I also
believe that we can do dollar sign GitHub token for uh environment variables and it will show up as well um
if not we can just map it over as we are doing here if you notice here we're doing that um when you enable GitHub
actions GitHub installs a GitHub app on your repository the GitHub token secret is a GitHub app installation access
token so hopefully that is clear you can also use it with the rest API so here's another example and notice we are
covering Azure monitor so Azure monitor is a comprehensive solution for collecting analyzing and acting on
Telemetry from your cloud and on premises environments it serves as the backbone for gaining insight into the
performance and health of your applications infrastructure and even the network it features visual dashboards a
visual representation of your data smart alerts intelligent notifications based on specific conditions automated actions
set automation based on certain triggers log monitoring track and analyze event logs many Azure Services by default are
already sending Telemetry data to Azure monitor what is observability it's the ability to measure and understand how
internal systems work in order to answer questions regarding performance tolerance security and faults with a
system or application to obtain observability you need to use metrics logs and traces you have to use them
together using them in isolation does not gain you observability metrics a number that is measured over a period of
time for example if we measured the CPU usage and aggregated it over a period of time we could have an average CPU metric
logs a text file where each line contains event data about what happened at a certain time traces a history of
requests that travel through multiple apps or services so we can pinpoint performance or failures looks like they
should have called it the Triforce of observability the sources of common monitoring data to populate data stores
ordered from highest to lowest application operating system Azure resources Azure subscription Azure tenant custom sources
the two fundamental data stores are metrics and logs Azure monitor functionalities insights this can be for
applications containers VMs or other monitoring solutions visualize using dashboards views Power BI and workbooks
you can create Rich visual presentations of your data Analyze This involves delving deep into metrics analytics and
log analytics respond Based on data Azure monitor can alert you or even autoscale resources integrate extend the
capabilities by using logic apps or export API for more flexibility overall Azure monitor is a comprehensive
solution vital for ensuring that your applications and services run optimally and any issues are detected and dealt with
next let's look at the various sources from which Azure Monitor collects data application code Azure Monitor's Application Insights offers
robust metrics about the performance and functionality of your applications and code you'll get performance traces
application logs and even user Telemetry you'll need to install instrumentation package in your application to collect
data for application insights availability tests measure your application's responsiveness from
different locations on the public internet this helps in assessing the reliability and uptime of your services
nric descriptive data regarding your applications performance operation and custom metrics log store operational
data about your application including page views application requests exceptions and traces you can send
application data to Azure storage for archiving view the details of availability test stored and debug
snapshot data that is captured for a subset of exceptions is stored in Azure storage log analytics agent is installed
for comprehensive monitoring dependency agent collects discovered data about processes running on the virtual machine
and external process dependencies agents can be installed on the OS for VMS running in Azure on premises or other
Cloud providers Diagnostics extension collect performance counters and store them in metrics application insights
logs collect logs and performance counters from the compute resources supporting your application allowing
them to be analyzed alongside other application data the Azure Diagnostics extension always writes to an Azure
storage account while Azure monitor for VMS uses the log analytics agent to store Health State information in a
custom location the Diagnostics extension can also stream data to other locations using aent hubs resource logs
provide insights into the internal operation of an Azure resource and are automatically created however you must
create a diagnostic setting to specify a destination for each resource platform metrics will write to the Azure monitor
metrics database with no configuration you can access platform metrics from metrics Explorer for trending and other
analyzes use log analytics copy platform metrics to logs send resource logs to Azure storage for archiving stream
metrics to other locations using aent hubs Azure subscription this includes Telemetry related to the health and
operation of your Azure subscription Azure service Health provides information about the health of the
Telemetry related to your Azure tenant is collected from tenant wide services such as Azure active directory Azure
active directory reporting contains the history of sign and activity and audit trail of changes made within a
particular tenant for resources that cannot be monitored using the other data sources write this data to either
metrics or logs using an Azure monitor API this will allow you to collect log data from any rest client and store it
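As a sketch of that custom-source path: the Azure Monitor HTTP Data Collector API authenticates each POST with an HMAC-SHA256 "SharedKey" signature built from the request details. The snippet below only constructs the Authorization header; the workspace ID, shared key, and record shape are dummy placeholders, and nothing is actually sent:

```python
# Sketch of building the Authorization header for the Azure Monitor
# HTTP Data Collector API (custom logs). The workspace ID and shared
# key below are dummy placeholder values -- nothing is sent anywhere.
import base64
import hashlib
import hmac
import json

def build_signature(workspace_id, shared_key, body, date_rfc1123):
    # String-to-sign format used by the Data Collector API
    string_to_sign = (
        f"POST\n{len(body)}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    digest = hmac.new(
        base64.b64decode(shared_key),          # the key is base64-encoded
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

body = json.dumps([{"Computer": "web-01", "CpuPct": 87.5}]).encode("utf-8")
auth = build_signature(
    "00000000-0000-0000-0000-000000000000",    # dummy workspace ID
    base64.b64encode(b"0" * 32).decode(),      # dummy shared key
    body,
    "Mon, 01 Jan 2024 12:00:00 GMT",
)
print(auth)  # e.g. SharedKey 00000000-...:<base64 signature>
```

In a real client you would send `body` as a POST to the workspace's `api/logs` endpoint with this header plus the `Log-Type` and `x-ms-date` headers.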
Azure Monitor is integral to maintaining the health and performance of your applications and resources, collecting two fundamental types of data: logs and metrics. Azure Monitor Logs collects and organizes log and performance data from a variety of monitored resources.
- Data consolidation: logs can be pulled from diverse sources, such as platform logs from Azure services, log and performance data from agents on virtual machines, and usage and performance data from applications.
- Workspaces: all these logs are organized into workspaces, providing a centralized repository for in-depth analysis.
- Query language: Azure Monitor Logs offers a sophisticated query language which can quickly analyze millions of records, making it an ideal choice for complex data analytics.
- Log Analytics: you can interactively work with log queries and their results using Azure's Log Analytics tool.
In contrast, Azure Monitor Metrics collects numeric data and organizes it into a time-series database. Here's why that's important:
- Numeric data: metrics are numerical values captured at regular intervals; they are a snapshot that describes a particular aspect of a system at a specific moment in time.
- Lightweight: metrics are designed to be lightweight, allowing for near-real-time data analysis, which makes them particularly useful for alerting and the rapid detection of issues.
- Metrics Explorer: the Metrics Explorer tool allows for interactive analysis of metric data, providing a more immediate understanding of your system's performance.
Now let's discuss the retention and archive policies of Azure Monitor Logs. This is an important aspect of your monitoring strategy, as it allows you to control how long your data remains stored and accessible. By default, in the Azure portal you can set this retention time anywhere from 30 to 730 days for the whole workspace. If you want, you can also specify different storage durations for certain tables within your workspace, letting you manage different types of data as needed. This gives you the flexibility to meet any business or regulatory rules about data storage. However, note that to tweak these retention settings you have to be on the paid tier of Azure Monitor Logs. To set the retention and archive policy by table:
1. Navigate to the Azure portal and go to the Log Analytics workspace where the data is stored.
2. Under the Settings section, select Usage and estimated costs.
3. Then select Data Retention.
4. In the Data Retention blade, you can modify the retention period for each table; by default it is set to 31 days, but you can extend it up to 730 days.
5. For archiving data, you can use Azure Data Explorer, which lets you retain data beyond the two-year limit and gives you a highly scalable analytics service.
So that's an overview of the data retention and archive policies of Azure Monitor Logs. You'll most likely encounter a question related to this on the exam, so be sure to know it.
Next we're covering Azure Log Analytics. Log Analytics is a tool in the Azure portal used to edit and run log queries against data in Azure Monitor Logs. Log Analytics processes data from various sources and transforms it into actionable insights: it ingests data from Azure Monitor, Windows and Linux agents, Azure services, and other sources. Once the data is collected, you can use the Log Analytics query language, KQL, to retrieve, consolidate, and analyze the data. Now we'll go over some of the benefits of Log Analytics:
- Centralized log management: collect and analyze data from multiple sources, both on premises and in the cloud, in a centralized location.
- Powerful analytics: utilize the Kusto Query Language to run advanced analytics on large amounts of fast-streaming data in real time.
- Custom dashboards: create custom dashboards and visualizations to display real-time data and trends.
- Integration: seamless integration with other Azure services and Microsoft solutions such as Power BI and Azure Automation.
- Alerting: set up alerts based on specific criteria to proactively identify and resolve issues.
A Log Analytics workspace is a unique environment for Azure Monitor log data. Each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a particular workspace. So that's an overview of Azure Log Analytics.
Next up is the Log Analytics agent, an agent that can be installed on Windows and Linux machines to collect and send log data to Azure Monitor. It provides a way to centralize logs from various sources and enables analysis of the data using tools like Azure Monitor Logs, Azure dashboards, and Azure Monitor workbooks. The agent can collect logs from various sources, including Windows event logs, custom logs, performance counters, and Syslog, and it supports both agent-based and agentless environments. On Windows, the Log Analytics agent is set up to monitor certain Windows event logs, like the Security, System, or Application logs; the data from these logs is then gathered and sent to Log Analytics for analysis using queries and visualizations. On Linux, the Log Analytics agent is set up to monitor Syslog from servers or network devices; it collects data from these sources and sends it to Log Analytics, allowing for detailed analysis and troubleshooting. Both methods for collecting log data allow for centralized management and analysis of log data from multiple sources, which can help improve visibility and streamline troubleshooting and issue resolution. You can expect to see an exam question related to Log Analytics agents where you choose either Windows event logs for a Windows agent or Syslog for a Linux agent. The next topic we'll be covering is Application Insights.
Application Insights is an application performance management (APM) service and a subservice of Azure Monitor. APM is all about the monitoring and management of the performance and availability of software apps; it strives to detect and diagnose complex application performance problems to maintain an expected level of service. So why use Application Insights?
- Automatic detection of performance anomalies: Application Insights automatically identifies performance anomalies in your system.
- Powerful analytics tools: it comes with robust analytics tools to help you diagnose issues and understand what users do with your app.
- Continuous improvement: it is designed to help you continuously improve the performance and usability of your applications.
- Platform agnostic: it works for apps built on .NET, Node.js, Java, and Python, hosted on premises, hybrid, or in any public cloud.
- DevOps integration: it can be integrated into your DevOps process.
- Mobile app monitoring: it can monitor and analyze telemetry from mobile apps by integrating with Visual Studio App Center.
To use Application Insights, you need to instrument your application. This involves installing the instrumentation package or, where supported, enabling Application Insights using the Application Insights agent. There are many ways to view your telemetry data, and apps can be instrumented from anywhere. When you set up Application Insights monitoring for your web app, you create an Application Insights resource in Microsoft Azure; you open this resource in the Azure portal in order to see and analyze the telemetry collected from your app. The resource is identified by an instrumentation key. What does Application Insights monitor? Request rates, response times, and failure rates; dependency rates, response times, and failure rates; exceptions; page views and load performance; AJAX calls; user and session counts; performance counters; host diagnostics; diagnostic trace logs; and custom events and metrics. Where do I see my telemetry? Smart detection and manual alerts, the application map, the profiler, usage analysis, diagnostic search for instance data, Metrics Explorer for aggregated data, dashboards, live stream metrics, analytics, and Visual Studio. In short, Application Insights gives you automatic detection of performance anomalies and powerful analytics tools, and it is designed to help you continuously improve your applications.
Now let's dive into the topic of Application Insights instrumentation. What is instrumentation? In simple terms, it's a way to make your application smarter: by adding a few lines of code, or in some cases none at all, you can monitor how your app performs and where it might be running into issues. You instrument your application by adding the Azure Application Insights SDK and implementing traces. In the case of a Node.js application, you can install the Azure Application Insights SDK using npm with the following command: `npm install applicationinsights --save`. Here, applicationinsights is the name of the package you are installing, which is the Azure SDK for Application Insights, and --save is the flag that saves the package as a dependency in your package.json file. From there, a small piece of configuration code lets you choose what you want to collect. Azure supports the following languages: .NET, Java, Python, Node.js, and JavaScript. Auto-instrumentation allows you to enable application monitoring with Application Insights without changing your code. There is a table showing which Azure services support Application Insights and in what programming languages; the services range from Azure App Service on Windows and Linux to Azure Functions, Azure Spring Cloud, Azure Kubernetes Service, and more. The table's legend reads:
- GA (general availability): fully supported and ready to use.
- Public preview: still being tested, but you can use it.
- Not supported: you can't use Application Insights here.
- Through agent: you need to install a special piece of software to use this service.
- On by default: the feature is automatically enabled.
- Through extension: available, but needs an extension to work.
We won't go through the entire table, but here are a few examples. For applications written in .NET and hosted on Azure App Service on Windows, Application Insights is generally available and enabled by default. For applications written in Python and hosted on Azure Functions, Application Insights is available and enabled by default, but for dependency monitoring you will need to use an extension. So that's an overview of Application Insights instrumentation.
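Conceptually, instrumentation just wraps your code so that telemetry (timings, successes, failures) is recorded on every call. Here's a minimal, language-agnostic sketch of that idea in Python; this is not the Application Insights SDK, and the names (`instrument`, `TELEMETRY`) are invented for illustration:

```python
# Conceptual sketch of what instrumentation does: wrap a function so each
# call records telemetry (duration, success/failure). This is NOT the
# Application Insights SDK -- the names here are invented for illustration.
import time
from functools import wraps

TELEMETRY = []  # stand-in for the channel that ships data to Azure Monitor

def instrument(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            TELEMETRY.append({"op": fn.__name__, "success": True,
                              "ms": (time.perf_counter() - start) * 1000})
            return result
        except Exception:
            TELEMETRY.append({"op": fn.__name__, "success": False,
                              "ms": (time.perf_counter() - start) * 1000})
            raise
    return wrapper

@instrument
def handle_request(path):
    return f"200 OK {path}"

handle_request("/home")
print(TELEMETRY[0]["op"], TELEMETRY[0]["success"])  # handle_request True
```

The real SDKs do exactly this kind of wrapping for you across requests, dependencies, and exceptions, which is why auto-instrumentation can work with no code changes at all.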
Next is Microsoft Sentinel, formerly known as Azure Sentinel. Microsoft Sentinel is a scalable, cloud-native solution that encompasses two key functionalities:
- Security information and event management (SIEM): this is all about collecting and analyzing security-related data to provide real-time analysis of security alerts generated by applications and infrastructure.
- Security orchestration, automation, and response (SOAR): this refers to the collection of tools that enable an organization to define, standardize, measure, and automate responses to security events.
Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response. With Microsoft Sentinel you can collect data at cloud scale across all users, devices, applications, and infrastructure, both on premises and in multiple clouds; detect previously undetected threats and minimize false positives using Microsoft's analytics and unparalleled threat intelligence; investigate threats with artificial intelligence and hunt for suspicious activities at scale, tapping into years of cybersecurity work at Microsoft; and respond to incidents rapidly with built-in orchestration and automation of common tasks. Microsoft Sentinel comes with a number of connectors for Microsoft solutions such as Microsoft 365 Defender, Office 365, Azure AD (now Microsoft Entra ID), Microsoft Defender for Identity, and Microsoft Defender for Cloud Apps. For other sources you can use Common Event Format (CEF), Syslog, the REST API, Windows event logs, and Trusted Automated eXchange of Indicator Information (TAXII).
One notable feature of Microsoft Sentinel is the ability to create Azure Monitor workbooks. Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure and combine them into unified, interactive experiences; a workbook tells a story about the performance and availability of your applications and services, in a document-like format with visualizations.
Microsoft Sentinel uses analytics to correlate alerts into incidents; incidents are groups of related alerts that together create an actionable, possible threat that you can investigate and resolve. Microsoft Sentinel's automation and orchestration solution provides a highly extensible architecture that enables scalable automation as new technologies and threats emerge; built on the foundation of Azure Logic Apps, it includes 200-plus connectors for services. Microsoft Sentinel also offers deep investigation tools that help you understand the scope and find the root cause of a potential security threat: you can choose an entity on the interactive graph to ask interesting questions about a specific entity and drill down into that entity and its connections to get to the root cause of the threat. Hunting tools, based on the MITRE framework, enable you to proactively hunt for security threats across your organization's data sources and gain insights into possible attacks; you can also create custom detection rules based on your queries and surface those insights as alerts. You can bookmark interesting events, enabling you to return to them later, share them with others, and group them with other correlating events to create a compelling incident for investigation.
Lastly, let's talk about pricing. Microsoft Sentinel has two different pricing models:
- Capacity reservations: you are billed a fixed fee based on the selected tier, enabling a predictable total cost for Microsoft Sentinel.
- Pay as you go: with this option you are billed per gigabyte for the volume of data ingested for analysis in Microsoft Sentinel and stored in the Azure Monitor Log Analytics workspace.
And there you have it, a comprehensive look at Microsoft Sentinel, a robust SIEM and SOAR solution that can help protect your organization's infrastructure, applications, and data. Now let's take a closer look at Kusto and its query language. So Azure Monitor
Logs is based on Azure Data Explorer, and along with it came the Kusto Query Language, also known as KQL. This is the way we're going to filter and sort and do things with our logs. Kusto is based on a relational database management system, and it supports entities such as databases, tables, and columns; it also has this thing called clusters. KQL actually has a lot of utility in Azure, because it's not just in Azure Monitor Logs: you'll also find it in Logic Apps, PowerShell, and the Azure Monitor Logs API, so it's definitely something you're going to be using across the board in Azure. It has lots of operators you can use, so you can do calculated columns, searching and filtering on rows, group-by aggregates, and join functions, and we're going to be looking at a lot of the operators in more detail after this slide. Anyway, queries execute in the context of a Kusto database that is attached to a Kusto cluster, so let's talk about clusters, databases, and tables. We have a bunch of entities here: clusters, databases, tables, columns, and functions, and I have this nice visual to help us see how they all work together. At the top we have clusters, and these are entities that hold multiple databases; you can also have multiple clusters, it's just not being shown in that graphic. Then you have the databases themselves: these are named entities that hold tables and stored functions. Then you have tables: these are named entities that hold data; a table has an ordered set of columns and zero or more rows of data, each row holding one data value for each of the columns of the table. Then there are the columns themselves, and these are named entities that have a scalar data type; columns are referenced in the query relative to the tabular data stream that is in the context of the specific operator referencing them. Then we have stored functions, and these are named entities that allow reuse of Kusto queries or query parts. And then you've got these external tables: these are tables that live outside of your cluster. I think you're referencing them from storage accounts, and they're in Blob storage, so they could be things like CSV files. These external tables are used for exporting data from Kusto to external storage, as well as for querying external data without actually ingesting it into Kusto. Hopefully that gives you an idea of the lay of the land.
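To tie the operator list together, here's a small example KQL query against the built-in Perf table of a Log Analytics workspace, showing row filtering, a calculated column produced by aggregation, group-by, and ordering (the one-hour window and five-minute bin are arbitrary choices for the example):

```kusto
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| where TimeGenerated > ago(1h)
| summarize AvgCpu = avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
| order by TimeGenerated asc
```

Each `|` pipes the tabular stream into the next operator, which is why columns like CounterValue are always interpreted relative to the operator referencing them.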