Introduction
In the rapidly evolving world of cybersecurity, ensuring a solid understanding of general security concepts is crucial, especially for those preparing for the Security Plus exam. This article is dedicated to Domain 1 of the exam cram series for 2024, where we will explore the various categories and types of security controls, the significance of change management, and the role of cryptographic solutions.
By the end of this article, you will have a better grasp of foundational concepts that will help you succeed in the Security Plus exam and enhance your understanding of security practices in real-world scenarios.
Understanding Categories of Security Controls
Security controls are essential for protecting IT environments and ensuring the integrity, confidentiality, and availability of systems and data.
Types of Security Controls
- Technical Controls: These are hardware and software mechanisms used to protect system resources. Examples include encryption, firewalls, and access control lists.
- Physical Controls: These measures protect the physical premises and assets, including fences, security guards, and surveillance cameras.
- Managerial Controls: These derive from the organization’s security policies and procedures, focusing on risk management. They include security training, hiring practices, and policy enforcement.
- Operational Controls: Focused on daily operations, these controls are often implemented by personnel. They include conducting security awareness training and configuration management.
Types of Security Controls by Function
- Preventive Controls: Intended to stop unauthorized activities (e.g., firewalls, locks).
- Deterrent Controls: Discourage breaches (e.g., security badges, policies).
- Detective Controls: Discover breaches post-incident (e.g., intrusion detection systems).
- Corrective Controls: Resolve issues after a breach has occurred (e.g., restoring backups).
- Compensating Controls: Alternative measures that help mitigate risk (e.g., redundant systems).
- Directive Controls: Policies that instruct and guide actions (e.g., compliance policies).
Context in Control Types
It's important to note that often a single control can serve multiple purposes depending on the context. For instance, a security camera can act as both a deterrent and a detective control.
The CIA Triad – The Foundation of Cybersecurity
The CIA Triad represents the core principles of cybersecurity:
- Confidentiality: Ensures that sensitive information is only accessible to authorized users.
- Integrity: Guarantees the accuracy and authenticity of data.
- Availability: Ensures that authorized users have access to information when needed.
In addition to the CIA Triad, concepts like non-repudiation, which ensures that actions can be traced back to individuals, and accountability through logging user activities are essential for a robust security framework.
Importance of Change Management in Security
Change management is crucial for maintaining security. It involves processes that govern changes to systems, assets, and configurations. Effective change management includes:
- Requesting Changes: Changes should be formally requested.
- Approval Process: Management should review and authorize changes.
- Testing Changes: Validating changes in a test environment prepares organizations for potential challenges.
- Backout Plans: Detailed procedures must be created to revert to previous configurations if issues arise.
- Documenting Changes: Keeping track of modifications and updates is vital for security audits and compliance.
Cryptographic Solutions and Their Relevance
Encryption is essential for protecting information at rest and in transit. The following methods of encryption can help secure sensitive data:
- Symmetric Encryption: Uses a single secret key for both encryption and decryption (e.g., AES).
- Asymmetric Encryption: Utilizes a public-private key pair to secure communications (e.g., RSA).
- Hashing: A one-way function that generates a fixed-length hash value to verify data integrity (e.g., SHA-256).
Cryptographic solutions ensure confidentiality, integrity, authenticity, and non-repudiation in communications and data storage.
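As a quick illustration, here is a minimal Python sketch of these ideas, using the standard library for hashing and the third-party cryptography package (an assumption on our part, not something the exam requires) for symmetric encryption:

```python
import hashlib
from cryptography.fernet import Fernet  # assumes "pip install cryptography"

data = b"payroll-2024.csv contents"

# Hashing (integrity): any change to the data produces a completely different digest.
print(hashlib.sha256(data).hexdigest())

# Symmetric encryption (confidentiality): the same secret key encrypts and decrypts.
key = Fernet.generate_key()
token = Fernet(key).encrypt(data)
print(Fernet(key).decrypt(token) == data)  # True
```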
Conclusion
Understanding the concepts covered in Domain 1 of the Security Plus exam is foundational to your success in the field of cybersecurity. From security controls to change management and cryptographic techniques, each element plays a critical role in protecting digital assets and maintaining robust security practices. Be sure to familiarize yourself with these concepts and apply them in real-world scenarios for comprehensive preparation and to uphold the highest security standards.
welcome to domain 1 of the Security Plus exam cram series 2024 Edition and here in domain 1 we'll focus on General
Security Concepts we'll begin with a look at the categories and types of security controls before moving on to
coverage of an array of fundamental security Concepts we'll explore the impact of change management on security
and we'll close out domain one with a look at the importance of appropriate cryptographic Solutions domain 1 helps
us establish the foundation for everything else we cover in the Security Plus syllabus and as always we'll go
line by line through every skill measured in the official exam syllabus important stuff let's get
started this is the installment covering every topic in the official exam syllabus for domain one of the Security Plus
exam because it's so often requested I've included a PDF copy of this presentation available for download in
the video description so you can review at your leisure as you prepare for the exam and I've also included a clickable
table of contents in the video description so you can move forward and back through topics as necessary as you
prepare and as with the previous release of the Security Plus exam I recommend the official study guide from Sybex
which includes 500 practice questions 100 flashcards and two practice exams as well as the companion practice
test manual which brings another thousand practice questions and two practice exams and if you register for
the online resources so you can leverage these questions in an electronic format I believe it's all the practice quizzing
you're going to need to prepare yourself for exam day and I will leave you links in the video description so let's dive into domain
one where we will focus on General Security Concepts and we're going to go line by line through every topic
mentioned in the official exam syllabus so section 1.1 focuses on comparing and contrasting the various
types of security controls so this is a fairly short but very important section so we'll start with the categories which
include technical managerial operational and physical now what's different here versus past versions of the exam and
other exams out there is the inclusion of the operational category really just a more granular way
of considering the control types which have not changed they are preventive deterrent detective corrective
compensating and directive I'll give you two bits of advice for exam day number one you should know some examples of
each for the exam I'll help there and know that controls can fit into multiple types based on the context of the
situation I see folks get wound up on this fact as they're working through their practice exams and their exam prep
I'll take you through a logical way to think about this to ensure you can get the right answer on control related
questions first we have technical controls these are hardware or software mechanisms used to manage access to resources and systems
and to provide protection for those resources and systems next we have physical these are security mechanisms
focused on providing protection to the facility and Real World objects then we have managerial which are the policies
and procedures administrative controls really defined by an organization's security policy managerial controls use
planning and assessment methods to review the organization's ability to reduce and manage risk and then we have
that operational category which helps to ensure that the day-to-day operations of an organization comply with their
overall security primarily implemented and executed by people instead of systems I think of operational as people
enforcing the managerial controls supporting physical security and using the technology we've implemented through
technical controls to ensure that we comply with our overall security strategy let me give you some examples
here we'll start with technical we have encryption smart cards passwords biometrics access control lists firewalls
routers intrusion detection and prevention again it's the technology next we have the physical guards fences and the like
next we have managerial policies and procedures hiring practices background checks data classification
security training risk assessments vulnerability assessments but the focus here is all of these practices laid out
in policies and procedures the organization follows and then we have the operational category which would
include things like conducting the awareness training configuration management media protection and so on
so to summarize those a couple of different ways we have technical which is the implementation of the hardware
and software, managerial which is what we've documented, and then the operational, people doing stuff so to visualize these categories we have our assets the focus
of our protection and we have our managerial technical and physical controls if we're looking at this
historically the policies which give us guidance on the what the technical controls we're implementing the hardware
and the software to help with the how and then a layer of physical security around our facilities devices and other
assets it's important to remember there is no security without physical security if I can get into your facility get into
your data center get into your wiring closet there is no technical control that can then stop me there is no
managerial policy that's going to prevent me from doing damage as an attacker now let's insert that
operational layer people Centric activities conducting the awareness training ensuring the backups have
completed making sure the media is stored appropriately so we can use it for Recovery when necessary implementing
the managerial policies supporting the technology and the physical and while these categories are important it's
really the types of controls that are going to come up in questions on the exam but before we dive into control
types I want to sync on the definition of a security control security controls are security measures for countering and
minimizing loss or unavailability of services or apps due to vulnerabilities you'll often hear the terms safeguard and countermeasure
safeguards are proactive controls they reduce the likelihood of occurrence and countermeasures are reactive they
reduce impact after occurrence of the security event now let's dive into control types we have the deterrent
control which is deployed to discourage violation of security policies preventive controls deployed to
thwart or stop unwanted or unauthorized activity from occurring detective controls deployed to discover or detect
unwanted or unauthorized activity compensating controls deployed to provide options to other existing controls to aid in enforcement of security policies they're supporting corrective controls deployed to
return systems to normal after an unwanted or unauthorized activity has occurred and directive which direct
confine or control the actions of subjects to force or encourage compliance with our security policies
and you'll notice I've highlighted the key descriptors of each type along the way here that you'll want to remember
for the exam now let's look at some examples of control types together we have preventive controls
deployed to stop unwanted activity and examples here include fences locks biometrics alarm systems and more
next we have deterrent controls deployed to discourage violation of security policies this control picks up
where prevention leaves off our examples here locks fences security badges guards lighting cameras alarms separation of
duties security policies and security awareness training do you notice the overlap in control types here the fact
of the matter is every security control is generally going to fall into one control category but will map to
multiple types take a lock for example it's also a psychological barrier locks create a visible and tangible barrier even if the lock is unlocked if I have a padlock
that's even unlocked and hanging there on a gate that sends a signal that not just anybody should be walking through
there and it also conveys increased perceived effort when it is locked it makes the would-be trespasser think
twice but stick with me and I'll show you how to navigate the overlap on the exam here in a moment and next we have
detective controls job rotation mandatory vacations audit trails intrusion detection these all allow us to detect
or discover unwanted activity next directive controls which direct confine or control actions policies and
procedures standards guidelines physical signage directing behavior verbal instructions contracts and agreements
then corrective controls which restore systems to normal backups and restores patching antivirus or antimalware forensic analysis
disciplinary action all play a direct or indirect role in returning our systems and environment back to normal and
finally compensating controls which provide options to existing controls to Aid in enforcement and supporting our
security policy they are additional backup supporting controls these could include security policies Personnel
supervision monitoring work task procedures and when I say security policies that could be any of a number of supporting requirements
now it's time to address the overlap we see here in types so we have one control that maps to multiple types or functions and
you saw it in those previous examples a single security control can be identified as multiple types depending
on the context of the situation and that is just a fact of life security controls are designed to work together and their
functions often overlap for example a security camera system is both deterrent it deters unwanted entry and detective
it records potential security incidents for later review if as a deterrent it doesn't do its job successfully so
context matters the classification of a control can depend on how it's implemented and the specific risk it's
addressing so a context based example we have an access control list that can be primarily preventive if it blocks
unauthorized access but it can also be detective in supporting a later investigation perhaps the access control list showed that an individual should be granted access to the file repository
but they then deleted sensitive data that shouldn't have been deleted well at that point the activity was logged and
can be investigated later so when we take this knowledge to the exam it comes down to the language exams often use
specific words or phrases to hint at a control type so let's look at some keywords for each of the six types that
you can use to reason your way to the right answer on an exam words like warning sign visibility
perception these indicate a deterrent control preventative Access Control authentication firewall encryption these
all prevent access these are preventive in nature we have policy procedure standard guideline all designed to
direct good behavior so they are directive monitoring auditing logging alerting all designed to detect Behavior
so that's a detective control backup restore incident response patching all correcting negative
conditions a sure sign of a corrective control and alternative backup redundancy supporting all signs of a
compensating control so keep this information in mind and I think security control related questions on the exam will go smoothly for you
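For study purposes only, the keyword-to-type mapping just described can be captured in a quick reference like the sketch below (my own summary of the discussion above, not an official CompTIA list):

```python
# Exam keywords that hint at each control type, per the discussion above.
CONTROL_TYPE_HINTS = {
    "deterrent":    ["warning sign", "visibility", "perception"],
    "preventive":   ["access control", "authentication", "firewall", "encryption"],
    "directive":    ["policy", "procedure", "standard", "guideline"],
    "detective":    ["monitoring", "auditing", "logging", "alerting"],
    "corrective":   ["backup", "restore", "incident response", "patching"],
    "compensating": ["alternative", "backup control", "redundancy", "supporting"],
}

def guess_type(question_text: str) -> str:
    """Return the first control type whose keywords appear in the question text."""
    text = question_text.lower()
    for control_type, keywords in CONTROL_TYPE_HINTS.items():
        if any(k in text for k in keywords):
            return control_type
    return "unknown"

print(guess_type("Which control involves monitoring and alerting on suspicious logons?"))  # detective
```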
section 1.2 asks us to summarize fundamental security concepts so read that as foundational concepts that apply
broadly we'll look at the CIA triad and what's called the AAA protocols we'll dig into authenticating people and systems and authorization models here the purpose
and outcome of a gap analysis we're going to go deep on zero trust both at the control plane and the
data plane and the language we see here tells us that CompTIA is pulling a page from NIST's special publication
800-207 on zero trust then we'll move to physical security touching on bollards access control vestibules fencing lighting guards cameras and we'll touch on four
types of sensors even if you've been around security for a while those may not all be clear to you so we'll dive
deep on those and we'll wrap up 1.2 with deception and disruption technology the honey pot honey net and supporting
components and what do we see in that physical security topic we see security controls so I want you to be thinking
about control categories and control types as we go through this content reinforcing what you learned in our
previous installment so let's dive into the CIA Triad which as a security professional you should know by heart so
CIA stands for confidentiality integrity and availability we see it represented in the Triangle 1 two and three
beginning with confidentiality so access controls help ensure that only authorized subjects can access objects
we'll dig a little deeper in this session but think of subjects as people and objects as resources such as data
then integrity which ensures that data and system configurations are not modified without authorization that the file sent exactly matches the file received
and then availability because authorized requests for objects must be granted to subjects within a reasonable amount of time
next is non-repudiation which guarantees that no one can deny a transaction and the most common method
to provide non-repudiation is digital signatures which prove that a digital message or document was not
modified intentionally or unintentionally from the time it was signed that document could be an email
a contract or a file a digital signature is actually based on asymmetric cryptography a public private key pair it's the digital equivalent of a
handwritten signature or a stamped seal and it provides non-repudiation in a publicly verifiable manner because the public
key is available to anyone who needs to verify the signature stated another way non-repudiation is the ability of one party to defeat or counter a false rejection or refusal of
an obligation by the other party with irrefutable evidence so the digital signature on that message or document proves which
parties were involved in the transaction and it cannot be denied side note do remember that shared accounts and
identities prevent non-repudiation simple example if I have a Twitter account and three people have
access using the same credentials I can't prove who posted a tweet ever
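To make the mechanics concrete, here is a minimal sketch using Python's third-party cryptography package (an assumption of mine, not a tool named in the syllabus) to sign a message with a private key and verify it with the matching public key:

```python
# Minimal digital-signature sketch (assumes the "cryptography" package is installed).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The signer generates (or already holds) an asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Approve wire transfer #1234"

# Sign with the PRIVATE key: only the key holder can produce this signature.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone with the PUBLIC key can verify it; verification raises an exception if
# the message was altered after signing -- integrity plus non-repudiation,
# since only the private-key holder could have produced the signature.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```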
we'll often hear the concept of AAA mentioned in the context of several protocols that provide authentication authorization and accounting services
and we'll touch on those protocols here and there throughout the series but I want to focus right now on these three
concepts authentication is proving an identity authorization is where the authenticated users are granted access to resources based on the roles and or permissions assigned to
their identity and accounting refers to the methods that track user activity and records these activities in logs
and these three concepts really are prefaced by a fourth they go hand in hand so I want to take another pass at
this from a slightly different angle let's talk about identification and authentication so identification is
where a subject claims an identity and identification could be as simple as a username for a user as simple as an
active directory account and authentication again with a subject proves their identity by providing
authentication credentials the matching password for a username for example and this leads to authorization
so after authenticating subjects systems can authorize access to objects based on their proven
identity and then there's accountability auditing logs and audit Trails record events including the identity of the
subject that performed the action so we have authorization that comes after authentication and accountability that
comes after authorization so why is accountability important let's go through the hows to get to the why so accountability is
maintained for individual subjects using auditing logs record user activities and users can be held accountable for their
compliance with the organization's security policies generally speaking users are going to behave when they know
their actions are being audited when they are being logged and it provides an audit Trail for
investigation if the fact that we're logging doesn't deter that bad behavior or if heaven forbid we have a security
breach a compromised identity we're going to have that audit Trail so we can go back and piece together the sequence
of events and this discussion can extend beyond users to systems and devices as well it's common in modern
Enterprises that systems and devices will have identities also two good examples virtual machines in the cloud
will have a managed identity managed by the platform created and deleted with the VM sharing its life cycle and used
by the VM when it accesses resources such as data so we have an audit Trail and client devices will often have
machine identities in a mobile device platform often tied back to the identity provider platform
and that can be leveraged to make decisions around authentication and authorization of the user on the device
which brings us to our next topic authorization models so you want to be familiar with all these models for the
exam beginning with non-discretionary access control which enables the enforcement of systemwide restrictions
that override object specific Access Control role-based access control is an example of a non-discretionary
authorization model in discretionary access control every object has an owner and the owner can
grant or deny access to any other subject at their discretion this model is considered to be user-based it's the model
used in Windows widely used for more than a couple of decades now next we have role-based access control a key
characteristic of which is the use of roles or groups so instead of assigning permissions directly to users
the user accounts are placed in roles and administrators assign privileges to the roles these are typically mapped to
job functions next we have rule-based access control which applies global rules to all subjects the rules within this model are sometimes referred to as
restrictions or filters a good example of rule-based access control is a firewall that uses rules that allow or block traffic to all users equally
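Since rule-based control is easiest to see in action, here is a minimal sketch, with made-up rule values, of how a firewall-style rule list is evaluated top-down and applied to every subject equally:

```python
# Hypothetical rule list evaluated in order; first match wins, implicit deny at the end.
# Values are illustrative only, not taken from any real firewall.
RULES = [
    {"action": "allow", "protocol": "tcp", "port": 443},   # HTTPS for everyone
    {"action": "allow", "protocol": "tcp", "port": 22},    # SSH to the jump host
    {"action": "deny",  "protocol": "any", "port": None},  # catch-all deny
]

def evaluate(protocol: str, port: int) -> str:
    """Apply the same global rules to every connection, regardless of user."""
    for rule in RULES:
        proto_match = rule["protocol"] in ("any", protocol)
        port_match = rule["port"] in (None, port)
        if proto_match and port_match:
            return rule["action"]
    return "deny"

print(evaluate("tcp", 443))  # allow
print(evaluate("udp", 53))   # deny (falls through to the final rule)
```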
and finally we have mandatory access control a key point about mandatory access
control is that every object and every subject has one or more labels these labels are predefined and the system
determines access based on assigned labels an example of mandatory access control that comes immediately to mind
is military security where the data owner doesn't set access if data is top secret they don't determine who has top
secret clearance nor is that individual data owner allowed to down classify data so they couldn't down classify data from
top secret to secret for example and then we have attribute-based access control where access is restricted based on an attribute on the account such as department location or
job title for example a user might need a particular department attribute in order to view contracts now just to be sure you're clear for the
exam let's touch on subjects and objects directly key Concepts in Access Control for sure so subjects are the users
groups and services accessing resources known as objects and the objects are the resources files folders shares printers
databases any resources being accessed by the subject and the authorization model determines how a system grants
subjects access to objects these terms come up constantly in discussions of access control so just make sure you have them straight in your head for the exam the syllabus also
calls out Gap analysis which is a common task performed on a recurring basis and often in preparation for external audits
so in a gap analysis auditors will often follow a standard like ISO 27001 and compare standard requirements to the
organization's current operations and deficiencies versus the standard will be captured in the audit
report as gaps sometimes called control gaps a control Gap is a discrepancy between the security measures an
organization should have in place versus the controls they actually have in place the outcome is an attestation which is a
formal statement made by the auditor on the controls and processes in place and as to whether or not they are
sufficient and both internal and external auditors should have independence in the audit process but
attestations from external auditors tend to carry more weight higher confidence because the auditor is not
employed directly by the organization
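To ground the idea, here is a tiny sketch of a gap analysis as a set comparison; the control names are invented placeholders, not taken from any real standard:

```python
# Hypothetical required-vs-implemented comparison; control names are made up.
required_controls = {"MFA for admins", "Quarterly access reviews", "Encrypted backups", "Change approval board"}
implemented_controls = {"MFA for admins", "Encrypted backups"}

gaps = required_controls - implemented_controls  # control gaps to capture in the report
print(sorted(gaps))
# ['Change approval board', 'Quarterly access reviews']
```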
zero trust is called out in great detail in section 1.2 of the syllabus so zero trust is an approach to security architecture in which no entity is trusted by default and zero trust is
based on three principles assume breach verify explicitly and least privilege access and zero trust has largely
replaced the old trust but verify model which was based on a network perimeter strategy where everything inside the
perimeter was automatically trusted and it's supported by defense in depth that advises a layered approach to security
to think about it another way zero trust really addresses the limitations of that Legacy network perimeter-based security
model it treats identity as the control plane and it assumes compromise and breach in verifying every request again
no entity is trusted by default in zero trust we verify identity we manage devices and apps and we protect data so
let's talk access policy enforcement in the context of zero trust we have the policy enforcement point which is
responsible for enabling monitoring and terminating connections between a subject like a user or a device and an
enterprise resource the policy enforcement Point acts as the Gateway that enforces Access Control
policies so when an access request occurs the policy enforcement Point evaluates the request against
pre-defined policies and applies the necessary controls for example a policy enforcement Point might enforce
multifactor authentication for access requests from unexpected locations which would imply that enforcement is dynamic
based on conditions and context around the request at the time of the request and then we have the policy decision
point which is where access decisions are made based on various factors like user identity device health and risk
assessment the PDP evaluates the context of an access request and decides whether it should be allowed denied or subjected
to additional controls the policy decision Point considers the five ws who what when where and why but to State it
in short the policy enforcement Point enforces policies at the connection level while the policy decision Point
makes access decisions based on contextual information the exam syllabus calls out several key elements of zero
trust in the control plane we have adaptive identity threat scope reduction policy driven access control policy administrator and policy engine
you'll need to be familiar with all of these the control plane drives the policy based decision logic for zero trust in the data plane we have
implicit trust zones subject and system and policy enforcement point so this enforces the decisions defined in the
control plane if you're wondering where these elements of zero trust Network architecture come from they are
described in detail in NIST special publication 800-207 adaptive identity changes the way the
system asks a user to authenticate based on the context of the request so the policy decision points going to look at
elements like location the device the user is coming from is that device healthy are they using an approved app
is there any risk associated with this user threat scope reduction is really an end goal of zero trust Network
architecture then we have policy driven access control which are controls based on a user's identity rather than simply their system's location probably the most
popular policy driven access control out there is conditional access in Microsoft's entra ID formerly Azure
Active Directory I'll show you a conditional access policy here in just a moment so you can get a sense of what a system like that looks like and we have
the policy administrator responsible for communicating the decisions made by the policy
engine this is an element of the system not a human person and then we have the policy engine which decides whether to
Grant access to a resource for a given subject another example here is entra ID the identity platform used with Office
365 but the policy administrator and the policy engine together make up the policy decision Point moving
on to the data plane we have implicit trust zones which are part of traditional security approaches in which
firewalls and other security devices formed a perimeter systems belonging to the organization were placed inside the
boundary so we see subject and system called out here the subject is a user who wishes to access a resource and a
system as a nonhuman entity often the device used by the user to access the resource and then we have the policy
enforcement point when a user system requests access to a resource the policy enforcement Point evaluates it against
predefined policies and applies the necessary controls Microsoft entra ID is a good example of a policy enforcement
point so we're going to visualize these Concepts a couple of different ways for context so let's consider conditional
access in entra ID so the system will look at the signals around the request the user their location the device the
application the realtime risk of that user if the user's current risk level is high based on recent activities that's
going to influence the decision it will verify every access attempt it may just allow access if conditions are good it
may require MFA some additional authentication to deal with any concerns around location device or
risk or it may block access altogether but if the user meets the bar if all the conditions of the request are acceptable
they'll gain access to the apps and data they're requesting
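As a thought experiment only, here is a minimal sketch of that kind of policy decision logic; the signal names and thresholds are invented for illustration and are not how Entra ID is actually implemented:

```python
# Hypothetical policy decision point: evaluates request signals and returns a decision.
def decide(user_risk: str, device_compliant: bool, location_trusted: bool) -> str:
    """Return 'block', 'require_mfa', or 'allow' based on request context."""
    if user_risk == "high":
        return "block"                # assume breach: too risky to allow at all
    if not device_compliant or not location_trusted:
        return "require_mfa"          # verify explicitly with an extra factor
    return "allow"                    # all conditions met; least friction

print(decide("low", True, True))    # allow
print(decide("low", False, True))   # require_mfa
print(decide("high", True, True))   # block
```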
so let's look at a logical diagram of the zero trust concepts we've been talking about here so we have the control plane and the data plane and in
the data plane we have the policy enforcement point in the control plane we have the policy decision point which
is comprised of the policy engine and the policy administrator we have our system and subject
which make the request and the policy enforcement point will enforce the final decision there and if granted give the
subject and system access to the enterprise resource there are certainly many supporting systems and functions
like threat intelligence but these are the core components so the policy enforcement point is where security controls are
applied it's where they are enforced and the decisions are made in the policy decision point so we talked
about Concepts like adaptive identity so I want to give you a quick tour of conditional access in Microsoft enter ID
so you can see how those conditions around access come together in a policy but again this is just for
context to help you connect the dots Security Plus is vendor agnostic so I'm going to switch over to a browser and
I'll go to the Microsoft entra admin Center and I will look for the conditional access area I will look at
the policies and we'll take a look at an existing policy so exchange online requires compliant device so I'll look
at the settings of this policy so you can get a sense of the conditions you see here I can apply this to specific
users or groups if I wish I can specify the target resources in this case we're targeting exchange online so I could target this to as much
as all my cloud apps I can go very broad or very narrow and then when I look at my conditions here I see I can look at
the user risk for example so I can make decisions based on the user's risk level and I can look at their signin
risk so if we have concerns about the sign-in itself but you'll notice here it mentions the sign-in risk level and
if I look at device platforms you see I can drill down and apply a policy that applies only to Windows or Mac or Android or iOS for
example and we can look at location so I can exclude trusted locations if I wish maybe I don't want to prompt users for
additional authentication factors when they're on a managed device in a known trusted location like the corporate
office we get sign in fatigue and unhappy users when we're overdoing it in that respect so we have to establish our
boundaries based on our confidence and I'll go over here and look at my access controls so I can grant
access but require multifactor authentication require a specific strength of authentication I can require a device to be marked as compliant or to
be joined to my organization joined to my entra organization for example I can require an approved app and you'll
notice I can require any one of these selected controls or I can apply them all and say you must meet all of these
conditions and the more sensitive the operation the more likely I'm going to go that route of requiring multiple
conditions in that respect but that's how adaptive identity flows in the Microsoft ecosystem but you'll find
similar Concepts across many platforms out there so if you don't have any exposure hopefully that gives you a bit
of context so let's move on to physical security it's important to remember there is no security without physical
security without control over the physical environment no amount of administrative or technical access
controls can provide adequate security if a malicious person can gain physical access to your facility or your
equipment they can do just about anything they want from destruction of property to disclosure and
alteration so physical security is that first outer layer of protection and we'll go through the physical security
controls mentioned in the exam syllabus in order beginning with the Ballard which is a short sturdy vertical
post usually made of concrete steel or some heavy duty material they can be fixed in place or
retractable but they act as physical barriers preventing vehicles from forcibly entering a restricted area
they often delineate pedestrian areas parking lots and sensitive zones to minimize accidental damage but they're
also effective against deliberate vehicle attacks next we have the access control vestibule which is a physical security system comprising a small space with two
interlocking doors only one of which can be open at a time it's designed to strictly control access to highly secure
areas by allowing only one person at a time to pass through this will protect against tailgating where a user slips
through an entry based on someone else's badge when they themselves don't have a badge it also prevents piggybacking
which is just like tailgating but typically with bad intent to gain access to a restricted area you don't really need
to be too worried about the details between those two they both describe a situation where somebody tries to follow
someone with a badge into a system without using a badge of their own and the access control vestibule will really
help block unauthorized access of any kind you may have previously heard the access control vestibule called a man
trap the naming has been updated in recent years but two names for the same thing so fences are called out on the
exam so let's talk about the characteristics of fences typically efficacy comes down to their height and
their composition so a fence of 3 to 4 ft deters the casual trespasser a 6 to 7 ft fence is too
difficult to climb easily it might block vision which provides additional security if folks standing on the ground
can't see what's behind the fence on the other hand an 8 ft fence topped with barbed wire will deter determined
intruders and then we could even employ what's called a PIDAS a perimeter intrusion detection and assessment
system which will detect someone attempting to climb a fence a PIDAS is an expensive control and it may generate false alarms we can
also erect stronger barricades or zigzag paths to prevent a vehicle from ramming a gate so really think of that as a
layered defense as defense in depth where we're adding additional supporting controls compensating controls of a
fashion if we go back to our previous installment so next we have video surveillance so cameras and close
circuit TV systems can provide video surveillance and reliable proof of a person's identity and activity and many
cameras nowadays include motion and object detection capabilities which will kick them into action when necessary
when there's activity to capture that makes combing through camera footage for Meaningful events much easier after the
fact we have security guards a preventive physical security control and they can prevent unauthorized Personnel
from entering a secured area they can recognize people and compare an individual's picture ID for people they
don't recognize notice the control types at work here video is detective security guards are preventive access badges are preventive maybe you can see how each
of these can also act as a deterrent a guard is a psychological barrier as well as a physical barrier that may deter bad behavior a video camera can do the same thing if someone
sees that video camera they may simply think twice it discourages them from acting with lighting we need to think
about location efficiency and protection so in terms of location installing lights at all entrances and exits to a
facility is a good start in terms of efficiency a combination of automation light dimmers and motion sensors can save on electricity cost without
compromising security they can even be motion detecting and we need to protect the lights if an attacker can remove the
light bulbs it defeats the control if the attacker can break the light bulb it defeats the control so either place the
lights high enough that they can't be reached or protect them with a metal cage and your lighting is a deterrent
control there are four types of sensors called out in the syllabus the first is infrared which detects heat signatures
in the form of infrared radiation emitted by people animals or objects infrared sensors are often integrated
into security cameras and alarm systems to improve detection capabilities next we have pressure
sensors which are designed to detect changes in pressure on a surface or in a specific area such as a person walking
on a floor or stepping on a mat pressure sensors are used in Access Control Systems to ensure that only authorized
individuals can enter microwave sensors use microwave technology to detect movement within a
specific area they're often used with other types of sensors to reduce false alarms ultrasonic sensors emit high
frequency sound waves and measure the time it takes for the soundwaves to bounce back after hitting an object or a
surface ultrasonic sensors are commonly used in parking assistance robotic navigation and intrusion
detection and in the category of deception and disruption we have the honeypot honeypots lure bad people
into doing bad things it lets you watch them but honeypots should only entice not entrap you're not allowed under US
law to entrap if you want your evidence to be admissible in court for example
allowing them to download a fake payroll file might be considered entrapment the goal of a honeypot is really to
distract from real assets and isolate the attacker in a padded cell until you can track them down a group of honeypots is called a
honeynet know that one for the exam then we have the honey file which is a decoy file attractively named so it attracts the attention of an
attacker then the honey token is a fake record inserted into a database to detect data theft these are all intended
to deceive attackers disrupt attackers divert them from live networks and allow observation of our adversaries
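As a purely illustrative sketch (the token format and alerting approach are my assumptions, not a specific product's behavior), a honey token can be as simple as a unique fake credential that should never appear in normal traffic, so any sighting of it signals theft:

```python
import secrets

# Generate a unique fake API key and plant it in a database or config no one should use.
HONEY_TOKEN = "hk_" + secrets.token_hex(16)

def check_log_line(line: str) -> None:
    """Alert if the planted token ever shows up in logs -- someone exfiltrated it."""
    if HONEY_TOKEN in line:
        print("ALERT: honey token observed; investigate possible data theft")

check_log_line("GET /api/orders key=" + HONEY_TOKEN)  # triggers the alert
```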
section 1.3 in the syllabus is explain the importance of change management processes and the impact to security so
we'll be focused on business processes impacting security operations from approval to testing to backout plans
documentation and Version Control so these are really more about what these processes solve for and why do we use
them and we're about to cover every one of them right here so buckle up and I'm going to take one step further right out
of the gate and mention configuration management because when we make changes often we are affecting system or
security configuration and we want to avoid security related incidents and outages that's our top level goal so to cover off configuration management just
briefly it ensures that systems are configured similarly that configurations are known and documented it ensures that
a true current state is known to all and perhaps more importantly that our intended current state is actually
enforced and in an automated way where possible we can automate some of that using baselining which ensures that
systems are deployed with a common baseline or starting point Imaging is a common baselining method for example in
Virtual machines or even in desktop but I can establish Baseline configurations for just about any
service and in the world of cicd continuous integration and continuous deployment I can often
automate implementation of that Baseline through a pipeline through a devops pipeline and then we have change
management which governs changes change management helps reduce risk associated with changes including outages or weakened security from
changes that aren't properly reviewed tested or documented going a step further in change management I want to clarify the difference between change management and
change control you'll often hear these two terms used interchangeably and the difference in their meaning may not
always be clear so Change Control refers to the process of evaluating a change request within an organization and
deciding if it should go ahead in this process requests are generally sent to the change advisory board often called
the cab to ensure that it is beneficial to the organization so essentially change management is the policy that
describes how changes are handled and change control is the process of evaluating a change request to decide if it should be implemented so change management is guidance on the
process and change control is the process in action now let's talk through business processes impacting Security
operation because any change management program should address a few important business processes including approval
which ensures that every proposed change is properly reviewed and cleared by management before it takes place this
ensures alignment across teams and really throughout the organization changes should always have clear
ownership we want to clearly Define who is responsible for each change by designating a primary owner and that
owner will be the key decision maker and sponsor of the change stakeholder analysis identifies
all the individuals and groups within the organization and outside the organization that might be affected by
the change so this enables the team to contact and coordinate with all relevant stakeholders
and we have testing which first and foremost confirms that a change will work as expected by
validating it in a test environment before production rollout from a process perspective test results should be
captured in the change approval request this will be one of the core questions every change approval board is going to
ask that same board will also want to talk about your backout plan which provides detailed step-by-step sequences
that the team should follow to roll back if the change goes wrong this ensures systems can be quickly restored to an
operational state if we have a problem and often as a matter of policy organizations won't allow a change to be
approved if it hasn't been tested and if it does not include a backout plan and then we need to think about when a
change should be rolled out which is where maintenance Windows come into play a standing window of time during which
changes can be implemented often outside of business hours there are certainly inconsequential changes that can happen during business hours but when we think
about critical Services it's going to be outside of business hours and often the maintenance window is defined in
customer contracts and when you roll all of these processes up together these elements together can
make up a solid change management program anything that affects system or data exposure may impact security so we need to make sure we update our documentation our data
flow diagrams and potentially do threat modeling to identify any new attack surfaces and address any new potential
vulnerabilities with appropriate security controls so shifting gears let's talk through the technical
implications that need to be considered as part of the change management process do we need to update allow or
deny lists on our firewall are there any restricted activities here potentially involving sensitive data what are our
expectations of downtime any application restarts impact to legacy applications and what other
dependencies are there in the service chain we need to check all of these boxes in our planning process and at the
end of the day we're looking to address any new exposures even temporary exposures of our data or our
systems why well to avoid service disruptions and security vulnerabilities as system configurations
change attack surfaces may change as well and we need to plan for that throughout the change process so let's
drill down on each of these technical implications we'll start with allow and deny list so firewall rules application
allow deny list Access Control list may all need to be updated some activities may need to be
restricted like data updates during database replication or migration if you have an orders database being updated
and we need to consider any potential downtime because some changes may cause service interruptions which result in
downtime which is where maintenance windows come into play next application restarts so putting controls around risky activities like application and service
restarts whether that's taking a security function offline for its update or taking down a business application we
need to think about how that's going to affect service availability and if we're taking down security related functions
how that affects our security posture during the time that system is offline and then we have to think about
Legacy application so modifications to Legacy apps that may not support some changes like component or service
upgrades this comes up a lot in hybrid cloud because the advantage of the public cloud is your services are always up to date and sometimes the
organization is not ready to update certain applications and services and in some cases you may have a legacy
application that's coasting to end of life and so you need to maintain that aging service until the business is
ready to retire it and Legacy applications bring with them special security concerns you know certainly
vulnerabilities because an application that was developed or architected many years in the past was created without
awareness of modern security concerns there are going to be risk factors that the Architects didn't think about or
could not be aware of 10 or 15 years ago then we need to think about dependencies so tracking dependencies between systems
and services to identify Downstream effects of current and future changes if I'm updating a backend API or database
am I making a change that's going to impact the applications that leverage that data or that API for
example so let's move on to documentation so documentation helps us understand the current state of and the
changes to our operating environment this is a weak spot of many organizations and a real concern when it
comes to supporting and securing the environment documentation captures information about the way systems and applications are designed and configured it serves as an ongoing reference for current and
future staff a change is not closed out until all documentation and diagrams are updated it is a continuous process across new
deployments and changes and there may be multiple teams involved in keeping documentation of a system or service
fully up to date and we have to remember that documentation applies not only to the environment but to policies and
procedures that direct operation and support of that environment at the end of the day there
are some upsides and a downside we need to think about from a security perspective so on a positive note
documentation provides benefits to IT and security operations to business continuity and disaster recovery efforts
an accurate picture of current state is going to be helpful to everyone trying to secure and support that system or
service and we need to remember that you cannot fully secure a system or service for which you do not have a true picture
of current state if you are implementing security controls based on inaccurate information you may be leaving security
vulnerabilities open to potential attackers that no one is aware of and we'll close out 1.3 on Version Control
which is a formal process used to track current versions of software code and system or application
configurations most organizations use a formal version control system that is integrated into their software
development process git is the most widely used version control system in the world invented by Linus Torvalds the creator of Linux
developers modify the code and they check it into a version control system that can identify conflicts in their
changes with those made by other developers and any version control system that is git or based on git is
going to do so with great accuracy it also tracks the current Dev test and production versions of
code and when we think about the DevSecOps discipline security is everyone's responsibility so we're going to be
scanning the code that's being checked into that git repository there will likely be multiple
types of security testing involved from very early in the development process and just one of those can be scanning of
code that's checked in to our git repository code for different environments is typically tracked in
git using code branches we might have a dev branch a test branch a main branch for production
for the exam though focus on the function of version control not on any specific version control system but if
any version control system is mentioned it's going to be git and that brings us to section 1.4
on appropriate cryptographic solutions we're going to cover public key infrastructure or pki a variety of encryption mechanisms and
techniques a number of encryption concepts including hashing salting digital signatures and key stretching
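Since hashing, salting, and key stretching are named here, a quick sketch with Python's standard library may help; the iteration count and salt size are illustrative choices, not exam-mandated values:

```python
import hashlib, os

password = b"correct horse battery staple"

# Plain hash: one-way, but identical passwords produce identical digests.
print(hashlib.sha256(password).hexdigest())

# Salting: a random value per user makes identical passwords hash differently
# and defeats precomputed rainbow tables.
salt = os.urandom(16)

# Key stretching: PBKDF2 repeats the hash many times to slow brute-force attacks.
stretched = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=600_000)
print(stretched.hex())
```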
certificates are directly related because we produce certificates from a pki system so I'm going to cover
certificates right after pki to keep these two together to make your job in preparing for exam day a bit easier but
beyond that I'm going to cover everything else in the order presented as I always do let's Dive Right into
public key infrastructure Concepts beginning with key management which is management of cryptographic keys in a
crypto system so operational considerations include dealing with generation exchange storage use and
crypto shredding or destruction and replacement of keys if a key is lost or expires from a design perspective we
have to look at cryptographic protocol design key servers user procedures and any related protocols related to
key management updates and revocation next we have the certificate authority certificate authorities create digital certificates and own the
policies related to certificate creation functionality and issuance now a pki hierarchy can include
a single certificate authority that serves as the root and the issuing CA and manages all the policies but this is
not recommended because if that server is compromised your entire pki hierarchy itself is compromised there's really no
way back from that sort of breach you'll have to start from scratch you may also hear a certificate
Authority called a certification authority by some vendors Microsoft is one of those just know those are
two ways of saying the same thing for best security you'll see a three tier pki system with an issuing CA as the
first layer a subordinate or intermediate CA sometimes called a policy CA as the second layer and then a
root certificate Authority at the top so the root CA is usually maintained in an offline State this will typically only
be brought online when needed for example to issue or renew a subordinate CA certificate you'll see that subordinate CA sometimes called a policy CA or an intermediate CA multiple names for the same thing and
its role is to issue certificates to new issuing certificate authorities and the issuing CA focuses on exactly that
issuing certificates for clients servers devices websites Etc that represents your chain of trust
hierarchy generally speaking in production you want a two layer hierarchy at minimum so if you have a
breach for example of your issuing CA you can redeploy that without having to start from scratch and in a three-tier
system you could have a breach at the issuing or subordinate levels and still recover by revoking and reissuing new
certificates for the subordinate and issuing CAs the certificate revocation list contains information about any
certificates that have been revoked due to compromises to the certificate itself or to the pki hierarchy the crl of the
issuing CA contains information on revocation of certificates it has issued to clients devices for websites Etc and
CAs are required to publish CRLs but it's up to certificate consumers if they check these lists and how they respond
if a certificate has been revoked for example if you have a web application to which clients authenticate with a
certificate it's up to that web application to go check the crl of that pki to see if the certificate is indeed
still valid or it has been revoked for some reason so each certificate revocation list is published to a file
and the client must download that file to check it this file can grow quite large over time in busy
environments and that fact led to the creation of online certificate status protocol or OCSP which offers a faster
way to check a certificate status compared to downloading a CRL with OCSP the consumer of a
certificate can submit a request to the issuing CA to obtain the status of a specific certificate rather than downloading that entire list
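Just for context, here is a small sketch that pulls a live server certificate over TLS so you can see the subject, issuer, and validity dates that CRL and OCSP checks revolve around; the host name is an arbitrary example:

```python
import socket, ssl

host = "example.com"  # any public HTTPS site will do
ctx = ssl.create_default_context()

# The TLS handshake itself validates the chain against the trusted root store.
with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()  # parsed fields of the leaf certificate

print(cert["subject"])    # who the certificate was issued to
print(cert["issuer"])     # the CA that issued it (next link in the chain)
print(cert["notAfter"])   # expiration; revocation status would come from CRL/OCSP
```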
so some other terms related to pki you should be familiar with include the certificate signing request or CSR
it's the message that's sent to the CA in order to get a digital certificate created the common name or CN that
appears on a certificate is the fully qualified domain name of the entity represented such as the web
server so I've mentioned online and offline certificate Authority so an online CA is always running an offline
CA is kept offline except when needed which as I mentioned is the recommended practice for your root certificate authority there is certificate stapling which is a method used with OCSP which
allows a web server to provide information on the validity of its own certificate it's done by the web server
essentially downloading the ocsp response from the certificate vendor in advance and providing it to
browsers and then there's pinning which is a method designed to mitigate the use of fraudulent certificates once a public
key or certificate has been seen for a specific host that key or certificate is pinned to the
host and at that point should a different key or certificate be seen for that host that might indicate an issue
with a fraudulent certificate
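Here is a minimal sketch of the pinning idea, assuming we pin a SHA-256 fingerprint of the server's certificate obtained out of band; the stored fingerprint value is a placeholder:

```python
import hashlib, socket, ssl

HOST = "example.com"
# Fingerprint recorded earlier through a trusted channel (placeholder value).
PINNED_SHA256 = "replace-with-known-good-fingerprint"

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)  # raw DER bytes of the leaf cert

observed = hashlib.sha256(der_cert).hexdigest()
if observed != PINNED_SHA256:
    # A mismatch may indicate a fraudulent or unexpected certificate.
    raise SystemExit("certificate fingerprint does not match the pinned value")
print("pinned certificate verified")
```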
certificate chaining refers to the fact that certificates are handled by a chain of trust so you purchase a digital certificate from a certificate authority so you trust that
CA certificate and in turn that CA trusts a root certificate in its hierarchy the trust model in a pki is a
model of how different certificate authorities trust each other and how their clients will trust certificates
from other certification authorities the four main types of trust models that are used in pki are bridge
hierarchical hybrid and mesh what you're going to see in your own organization is that hierarchical structure that I
showed you with an issuing CA a policy CA and a root CA in very large organizations or between organizations
that are collaborating in some unique way you may see hybrid or Bridge type trust models where they're creating
trust between their disparate hierarchies hierarchical is going to be the norm and that's what you really need
to think about for the exam they're not going to get into the Weeds on the different trust models but know those
four names next up is key escrow which addresses the fact that a user's cryptographic key may be lost the concern is usually with symmetric keys or with the private key in an asymmetric
design so if they lose that key there's no way to get the key back and the user can't then decrypt messages and
organizations establish key escrows precisely to enable that recovery of lost keys I don't know for certain that
certificate formats will come up on the exam what we'd call X.509 certificate formats that's technically the type of
certificates we're dealing with here you'll sometimes hear them called SSL or TLS certificates in reality TLS has
supplanted or replaced SSL so in this table in column two I have the file extension which tips you off to the
format of the certificate and column three tells you if the private key is included in that file remembering that if
you're trying to install a certificate on a new device or transfer a certificate it's not whole without the
private key many times what you'll find is a certificate is issued and the private key is marked as not exportable
meaning you can't export the whole certificate and transfer it somewhere else there are a few certificate types
you should know for the exam as well so we have a user certificate which is used to represent a user's digital
identity in most cases a user certificate is mapped back to a user account we have a root certificate which
is a trust anchor in a pki environment it's the root certificate from which the chain of trust is derived that's the
most trusted certificate in the hierarchy a domain validation certificate is an X.509 certificate that proves the ownership of a domain name an extended validation certificate
provides a higher level of trust in identifying The Entity that is using the certificate this is common in the
financial services sector when money is on the line it raises the bar but when we look at that
hierarchy, the root CA is the root of trust. So to recap: in a PKI, the root certificate serves as the trust anchor, as it is the most trusted component of the system, and your organization's root certificate will be deployed to your organization's devices, into the list of trusted certificate authorities. But generally speaking, your CA's root certificate is only known and trusted within your organization, so for external, customer-facing, or vendor-facing use cases we need to take a different approach. For resources accessed externally, you'll buy a certificate from a trusted third party; some examples would include DigiCert, Entrust, GlobalSign, and GoDaddy, which all offer certificates for purchase. The root CAs in their organizations and their hierarchies are widely trusted and generally pre-installed on most devices out there, computers and phones and the like. In fact, let me just show you this. Think of the certificate of a root CA as your root of trust, but let me show you the trust hierarchy and that root of trust in the real world, on a device. I've launched the certificates snap-in on my computer here and I'm looking at
the certificate store for the local computer. I'll drill down into certificates here, and for example I see a Microsoft Intune MDM Device CA certificate; that's a client certificate for my device. If I double-click on that certificate, I can see when it's valid and when it was issued, and when I look at the details, I can scroll down to the enhanced key usage, which tells me what it's used for: client authentication, so that this client, this device, can authenticate to Intune. But what about that chain of trust up to the root of trust? If I go to the certification path, you'll see the certificate, the MDM Device CA from which it was issued, and then the Microsoft Intune Root Certification Authority. So there is your chain of trust up to the root of trust. Now what about those trusted third parties we would leverage for external-facing use cases, when we're communicating with entities that need that trust outside of our organization? Well, say I go buy a certificate from a third party: if you look here under Trusted Root Certification Authorities, I can see the certificates of trusted root CAs, and you will see all those companies I mentioned, DigiCert (which has multiple root CAs, as you can see), Entrust, GlobalSign, GoDaddy, and others. These were all pre-installed on this device; I didn't have to do anything beyond installing Windows. But if I work for, let's say, Contoso, the only reason my organization's root CA certificate will be here is because either it's integrated with Active Directory Domain Services, what we'd call an enterprise PKI, at which point it more or less gets installed automatically, or the IT team has through some other means installed that root CA certificate on my device so it is then a trusted source. PKI is a pretty complicated subject; if you get good at PKI early in your career, it's going to serve you well. But I hope that clears up some of the basics. Let's talk through a few more
certificate types. We have the wildcard certificate, which can be used for a domain and its subdomains. For example, in the contoso.com domain we have two servers called web and mail; the wildcard certificate is issued as *.contoso.com, and when installed it would work for the fully qualified domain names of both of these. In short, a wildcard certificate can be used for multiple servers in the same domain, which saves on costs, particularly if we're buying certificates for external-facing functions, but it only supports multiple FQDNs in the same domain. Next we have a code signing certificate. When code is distributed over the internet, it's essential that users can trust it was actually produced by the claimed sender; for example, an attacker would like to produce a fake device driver or web component that's actually malware but is claimed to be from some legitimate software vendor. Using a code signing certificate to digitally sign the code mitigates this danger, because that bad actor won't have access to the PKI of the organization to produce such a code signing certificate for that software vendor's domain. A code signing certificate provides proof of content integrity. Next we have a self-signed certificate, which is a certificate issued by the same entity that's using it; however, it does not have a certificate revocation list and cannot be validated or trusted. It's the cheapest form of internal certificate and can be placed on multiple servers, but you should only use self-signed certificates in test and development scenarios; they should never be used for production. Generally speaking, if you need actual trust, a self-signed certificate is not going to do the job; if you just need to simulate that trust with a certificate for test and development, it will do. Next is the computer (machine) certificate, which is used to identify a computer within a domain. Email certificates allow users to
digitally sign their emails to verify their identity through the attestation of a trusted third party known as a certificate authority, and they can also allow users to encrypt the entire contents: messages, attachments, etc. Then we have the third-party certificate, a certificate issued by a widely trusted external provider such as GoDaddy or DigiCert. This is strongly preferred for TLS on public-facing services like a company website because, as you saw in the demo, the root of trust for that widely trusted third party is already present on most devices and already trusted by virtually all organizations out there. Next we have the subject alternative name, or SAN, certificate,
which is an extension to the X.509 specification that allows users to specify additional host names for a single SSL or TLS certificate. It's standard practice for SSL/TLS certificates, and it's on its way to replacing the use of the common name. You can also insert other information into a SAN certificate, like an IP address, so we don't even have to use just names; we can use IP addresses as well. This enables support for FQDNs from multiple domains in a single certificate. So remember: with the wildcard certificate we could support multiple host names for the same domain; with a SAN certificate we can support FQDNs from multiple domains, and we can add IP addresses in there, so we can navigate to an IP address in a browser without even needing a name.
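To make the SAN idea concrete, here's a minimal sketch of building a certificate signing request (the CSR mentioned earlier) whose SAN extension carries FQDNs from two different domains plus an IP address. It assumes the third-party Python `cryptography` package, and the fabrikam.com name and the IP address are hypothetical illustrations, not anything from the lesson.

```python
# Minimal sketch: a CSR whose SAN covers multiple FQDNs and an IP address.
# Assumes the third-party "cryptography" package (pip install cryptography).
import ipaddress

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Key pair for the requesting server (2048-bit RSA, per the NIST guidance discussed later).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "web.contoso.com")]))
    .add_extension(
        x509.SubjectAlternativeName(
            [
                x509.DNSName("web.contoso.com"),
                x509.DNSName("mail.contoso.com"),
                x509.DNSName("shop.fabrikam.com"),                     # FQDN from a different domain (hypothetical)
                x509.IPAddress(ipaddress.ip_address("203.0.113.10")),  # hypothetical IP address entry
            ]
        ),
        critical=False,
    )
    .sign(key, hashes.SHA256())  # the CSR is signed with the requester's private key
)

# PEM-encoded CSR that would be submitted to the CA.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```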
Also be aware of certificate expiration. Certificates are valid for a limited period from the date of issuance, as specified on the certificate. The industry standard moves over time; current industry guidance, last I checked, caps the certificate lifetime from widely trusted authorities like DigiCert at 398 days, a little over one year. Will organizations cheat on that and issue for a little longer so they have to buy certificates less often? Yes, they will, particularly when buying a subject alternative name certificate from an external source that supports many names; those can get quite expensive, into the hundreds of dollars. So I do see folks cheat on the lifetime; it's not a crisis if you're managing it securely, but you need to balance cost
and security, to be sure. Next we'll take a look at encryption by level or scope. We'll start with file encryption, which operates at the individual file level, meaning files can have unique encryption keys. This is useful for files containing sensitive info, such as financial data, protected health information, or personally identifiable information (PHI and PII respectively). Then there's volume encryption, which targets a specific partition or volume within the physical drive; it's useful when different volumes need varying levels of protection. In the Windows or Linux world, think about the data volume versus the system volume: one where data lives, the other where the operating system lives. Then we have disk encryption, which automatically encrypts data when it is written to or read from the entire disk; this would be BitLocker on Windows or dm-crypt on Linux. You can see that when we look at this from the perspective of scope, the scope of encryption at the file level is very low and very granular, and the scope of encryption at the disk level is very high, we're encrypting everything, so basically
the scope is inversely proportional to the granularity. I called out partition or volume there; those are actually two separate concepts, and I want to touch on them to make sure you're clear in case something comes up on the exam. A partition represents a distinct section of storage on a disk; on Windows, the C: drive is typically a primary partition, a distinct physical section of the storage. A volume represents a logical division of a storage device; it represents a single accessible storage area, and a volume can span multiple partitions, or even disks, but it logically assembles one or more partitions into a unified storage area. I don't expect the exam is going to get too wound up around that detail, but I wanted to call it out just in case. So let's talk about drive encryption. We have full disk encryption, FDE for short, which is built into the Windows operating system as BitLocker, and BitLocker protects disks, volumes, and partitions. Then there's the self-encrypting drive, which is encryption built into the hardware of the drive itself: anything written to that drive is automatically stored in encrypted form, and a good self-encrypting drive should follow the Opal storage specification. We're protecting data at rest here. Full disk encryption under the hood uses a system's trusted platform module; the TPM
is on the motherboard and is used to store encryption keys, so that when a system boots it can compare keys and ensure the system has not been tampered with; we call this a hardware root of trust. When certificates are used for full disk encryption, they use a hardware root of trust that verifies the keys match before the secure boot process takes place. A TPM is a hardware root of trust. Now, I mentioned self-encrypting drives should use the Opal storage specification, which is the industry standard for self-encrypting drives. It's a hardware solution that outperforms software-based alternatives, and because these drives don't have the same vulnerabilities as software, they're generally considered more secure. They're solid state drives purchased already set up to encrypt data at rest, the encryption keys are stored on the hard drive controller, and they are immune to a cold boot attack. They're effective in protecting data on lost or stolen devices such as a laptop, because only the user and the vendor can decrypt
the data. There are a couple of other data-at-rest scenarios we should touch on. One is cloud storage encryption: your cloud service providers, your CSPs, like Microsoft Azure, Google, and Amazon Web Services, usually protect data at rest automatically, encrypting it before persisting it to managed disks, blob storage, file storage, or queue storage. Amazon went through years of grief because in the early going they didn't automatically encrypt data, didn't automatically protect it at rest, which led to some breaches of aging cloud storage that customers didn't get rid of in a timely fashion. Then we have transparent data encryption, which helps protect SQL databases and data warehouses against the threat of malicious activity, with real-time encryption and decryption of the database, backups, and transaction log files at rest, without requiring app changes. Notice I mentioned it's real-time encryption, with nearly zero performance impact, and you'll find it's available for multiple flavors of relational database management systems, from Microsoft SQL Server to MySQL to PostgreSQL; most have some form of transparent encryption. I may use that CSP acronym more than once in our sessions to refer to Azure, Google Cloud Platform, and Amazon Web Services, or any other public cloud provider in that vein. The syllabus
mentions transport or communication encryption; we're talking about data in transit. Data in transit is most often encrypted with TLS, protecting credit card details in a web transaction, for example, and while similar in function, TLS has largely replaced SSL, so when you see TLS and SSL used interchangeably, TLS is really what's typically being used there. TLS is common for encrypting a widespread variety of network communications, like VPNs as well. You'll also hear data in transit called data in motion; they're two ways of saying the same thing. You may also see mention of protecting data in use, which occurs when we launch an application like Microsoft Word or Adobe Acrobat: the app isn't running the data from the disk drive but in RAM, random access memory. This is volatile memory, meaning that should you power down the computer the contents are erased, but nonetheless, in some cases data in memory will be encrypted. One place that comes to mind is the Credential Guard feature in Windows, which encrypts your password hashes in memory so that if they're dumped they're not accessible. I want to revisit data protection in relational databases,
because we can go beyond encrypting at the database level: many relational databases support row-level or column-level encryption of fields within a record. This is commonly implemented within the database tier, though I would say it's also possible in the code of your front-end applications if you wanted to do it that way; we see masking done that way too. And to restate it briefly alongside the other relational database encryption options: transparent data encryption is full database-level encryption covering database files, logs, and backups; it requires no changes in the application, comes with virtually no performance impact, and is offered on most relational database management platforms (MySQL, Microsoft SQL Server, PostgreSQL, MariaDB), and it's usually available in PaaS versions of these services in the cloud as well. So let's
move on to symmetric and asymmetric encryption. Symmetric encryption relies heavily on the use of a shared secret key, and it lacks support for scalability, easy key distribution, and non-repudiation; when I say it lacks support for scalability to many users, I mean distributing that single shared key is challenging. Asymmetric encryption, which relies on public/private key pairs for communications between parties, supports scalability, easy key distribution, and non-repudiation. That doesn't mean one is better than the other; it just means their most useful purposes differ. With asymmetric keys, the public keys are shared amongst communicating parties and the private keys are kept secret. So when we're dealing with data: to encrypt a message we use the recipient's public key, and to decrypt a message you use your own private key. To validate a signature, other users use the sender's public key, so if you're the sender, they'll use your public key; each party in asymmetric encryption has both a private key and a public key. So how are asymmetric and symmetric encryption commonly used? Symmetric is typically used for bulk encryption, encrypting large amounts of data, because it can do so very fast with that single shared key. Asymmetric encryption is used for distribution of symmetric bulk encryption keys (that shared key we talked about), and it's commonly used in certificate services and key agreement. In that respect the two can be used together: symmetric algorithms can encrypt large amounts of data much faster than asymmetric, but an asymmetric algorithm allows us to distribute that shared key securely to large numbers of parties. I want to show you how those private and public key pairs are used in an
example scenario. Here we have Franco and Maria. Franco sends a message to Maria requesting her public key; Maria sends her public key to Franco; Franco uses Maria's public key to encrypt the message and sends it to her; Maria then uses her private key to decrypt the message. This could represent any number of transactions, any number of client application scenarios, but that's how the keys are used: everyone else can use your public key to encrypt a message, and you use your own private key to decrypt, which ensures anyone can send you an encrypted message but only you can decrypt it.
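Here is a minimal sketch of that Franco-and-Maria exchange in code, assuming the Python `cryptography` package (an illustrative choice; the exam does not require any particular library): Maria's public key encrypts, and only her private key decrypts.

```python
# Minimal sketch of asymmetric encryption with an RSA key pair.
# Assumes the third-party "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Maria generates her key pair and shares only the public key.
maria_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
maria_public = maria_private.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Franco encrypts with Maria's PUBLIC key...
ciphertext = maria_public.encrypt(b"Meet at noon", oaep)

# ...and only Maria's PRIVATE key can decrypt it.
plaintext = maria_private.decrypt(ciphertext, oaep)
assert plaintext == b"Meet at noon"
```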
Now let's look at some common encryption algorithms, starting with a few symmetric ones. We have the Advanced Encryption Standard, or AES as it's commonly called; it's the current industry gold standard, highly efficient, widely implemented, and it offers various key lengths from 128 to 256 bits, providing some flexibility in security levels. We have Triple DES (3DES), a variation of the Data Encryption Standard that applies the encryption three times; Triple DES is being phased out and replaced by AES where it hasn't been already. Other examples: there's Twofish, a finalist in the competition where AES was ultimately selected, known for its flexibility and security, and Blowfish, the predecessor to Twofish, also known for its strength and speed. A bit of trivia: Twofish and Blowfish were both written by Bruce Schneier of Schneier on Security, who's written some of the most popular books on security and encryption ever published. Reminder: symmetric algorithms are used for bulk data encryption. If I had to guess which of these algorithms would be most likely to be mentioned on the exam, I would say AES; it's widely used in the Microsoft ecosystem and a go-to in the US military and some very high-security operations with a 256-bit key, so I'd guess it'd be that one.
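As a concrete illustration of symmetric bulk encryption, here's a minimal sketch using AES-256 in GCM mode via the Python `cryptography` package; the mode and library are illustrative assumptions, but the key point stands: one shared key both encrypts and decrypts.

```python
# Minimal sketch of symmetric bulk encryption with AES-256-GCM.
# Assumes the third-party "cryptography" package.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the single shared secret key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique per message; never reuse with the same key
data = b"bulk data " * 1000                 # symmetric ciphers handle large payloads quickly

ciphertext = aesgcm.encrypt(nonce, data, None)
assert aesgcm.decrypt(nonce, ciphertext, None) == data
```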
So how about some asymmetric encryption algorithms? We have RSA, one of the oldest and most widely used asymmetric algorithms, named after its creators and often used for key exchange and digital signatures; its security relies on the difficulty of factoring the product of large prime numbers. Then we have elliptic curve cryptography, or ECC, a more modern approach using elliptic curves; it offers similar security levels to RSA but with smaller key sizes, and that is the key element to remember with ECC, since it makes ECC suitable for resource-constrained environments, think IoT. Then there's Diffie-Hellman, a key exchange protocol allowing two parties to establish a shared secret key over an insecure channel, and ElGamal, an algorithm based on the difficulty of the discrete logarithm problem, used for encryption and digital signatures. As for mention on the exam, I'd say any of these could come up; ElGamal would be the least likely. So to revisit our common uses: AES-256 would be a common symmetric encryption scenario, and on the asymmetric side we've got RSA, Diffie-Hellman, and elliptic curve (ECC). Now, a few cipher types you should be familiar with. First is the stream cipher,
which is a symmetric-key cipher where plaintext digits are combined with a pseudo-random cipher digit stream, also known as a keystream. The keystream is basically a sequence of pseudo-random bits or digits, depending on the system, generated by a cryptographic algorithm using a secret key and some initialization vector. What you really need to remember is that each plaintext digit is encrypted one at a time with the corresponding digit of the keystream to create a digit of the ciphertext, the encrypted data stream. So the plaintext is unencrypted data and the ciphertext is encrypted data. Then we have the block cipher, a method of encrypting text in which a cryptographic key and algorithm are applied to a block of data, for example 64 contiguous bits, all at once as a group rather than one bit at a time. A block cipher is generally considered to be more secure than a stream cipher.
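To illustrate the keystream idea (and only to illustrate it), here's a toy sketch in Python: a pseudo-random keystream is derived from a secret key and an initialization value, and each plaintext byte is XORed with the corresponding keystream byte. This is not a secure cipher; real systems use vetted stream ciphers such as ChaCha20.

```python
# Toy illustration only (NOT a secure cipher): a stream cipher combines each
# plaintext byte with the corresponding byte of a pseudo-random keystream.
import hashlib

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from a secret key and an IV."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_stream(data: bytes, key: bytes, iv: bytes) -> bytes:
    ks = keystream(key, iv, len(data))
    # Each plaintext byte is combined (XOR) with one keystream byte at a time.
    return bytes(p ^ k for p, k in zip(data, ks))

plaintext = b"attack at dawn"
ct = xor_stream(plaintext, key=b"shared secret", iv=b"\x00" * 8)
assert xor_stream(ct, key=b"shared secret", iv=b"\x00" * 8) == plaintext
```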
Next we have the substitution cipher, which uses the encryption algorithm to replace each character or bit of the plaintext message with a different character. You don't see these in active use; they're really historical at this point. Then there's the transposition cipher, which rearranges the order of plaintext letters according to a specific rule; the message itself is left unchanged, just transposed. Now let's shift gears and talk cryptographic key length. An effective way to increase the strength of an algorithm is to increase its key length; in fact, the relationship between key length and work factor is exponential, so a small increase in key length leads to a significant increase in the amount of work required to break it. RSA is the primary public-key cryptography algorithm used on the internet; it supports key sizes of 1024, 2048, and 4096 bits, and NIST recommends a minimum key length of 2048. On the symmetric side we have the gold standard, the Advanced Encryption Standard; the US government still requires AES-256 for top secret data. But remember, doubling the key length from 128 bits to 256 bits doesn't make the key twice as strong, it makes it 2^128 times as strong; it's all about the number of possible combinations.
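A quick worked comparison of the two key spaces makes that statement concrete:

\[
\frac{\text{AES-256 key space}}{\text{AES-128 key space}}
  = \frac{2^{256}}{2^{128}}
  = 2^{128} \approx 3.4 \times 10^{38}
\]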
Then we have static versus ephemeral keys, the two primary categories of asymmetric keys. Static keys are semi-permanent and stay the same over a long period of time; a certificate includes an embedded public key, which is a static key. Certificates have expiration dates, and systems continue to use those keys until the certificate expires; one to two years is a common certificate lifetime. RSA is an example of an algorithm that uses static keys, and keys can be invalidated before expiration via a certificate revocation list or the Online Certificate Status Protocol. Then we have ephemeral keys, which have very short lifetimes and are recreated for each session: an ephemeral key pair includes a private ephemeral key and a public ephemeral key, and the system uses the pair for a single session and then discards it. Some versions of Diffie-Hellman use ephemeral keys. Now we're going to step into the
TPM. This is a chip that resides on the motherboard of the device. It's multi-purpose, used for example for storage and management of keys used in full disk encryption solutions; it provides the operating system with access to keys but prevents drive removal and data access. In addition to full disk encryption, the TPM is leveraged by other platform security functions as well. Next is the hardware security module, or HSM, a physical computing device that safeguards and manages digital keys and performs encryption and decryption functions for digital signatures, strong authentication, and other cryptographic functions. It's like a TPM, but HSMs are often removable or external devices, whereas the TPM is a chip on the motherboard and it's going nowhere. Next we have the hardware root of trust, which is a line of defense against executing unauthorized firmware on a system; when certificates are used in full disk encryption, they use a hardware root of trust for key storage, and it verifies that the keys match before the secure boot process takes place. As you might already guess at this point, the trusted platform module and hardware security module are both implementations of a hardware root of trust. Next we have the key management system, or KMS. Your cloud service providers offer a cloud service
for centralized secure storage of, and access to, your application secrets, often called a vault. The name varies by cloud platform: Azure has Key Vault, AWS has its KMS offering, and Google Cloud Platform also has a KMS. In this context a secret is anything that you want to control access to; it could be API keys, passwords, certificates, tokens, or cryptographic keys. The service will typically offer programmatic access via API, and encryption of the secrets within is generally assumed; secrets and keys can generally be protected either by software or by FIPS-validated hardware. Then there's the secure enclave, which provides a secure and isolated area within a system or application for processing sensitive data; a secure enclave uses hardware-based security mechanisms to create an isolated, trusted execution environment, allowing sensitive data to be processed and stored securely even on a potentially insecure computing platform. Next, on obfuscation and de-identification: we have steganography, where a computer file, message, image, or video is concealed within another file, message, image, or video; attackers may hide info in this way to exfiltrate sensitive company data. Obfuscation technologies are sometimes called privacy-enhancing technologies, though they're not always used for privacy. There's tokenization, where sensitive data is replaced with a token; it's stateless, it's stronger than encryption, and the keys are not local. And then there's pseudonymization, which is
a de-identification procedure in which personally identifiable information fields within a data record are replaced by one or more artificial identifiers, or pseudonyms; reversal requires access to another data source. Then we have anonymization, the process of removing all relevant data so that it is impossible to identify the original subject or person; this is only effective if you do not need the identity data. If you want information about the person so you can establish trends over time and so forth, but you don't need to know the name of the person or any identifiers related to that person, you should be good. Next we have data minimization, where only the necessary data fields required to fulfill the specific stated purpose are collected, and retention is managed to meet regulations; this is a good practice, since less sensitive data means less cyber risk. And then we have data masking, which is when only partial data is left in a data field: for example, a credit card number may be shown as asterisks where we only see the last four digits. This is commonly implemented in the database tier, but it's also possible in the code of your front-end applications. Data masking is very common in the database tier, and when you get into the cloud with platform-as-a-service database offerings, they often have a data masking feature that will proactively recommend a masking strategy for you based on what the service sees in the database.
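A minimal sketch of what masking looks like in code, purely for illustration; in practice the masking is usually handled by the database or the platform feature just described.

```python
# Minimal sketch of data masking: only the last four digits of a card number
# survive; everything else is replaced before the value leaves the data tier.
def mask_card_number(pan: str, visible: int = 4) -> str:
    digits = [c for c in pan if c.isdigit()]
    masked = ["*" if i < len(digits) - visible else d for i, d in enumerate(digits)]
    return "".join(masked)

print(mask_card_number("4111 1111 1111 1234"))  # ************1234
```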
Next we have hashing. I think it helps to compare hashing to encryption to really appreciate the difference. Encryption is a two-way function: what is encrypted can be decrypted with the proper key. Hashing, on the other hand, is a one-way function that scrambles plaintext to produce a unique message digest, a hash, and there's no way
to reverse a hash if it's properly designed. A few common uses of hashing: verification of digital authenticity, file integrity monitoring, and validation of data transfers. A file will have a known hash; if the file has been changed, that hash will be different, at which point, in a file integrity monitoring scenario, we know that something has changed. And when we're transferring data, when we're transferring a file, we can generate a hash of the file before we transfer and another hash after and compare the two; if they match, the data is intact and its integrity remains.
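Here's a minimal sketch of that file-transfer check using SHA-256 from Python's standard `hashlib` module; the file names are hypothetical.

```python
# Minimal sketch of file integrity checking with a SHA-256 hash.
import hashlib

def sha256_of_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files in chunks
            h.update(chunk)
    return h.hexdigest()

# Hypothetical workflow: hash before transfer, hash after, compare the two.
before = sha256_of_file("report.pdf")       # on the sending side
after = sha256_of_file("report_copy.pdf")   # on the receiving side
print("integrity intact" if before == after else "file was altered")
```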
So let's add those common uses of hashing to the list we used previously for symmetric and asymmetric encryption; you can see from the list how these work together. We talked about asymmetric encryption being used to securely transmit the shared key for our symmetric algorithm; asymmetric algorithms are also used for digital signatures, and a hash function helps verify a digital signature, so again, these technologies work together. Just to wrap up hashing: a good hash function has five requirements. Hash functions
must allow input of any length; they must provide fixed-length output, so no matter the length of the input, the output will be the same size; they need to make it relatively easy to compute the hash for any input; they must provide one-way functionality, meaning once the hash is generated it cannot be reversed; and a hash function should be collision-free, meaning no two inputs should ever generate the same output. It's number five that is precisely the reason MD5 is limited in the scenarios where it's used, because it is at some level prone to collisions.
For ease of reference as you prepare for the exam, I've put the differences between algorithm types into a table for you:

| | Hashing | Symmetric | Asymmetric |
| --- | --- | --- | --- |
| Number of keys | None (one-way function) | One shared key, used by any number of parties | Two keys (a public/private pair) per party |
| Recommended minimum key length | 256 bits | 128 bits | 2048 bits (NIST recommendation) |
| Common example | SHA family | AES | RSA |
| Speed for bulk data | Fast (hash generation must be quick) | Very fast | Slow |
| Effect of key compromise | No key to compromise | Sender and receiver are both compromised | Only the owner of the compromised private key is affected |
| Key exchange | Not applicable | Challenging (the shared key must be transferred securely to every party) | Easy (only public keys are shared) |

Asymmetric is going to be the most complex of the lot. On the effect of key compromise: with hashing there is no key, it's a one-way function; with symmetric encryption, if our key is compromised, everybody in the equation is compromised, sender and receiver. The only key you can lose in asymmetric is your private key, and the one who loses in that case is the owner of that private key: if you have 10 parties in a conversation and party number 10 loses their private key, or it's compromised, everybody else is perfectly safe, excepting the holder of that private key, user number 10. Key exchange in the symmetric scenario is challenging because secure transfer of that key to multiple parties is the challenge, and that's easy with asymmetric. I also listed examples of each of these algorithm families we talked through: for the hash family you're most frequently looking at the Secure Hash Algorithm (SHA) family, and remember, on symmetric it's really 128 bits or more, with 256-bit keys for some sensitive data types; I believe I mentioned that in the military, for top secret data, it's a 256-bit key. This is always evolving, though, and it's eventually going to be affected by quantum computing, so this will all change eventually. Hopefully this makes for a convenient reference page as you're preparing for the exam. Next we have the process of salting,
which involves the use of cryptographic salts. Attackers may use rainbow tables, which contain pre-computed values of cryptographic hash functions, to identify commonly used passwords; a rainbow table is a table of password hashes. A salt is random data used as an additional input to that one-way hash function for the password or passphrase. Adding salt to passwords before hashing reduces the effectiveness of rainbow table attacks, because the expected output of the hash function for a common password is different once a random value is added. Even if every user in our environment used a common password, a rainbow table wouldn't help, because the salt changes the output of the hash function for every user.
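A minimal sketch of salting with Python's standard `hashlib`, purely illustrative; real systems should use a purpose-built password hashing scheme such as PBKDF2 or bcrypt (PBKDF2 appears in the key stretching sketch later on).

```python
# Minimal sketch of salting: two users with the SAME password end up with
# different stored hashes, so a precomputed rainbow table no longer matches.
import hashlib
import os

def salted_hash(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                                   # random per-user salt
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest                                     # store both

salt_a, hash_a = salted_hash("Winter2024!")
salt_b, hash_b = salted_hash("Winter2024!")
print(hash_a != hash_b)   # True: identical passwords, different stored hashes
```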
Next we have digital signatures, which are similar in concept to the handwritten signatures on printed documents that identify individuals, but they provide more security benefits. A digital signature is an encrypted hash of a message, encrypted with the sender's private key. In a signed email scenario, a digital signature provides three benefits. Authentication: it positively identifies the sender of the email. Non-repudiation: ownership of the digital signature secret key is bound to a specific user, so the sender cannot later deny sending the message; this is sometimes required with online transactions. Integrity: it provides assurance that the message has not been modified or corrupted, so recipients know the message was not altered in transit.
For good measure, I want to touch on the Digital Signature Standard, or DSS. The Digital Signature Standard uses the SHA-2 and SHA-3 message digest functions (hashing algorithms) to hash the message; that creates a fingerprint of sorts, which is good for integrity, and it also means less work for the asymmetric algorithms that actually create the digital signature. DSS works in conjunction with one of three asymmetric algorithms: RSA, DSA, or ECDSA.
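Here's a minimal sketch of signing and verifying, assuming the Python `cryptography` package and RSA-PSS with SHA-256 as illustrative choices (DSS itself allows RSA, DSA, or ECDSA): the message is hashed, signed with the sender's private key, and verified with the sender's public key.

```python
# Minimal sketch of a digital signature: sign with the sender's PRIVATE key,
# verify with the sender's PUBLIC key. Assumes the "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()

message = b"Wire 10,000 to account 12345"   # hypothetical message body
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

signature = sender_private.sign(message, pss, hashes.SHA256())

try:
    sender_public.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: authentic, intact, non-repudiable")
except InvalidSignature:
    print("signature check failed: message altered or not from this sender")
```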
I don't expect the exam to get deep into DSS specifics, but you've got your introduction now just in case. So let's talk key stretching, and I want to start with key length. Some cipher suites are easier to crack than others; larger keys tend to be more secure. Key stretching refers to processes used to take a key that may be weak and make it stronger by making it longer and more random; a longer key has more combinations a brute-force attack has to go through to crack it. Since 2015, NIST has recommended a minimum 2048-bit key for RSA, for example, and that will change over time as computing power grows.
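As a concrete example of key stretching, here's a minimal sketch using PBKDF2 from Python's standard `hashlib`: a weak passphrase plus a salt is run through many hash iterations to derive a stronger 256-bit key. The iteration count is an illustrative value, not official guidance.

```python
# Minimal sketch of key stretching with PBKDF2 (standard library).
import hashlib
import os

passphrase = b"correct horse"   # weak input keying material
salt = os.urandom(16)

# Many iterations make brute-forcing each guess far more expensive.
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)
print(len(key) * 8, "bit derived key")   # 256 bit derived key
```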
Next we have blockchain, which was originally the technology that powered Bitcoin, but it has broader uses today. It's a
distributed public ledger that can be used to store financial, medical, or other transactions. Anyone is free to join and participate, and it does not use intermediaries such as banks and financial institutions. Data is chained together, with a block of data holding both the hash for that block and the hash of the preceding block. To create a new block on the chain, the computer that wishes to add the block solves a cryptographic puzzle and sends the solution to the other participants for verification; this is proof of work. Next we have the open public ledger, and I think the easiest way to understand an open public ledger is to compare it to blockchain. Decentralization: blockchain is decentralized, distributed across a peer-to-peer network with no central authority; an open public ledger, on the other hand, can be centralized and maintained by a single entity. Immutability: blockchain data is immutable and cryptographically secured, and once data is added to the blockchain it is extremely difficult to alter, whereas data on a public ledger can be changed more easily. And there's the matter of validation: blockchain uses consensus mechanisms like proof of work or proof of stake to validate new data added to the chain, while public ledgers rely on the integrity of the central authority; the blockchain is also typically fully transparent.
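To make the chaining idea concrete, here's a minimal sketch of linking blocks by hash; it omits proof of work and the peer-to-peer network entirely, and the transactions are made up for illustration.

```python
# Minimal sketch of chaining blocks by hash: each block stores its own hash
# and the hash of the preceding block, so altering any earlier block breaks
# every hash that follows.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = []
previous = "0" * 64                                  # genesis placeholder
for tx in ["A pays B 5", "B pays C 2", "C pays A 1"]:
    block = {"transaction": tx, "previous_hash": previous}
    previous = block_hash(block)
    chain.append({**block, "hash": previous})

# Tamper with the first block and the stored hashes no longer line up.
chain[0]["transaction"] = "A pays B 500"
recomputed = block_hash({k: chain[0][k] for k in ("transaction", "previous_hash")})
print(recomputed == chain[0]["hash"])   # False: the chain is broken
```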
Since section 1.4 is focused on appropriate cryptographic solutions, I have a bit of a bonus section at the end here for you, where I'm going to take you through some common use cases and limitations that will give you context for applying these technologies on the exam. Let's talk through a few common scenarios for specific cryptographic choices. Low-power devices: these devices often use ECC (elliptic curve cryptography) for encryption, as it uses a small key; IoT devices and the little Pi-type devices often don't have the processing power for conventional encryption, so ECC fits the bill for that use case. Low latency: this means encryption and decryption should not take a long time; specialized encryption hardware is a common answer in this scenario, and a VPN concentrator or encryption accelerator cards can improve efficiency. High resiliency: use the most secure encryption algorithm practical, to prevent the encryption key from being cracked by attackers. Device, application, or service compatibility may also influence your decisions; one scenario that comes to mind was the key length on the public/private key pairs for our certificates with certain legacy network devices, which only supported keys shorter than what's recommended by NIST, because we had to accommodate those legacy devices. Supporting confidentiality: encryption should be implemented for the exchange of any sensitive data, and in a way that ensures only authorized parties can access it. Supporting integrity: we want assurance that file data has not been tampered with and communications are not altered in transit, so we can use a file hash to verify integrity. Supporting obfuscation: obfuscation is commonly used in source code or with data to ensure it can't be read by anyone who steals it; here steganography, tokenization, and data masking can all be used to obscure data. Supporting authentication: we know single-factor username and password is not considered secure, as theft of the password leads to compromise; that's where MFA (multifactor authentication) for user authentication and certificate-based authentication for devices give us a stronger posture. Supporting non-repudiation: if you sign an email with your private key, you cannot deny it was you, as there is only one private key and it's tied to you; non-repudiation is important in any legally binding transaction, where we need to ensure neither party can deny having consented to the transaction. Now, to touch briefly on limitations. Speed: application and hardware
have to be able to keep pace with the selected encryption, which is why we talked about ECC being such a great fit for IoT scenarios due to its smaller key size. Size: if we're encrypting 16 bytes of data with a block cipher, the encrypted information is also 16 bytes; that overhead has to be considered in resource planning, and we need enough memory, storage, and network to support the result. Weak keys: we know larger keys are generally stronger and thus more difficult to break, so we need to find the balance between security, compatibility with our devices, and capacity. In that network hardware scenario I mentioned, NIST recommends the 2048-bit key in our certificate scenario, but we had legacy network devices that only supported 1024, so we might have to make a decision between using a weaker key and replacing that legacy hardware. Time: encryption and hashing take time; larger amounts of data and asymmetric encryption take more time than small data and symmetric encryption, so your selections need to match the time constraints of your transactions. Longevity: consider how long the selected encryption algorithms can be used; older algorithms will generally be retired sooner, as will scenarios where you selected a smaller key size, because you'll be impacted by growing compute power, and by quantum computing, sooner. Predictability: cryptography
relies on randomization; random number generation that can't be easily predicted is crucial for any type of cryptography. Reuse: using the same key is commonly seen in a number of encryption mechanisms, and if an attacker gains access to the key, they can decrypt any data encrypted with it; while changing those keys out frequently is an option, reuse may call for a stronger key than we would otherwise need. Entropy: a measure of the randomness or diversity of a data-generating function; data with full entropy is completely random, with no meaningful patterns, and cryptography relies on that randomness. And always consider your resource versus security constraints: the more secure the encryption used and the longer the key length, the more processing power and memory your server or other device will need; it just requires a balance
between algorithm and hardware selections. And that's a wrap on Domain 1 of the Security Plus exam cram series, 2024 edition. I hope you're getting value from the series; if you have any questions, be sure to ping me in the comments below the video or directly on LinkedIn, and I'll join you back here soon for Domain 2. Until next time!