Who is responsible for the protection of information when it is shared with or provided to other organizations?
Systems owner
Authorizing Official (AO)
Information owner
Security officer
The information owner is the person who has the authority and responsibility for the information within an Information System (IS). The information owner is responsible for the protection of information when it is shared with or provided to other organizations, such as by defining the classification, sensitivity, retention, and disposal of the information, as well as by approving or denying the access requests and periodically reviewing the access rights. The system owner, the authorizing official, and the security officer are not responsible for the protection of information when it is shared with or provided to other organizations, although they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
A minimal implementation of endpoint security includes which of the following?
Trusted platforms
Host-based firewalls
Token-based authentication
Wireless Access Points (AP)
A minimal implementation of endpoint security includes host-based firewalls. Endpoint security is the practice of protecting the devices that connect to a network, such as laptops, smartphones, tablets, or servers, from malicious attacks or unauthorized access. Endpoint security can involve various technologies and techniques, such as antivirus, encryption, authentication, patch management, or device control. Host-based firewalls are one of the basic and essential components of endpoint security, as they provide network-level protection for the individual devices. Host-based firewalls are software applications that monitor and filter the incoming and outgoing network traffic on a device, based on a set of rules or policies. Host-based firewalls can prevent or mitigate some types of attacks, such as denial-of-service, port scanning, or unauthorized connections, by blocking or allowing the packets that match or violate the firewall rules. Host-based firewalls can also provide some benefits for endpoint security, such as enhancing the visibility and the auditability of the network activities, enforcing the compliance and the consistency of the firewall policies, and reducing the reliance and the burden on the network-based firewalls. Trusted platforms, token-based authentication, and wireless access points (AP) are not the components that are included in a minimal implementation of endpoint security, although they may be related or useful technologies. Trusted platforms are hardware or software components that provide a secure and trustworthy environment for the execution of applications or processes on a device. Trusted platforms can involve various mechanisms, such as trusted platform modules (TPM), secure boot, or trusted execution technology (TXT). Trusted platforms can provide some benefits for endpoint security, such as enhancing the confidentiality and integrity of the data and the code, preventing unauthorized modifications or tampering, and enabling remote attestation or verification. However, trusted platforms are not a minimal or essential component of endpoint security, as they are not widely available or supported on all types of devices, and they may not be compatible or interoperable with some applications or processes. Token-based authentication is a technique that uses a physical or logical device, such as a smart card, a one-time password generator, or a mobile app, to generate or store a credential that is used to verify the identity of the user who accesses a network or a system. Token-based authentication can provide some benefits for endpoint security, such as enhancing the security and reliability of the authentication process, preventing password theft or reuse, and enabling multi-factor authentication (MFA). However, token-based authentication is not a minimal or essential component of endpoint security, as it does not provide protection for the device itself, but only for the user access credentials, and it may require additional infrastructure or support to implement and manage. Wireless access points (AP) are hardware devices that allow wireless devices, such as laptops, smartphones, or tablets, to connect to a wired network, such as the Internet or a local area network (LAN). Wireless access points (AP) can provide some benefits for endpoint security, such as extending the network coverage and accessibility, supporting the encryption and authentication mechanisms, and enabling the segmentation and isolation of the wireless network. 
However, wireless access points (AP) are not a component of endpoint security, as they are not installed or configured on the individual devices, but on the network infrastructure, and they may introduce some security risks, such as signal interception, rogue access points, or unauthorized connections.
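As a rough illustration of the rule-based filtering described above, here is a minimal sketch of first-match packet evaluation in Python; the rule format, field names, and the rules themselves are hypothetical simplifications, not any particular firewall product's configuration language.

```python
# Minimal sketch of host-based firewall rule matching (illustrative only;
# real host firewalls such as iptables or Windows Defender Firewall are
# far more capable). Rules are evaluated in order; first match wins.

RULES = [
    # (direction, protocol, port, action) -- hypothetical rule format
    ("inbound",  "tcp", 22,   "deny"),   # block inbound SSH
    ("inbound",  "tcp", 443,  "allow"),  # allow inbound HTTPS
    ("outbound", "tcp", None, "allow"),  # allow all outbound TCP
]

def evaluate(direction: str, protocol: str, port: int) -> str:
    """Return the action for a packet, defaulting to deny (fail-safe)."""
    for rule_dir, rule_proto, rule_port, action in RULES:
        if (rule_dir == direction
                and rule_proto == protocol
                and rule_port in (None, port)):
            return action
    return "deny"  # default-deny posture

print(evaluate("inbound", "tcp", 22))    # deny
print(evaluate("inbound", "tcp", 443))   # allow
print(evaluate("inbound", "udp", 53))    # deny (no matching rule)
```

The default-deny return illustrates the fail-safe posture a host-based firewall should take when no rule matches.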
Which of the following mechanisms will BEST prevent a Cross-Site Request Forgery (CSRF) attack?
parameterized database queries
whitelist input values
synchronized session tokens
use strong ciphers
The best mechanism to prevent a Cross-Site Request Forgery (CSRF) attack is to use synchronized session tokens. A CSRF attack is a type of web application vulnerability that exploits the trust that a site has in a user’s browser. A CSRF attack occurs when a malicious site, email, or link tricks a user’s browser into sending a forged request to a vulnerable site where the user is already authenticated. The vulnerable site cannot distinguish between the legitimate and the forged requests, and may perform an unwanted action on behalf of the user, such as changing a password, transferring funds, or deleting data. Synchronized session tokens prevent CSRF attacks by adding to each request a random, unique value that is generated by the server and verified by the server before the request is processed. The token is usually stored in a hidden form field or a custom HTTP header, and is tied to the user’s session. The token ensures that the request originates from the same site that issued it, and not from a malicious site. Synchronized session tokens are also known as CSRF tokens, anti-CSRF tokens, or state tokens. Parameterized database queries, whitelisted input values, and strong ciphers are not mechanisms to prevent CSRF attacks, although they are useful against other types of web application vulnerabilities. Parameterized database queries prevent SQL injection attacks by using placeholders or parameters for user input, instead of concatenating or embedding user input directly into the SQL query; they ensure that the user input is treated as data and not as part of the SQL command. Whitelisting input values prevents input validation attacks by allowing only a predefined set of values or characters for user input, instead of rejecting or filtering out unwanted or malicious values or characters; it ensures that the user input conforms to the expected format and type. Using strong ciphers protects against encryption attacks by employing cryptographic algorithms and keys that are resistant to brute force, cryptanalysis, or other attacks; strong ciphers ensure that the encrypted data remains confidential, authentic, and intact.
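To make the synchronizer token pattern concrete, the sketch below generates a per-session token and verifies it on a state-changing request; the session store and function names are hypothetical, and a real application would normally rely on its web framework's built-in CSRF protection rather than hand-rolling this.

```python
import hmac
import secrets

# Hypothetical in-memory session store; a real application would use
# its framework's server-side session mechanism.
sessions = {}

def issue_csrf_token(session_id: str) -> str:
    """Generate a random token, tie it to the session, and return it
    for embedding in a hidden form field or custom HTTP header."""
    token = secrets.token_urlsafe(32)
    sessions[session_id] = token
    return token

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    """Reject the request unless the submitted token matches the one
    stored for this session (constant-time comparison)."""
    expected = sessions.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted)

token = issue_csrf_token("session-123")
assert verify_csrf_token("session-123", token)         # legitimate request
assert not verify_csrf_token("session-123", "forged")  # forged request fails
```

Because a malicious site cannot read the token out of the victim's page, its forged request fails verification, which is exactly the property that defeats CSRF.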
When developing a business case for updating a security program, the security program owner MUST do which of the following?
Identify relevant metrics
Prepare performance test reports
Obtain resources for the security program
Interview executive management
When developing a business case for updating a security program, the security program owner must identify relevant metrics that can help to measure and evaluate the performance and the effectiveness of the security program, as well as to justify and support the investment in and the return of the security program. A business case is a document or a presentation that provides the rationale or the argument for initiating or continuing a project or a program, such as a security program, by analyzing and comparing the costs and the benefits, the risks and the opportunities, and the alternatives and the recommendations of the project or the program. A business case can provide some benefits for security, such as enhancing the visibility and the accountability of the security program, preventing or detecting any unauthorized or improper activities or changes, and supporting the audit and the compliance activities. A business case involves various elements and steps, from analyzing costs, benefits, risks, and alternatives to identifying the metrics that will demonstrate the program’s value.
Identifying relevant metrics is a key step in developing a business case for updating a security program, as it can help to measure and evaluate the performance and the effectiveness of the security program, as well as to justify and support the investment in and the return of the security program. Metrics are measures or indicators that quantify or qualify the attributes or the outcomes of a process or an activity, such as the security program, and that provide the information or the feedback needed for decision making or improvement. Metrics can provide some benefits for security, such as enhancing the accuracy and the reliability of the security program, preventing or detecting fraud or errors, and supporting the audit and the compliance activities. Identifying relevant metrics involves tasks such as selecting measurable indicators of the program’s performance and tying them to the costs, benefits, and risks analyzed in the business case.
Preparing performance test reports, obtaining resources for the security program, and interviewing executive management are not the tasks or duties that the security program owner must do when developing a business case for updating a security program, although they may be related or possible tasks or duties. Preparing performance test reports is a task or a technique that can be used by the security program owner, the security program team, or the security program auditor, to verify or validate the functionality and the quality of the security program, according to the standards and the criteria of the security program, and to detect and report any errors, bugs, or vulnerabilities in the security program. Obtaining resources for the security program is a task or a technique that can be used by the security program owner, the security program sponsor, or the security program manager, to acquire or allocate the necessary or the sufficient resources for the security program, such as the financial, human, or technical resources, and to manage or optimize the use or the distribution of the resources for the security program. Interviewing executive management is a task or a technique that can be used by the security program owner, the security program team, or the security program auditor, to collect and analyze the information and the feedback about the security program, from the executive management, who are the primary users or recipients of the security program, and who have the authority and the accountability to implement or execute the security program.
Who is accountable for the information within an Information System (IS)?
Security manager
System owner
Data owner
Data processor
The data owner is the person who has the authority and responsibility for the information within an Information System (IS). The data owner is accountable for the security, quality, and integrity of the data, as well as for defining the classification, sensitivity, retention, and disposal of the data. The data owner must also approve or deny the access requests and periodically review the access rights. The security manager, the system owner, and the data processor are not accountable for the information within an IS, but they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Mandatory Access Controls (MAC) are based on:
security classification and security clearance
data segmentation and data classification
data labels and user access permissions
user roles and data encryption
Mandatory Access Controls (MAC) are based on security classification and security clearance. MAC is a type of access control model that assigns permissions to subjects and objects based on their security labels, which indicate their level of sensitivity or trustworthiness. MAC is enforced by the system or the network, rather than by the owner or the creator of the object, and it cannot be modified or overridden by the subjects. MAC can provide some benefits for security, such as enhancing the confidentiality and the integrity of the data, preventing unauthorized access or disclosure, and supporting the audit and compliance activities. MAC is commonly used in military or government environments, where the data is classified according to its level of sensitivity, such as top secret, secret, confidential, or unclassified. The subjects are granted security clearance based on their level of trustworthiness, such as their background, their role, or their need to know. The subjects can only access the objects that have the same or lower security classification than their security clearance, and the objects can only be accessed by the subjects that have the same or higher security clearance than their security classification. This is based on the concept of no read up and no write down, which requires that a subject can only read data of lower or equal sensitivity level, and can only write data of higher or equal sensitivity level. Data segmentation and data classification, data labels and user access permissions, and user roles and data encryption are not the bases of MAC, although they may be related or useful concepts or techniques. Data segmentation and data classification are techniques that involve dividing and organizing the data into smaller and more manageable units, and assigning them different categories or levels based on their characteristics or requirements, such as their type, their value, their sensitivity, or their usage. Data segmentation and data classification can provide some benefits for security, such as enhancing the visibility and the control of the data, facilitating the implementation and the enforcement of the security policies and controls, and supporting the audit and compliance activities. However, data segmentation and data classification are not the bases of MAC, as they are not the same as security classification and security clearance, and they can be used with other access control models, such as discretionary access control (DAC) or role-based access control (RBAC). Data labels and user access permissions are concepts that involve attaching metadata or tags to the data and the users, and specifying the rules or the criteria for accessing the data and the users. Data labels and user access permissions can provide some benefits for security, such as enhancing the identification and the authentication of the data and the users, facilitating the implementation and the enforcement of the security policies and controls, and supporting the audit and compliance activities. However, data labels and user access permissions are not the bases of MAC, as they are not the same as security classification and security clearance, and they can be used with other access control models, such as DAC or RBAC. User roles and data encryption are techniques that involve defining and assigning the functions or the responsibilities of the users, and transforming the data into an unreadable form that can only be accessed by authorized parties who possess the correct key. 
User roles and data encryption can provide some benefits for security, such as enhancing the authorization and the confidentiality of the data and the users, facilitating the implementation and the enforcement of the security policies and controls, and supporting the audit and compliance activities. However, user roles and data encryption are not the bases of MAC, as they are not the same as security classification and security clearance, and they can be used with other access control models, such as DAC or RBAC.
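The no-read-up / no-write-down rules described above can be expressed in a few lines of code. The sketch below is a simplified Bell-LaPadula style model with hypothetical level names; real MAC implementations also evaluate compartments and categories, not just hierarchical levels.

```python
# Simplified Bell-LaPadula style checks (illustrative only).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(clearance: str, classification: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[clearance] >= LEVELS[classification]

def can_write(clearance: str, classification: str) -> bool:
    """Star property: no write down."""
    return LEVELS[clearance] <= LEVELS[classification]

print(can_read("secret", "confidential"))   # True: reading down is allowed
print(can_read("secret", "top secret"))     # False: no read up
print(can_write("secret", "confidential"))  # False: no write down
print(can_write("secret", "top secret"))    # True: writing up is allowed
```

The key point the sketch makes is that the decision is a pure comparison of labels enforced by the system; no user or owner input appears anywhere in the check.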
The security accreditation task of the System Development Life Cycle (SDLC) process is completed at the end of which phase?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The security accreditation task of the System Development Life Cycle (SDLC) process is completed at the end of the system implementation phase. The SDLC is a framework that describes the stages and activities involved in the development, deployment, and maintenance of a system. The SDLC typically consists of the following phases: system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal. The security accreditation task is the process of formally authorizing a system to operate in a specific environment, based on the security requirements, controls, and risks. The security accreditation task is part of the security certification and accreditation (C&A) process, which also includes the security certification task, which is the process of technically evaluating and testing the security controls and functionality of a system. The security accreditation task is completed at the end of the system implementation phase, which is the phase where the system is installed, configured, integrated, and tested in the target environment. The security accreditation task involves reviewing the security certification results and documentation, such as the security plan, the security assessment report, and the plan of action and milestones, and making a risk-based decision to grant, deny, or conditionally grant the authorization to operate (ATO) the system. The security accreditation task is usually performed by a senior official, such as the authorizing official (AO) or the designated approving authority (DAA), who has the authority and responsibility to accept the security risks and approve the system operation. The security accreditation task is not completed at the end of the system acquisition and development, system operations and maintenance, or system initiation phases. The system acquisition and development phase is the phase where the system requirements, design, and development are defined and executed, and the security controls are selected and implemented. The system operations and maintenance phase is the phase where the system is used and supported in the operational environment, and the security controls are monitored and updated. The system initiation phase is the phase where the system concept, scope, and objectives are established, and the security categorization and planning are performed.
What is the FIRST step in establishing an information security program?
Establish an information security policy.
Identify factors affecting information security.
Establish baseline security controls.
Identify critical security infrastructure.
The first step in establishing an information security program is to establish an information security policy. An information security policy is a document that defines the objectives, scope, principles, and responsibilities of the information security program. An information security policy provides the foundation and direction for the information security program, as well as the basis for the development and implementation of the information security standards, procedures, and guidelines. An information security policy should be approved and supported by the senior management, and communicated and enforced across the organization. Identifying factors affecting information security, establishing baseline security controls, and identifying critical security infrastructure are not the first steps in establishing an information security program, but they may be part of the subsequent steps, such as the risk assessment, risk mitigation, or risk monitoring. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 22; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 14.
What can happen when an Intrusion Detection System (IDS) is installed inside a firewall-protected internal network?
The IDS can detect failed administrator logon attempts from servers.
The IDS can increase the number of packets to analyze.
The firewall can increase the number of packets to analyze.
The firewall can detect failed administrator login attempts from servers.
An Intrusion Detection System (IDS) is a monitoring system that detects suspicious activities and generates alerts when they are detected. An IDS can be installed inside a firewall-protected internal network to monitor the traffic within the network and identify any potential threats or anomalies. One of the scenarios that an IDS can detect is failed administrator logon attempts from servers. This could indicate that an attacker has compromised a server and is trying to escalate privileges or access sensitive data. An IDS can alert the security team of such attempts and help them to investigate and respond to the incident. The other options are not valid consequences of installing an IDS inside a firewall-protected internal network. An IDS does not increase the number of packets to analyze, as it only passively observes the traffic that is already flowing in the network. An IDS does not affect the firewall’s functionality or performance, as it operates independently from the firewall. An IDS does not enable the firewall to detect failed administrator login attempts from servers, as the firewall is not designed to inspect the content or the behavior of the traffic, but only to filter it based on predefined rules. References: Intrusion Detection System (IDS) - GeeksforGeeks; Exploring Firewalls & Intrusion Detection Systems in Network Security ….
What is the expected outcome of security awareness in support of a security awareness program?
Awareness activities should be used to focus on security concerns and respond to those concerns accordingly.
Awareness is not an activity or part of the training but rather a state of persistence to support the program
Awareness is training. The purpose of awareness presentations is to broaden attention of security.
Awareness is not training. The purpose of awareness presentation is simply to focus attention on security.
The expected outcome of security awareness in support of a security awareness program is that awareness is not training, but the purpose of awareness presentation is simply to focus attention on security. A security awareness program is a set of activities and initiatives that aim to raise the awareness and understanding of the security policies, standards, procedures, and guidelines among the employees, contractors, partners, or customers of an organization. A security awareness program can provide some benefits for security, such as improving the knowledge and the skills of the parties, changing the attitudes and the behaviors of the parties, and empowering the parties to make informed and secure decisions regarding the security activities. A security awareness program can involve various methods and techniques, such as posters, newsletters, emails, videos, quizzes, games, or rewards. Security awareness is not training, but the purpose of awareness presentation is simply to focus attention on security. Security awareness is the state or condition of being aware or conscious of the security issues and incidents, and the importance and implications of security. Security awareness is not the same as training, as it does not aim to teach or instruct the parties on how to perform specific tasks or functions related to security, but rather to inform and remind the parties of the security policies, standards, procedures, and guidelines, and their roles and responsibilities in complying and supporting them. The purpose of awareness presentation is simply to focus attention on security, as it does not provide detailed or comprehensive information or guidance on security, but rather to highlight or emphasize the key or relevant points or messages of security, and to motivate or persuade the parties to pay attention and care about security. Awareness activities should be used to focus on security concerns and respond to those concerns accordingly, awareness is not an activity or part of the training but rather a state of persistence to support the program, and awareness is training, the purpose of awareness presentations is to broaden attention of security are not the expected outcomes of security awareness in support of a security awareness program, although they may be related or possible statements. Awareness activities should be used to focus on security concerns and respond to those concerns accordingly is a statement that describes one of the possible objectives or functions of awareness activities, but it is not the expected outcome of security awareness, as it does not define or differentiate security awareness from training, and it does not specify the purpose of awareness presentation. Awareness is not an activity or part of the training but rather a state of persistence to support the program is a statement that partially defines security awareness, but it is not the expected outcome of security awareness, as it does not differentiate security awareness from training, and it does not specify the purpose of awareness presentation. Awareness is training, the purpose of awareness presentations is to broaden attention of security is a statement that contradicts the definition of security awareness, as it confuses security awareness with training, and it does not specify the purpose of awareness presentation.
Transport Layer Security (TLS) provides which of the following capabilities for a remote access server?
Transport layer handshake compression
Application layer negotiation
Peer identity authentication
Digital certificate revocation
Transport Layer Security (TLS) provides peer identity authentication as one of its capabilities for a remote access server. TLS is a cryptographic protocol that provides secure communication over a network. It operates at the transport layer of the OSI model, between the application layer and the network layer. TLS uses asymmetric encryption to establish a secure session key between the client and the server, and then uses symmetric encryption to encrypt the data exchanged during the session. TLS also uses digital certificates to verify the identity of the client and the server, and to prevent impersonation or spoofing attacks. This process is known as peer identity authentication, and it ensures that the client and the server are communicating with the intended parties and not with an attacker. TLS also provides other capabilities for a remote access server, such as data integrity, confidentiality, and forward secrecy. References: Enable TLS 1.2 on servers - Configuration Manager; How to Secure Remote Desktop Connection with TLS 1.2. - Microsoft Q&A; Enable remote access from intranet with TLS/SSL certificate (Advanced …
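To illustrate peer identity authentication in practice, the sketch below uses Python's standard ssl module to connect to a server, validate its certificate chain against trusted CAs, and check that the certificate matches the host name; the host name is a placeholder.

```python
import socket
import ssl

HOSTNAME = "example.com"  # placeholder host

# create_default_context() enables certificate verification and
# host name checking, i.e., peer identity authentication.
context = ssl.create_default_context()

with socket.create_connection((HOSTNAME, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        # The handshake has already verified the server's certificate
        # chain and host name; a failure raises ssl.SSLError instead.
        print("Negotiated protocol:", tls.version())
        print("Peer certificate subject:", tls.getpeercert()["subject"])
```

If an attacker presents a certificate that does not chain to a trusted CA or does not match the host name, the handshake aborts, which is the spoofing protection described above.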
Which security access policy contains fixed security attributes that are used by the system to determine a user’s access to a file or object?
Mandatory Access Control (MAC)
Access Control List (ACL)
Discretionary Access Control (DAC)
Authorized user control
The security access policy that contains fixed security attributes that are used by the system to determine a user’s access to a file or object is Mandatory Access Control (MAC). MAC is a type of access control model that assigns permissions to users and objects based on their security labels, which indicate their level of sensitivity or trustworthiness. MAC is enforced by the system or the network, rather than by the owner or the creator of the object, and it cannot be modified or overridden by the users. MAC can provide some benefits for security, such as enhancing the confidentiality and the integrity of the data, preventing unauthorized access or disclosure, and supporting the audit and compliance activities. MAC is commonly used in military or government environments, where the data is classified according to its level of sensitivity, such as top secret, secret, confidential, or unclassified. The users are granted security clearance based on their level of trustworthiness, such as their background, their role, or their need to know. The users can only access the objects that have the same or lower security classification than their security clearance, and the objects can only be accessed by the users that have the same or higher security clearance than their security classification. This is based on the concept of no read up and no write down, which requires that a user can only read data of lower or equal sensitivity level, and can only write data of higher or equal sensitivity level. In MAC, the security labels are the fixed security attributes: the system compares the object’s classification label with the user’s clearance label and grants or denies access accordingly, and neither the user nor the object owner can change those labels.
Which of the following is the MOST challenging issue in apprehending cyber criminals?
They often use sophisticated method to commit a crime.
It is often hard to collect and maintain integrity of digital evidence.
The crime is often committed from a different jurisdiction.
There is often no physical evidence involved.
The most challenging issue in apprehending cyber criminals is that the crime is often committed from a different jurisdiction. This means that the cyber criminals may operate from a different country or region than the victim or the target, and thus may be subject to different laws, regulations, and enforcement agencies. This can create difficulties and delays in identifying, locating, and prosecuting the cyber criminals, as well as in obtaining and preserving the digital evidence. The other issues, such as the sophistication of the methods, the integrity of the evidence, and the lack of physical evidence, are also challenges in apprehending cyber criminals, but they are not as significant as the jurisdiction issue. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Operations, page 475; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 544.
Which of the following is the BEST internationally recognized standard for evaluating security products and systems?
Payment Card Industry Data Security Standards (PCI-DSS)
Common Criteria (CC)
Health Insurance Portability and Accountability Act (HIPAA)
Sarbanes-Oxley (SOX)
The best internationally recognized standard for evaluating security products and systems is Common Criteria (CC), which is a framework or a methodology that defines and describes the criteria or the guidelines for the evaluation or the assessment of the security functionality and the security assurance of information technology (IT) products and systems, such as hardware, software, firmware, or network devices. Common Criteria (CC) can provide some benefits for security, such as enhancing the confidence and the trust in the security products and systems, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. Common Criteria (CC) involves various elements and roles, such as the Target of Evaluation (TOE), the Protection Profile (PP), the Security Target (ST), and the Evaluation Assurance Levels (EAL).
Payment Card Industry Data Security Standard (PCI-DSS), Health Insurance Portability and Accountability Act (HIPAA), and Sarbanes-Oxley (SOX) are not internationally recognized standards for evaluating security products and systems, although they may be related or relevant regulations or frameworks for security. Payment Card Industry Data Security Standard (PCI-DSS) is a regulation or a framework that defines and describes the security requirements or the objectives for the protection and the management of cardholder data or payment card information, such as the credit card number, the expiration date, or the card verification value, and that applies to the entities or the organizations that are involved or engaged in the processing, the storage, or the transmission of cardholder data, such as merchants, service providers, or acquirers. Health Insurance Portability and Accountability Act (HIPAA) is a regulation or a framework that defines and describes the security requirements or the objectives for the protection and the management of protected health information (PHI), such as medical records, diagnoses, or treatments, and that applies to the entities or the organizations that are involved or engaged in the provision, the payment, or the operation of health care services or health care plans, such as health care providers, health care clearinghouses, or health plans. Sarbanes-Oxley (SOX) is a regulation or a framework that defines and describes the security requirements or the objectives for the protection and the management of financial information or financial reports, such as the income statement, the balance sheet, or the cash flow statement, and that applies to the entities or the organizations that are publicly traded companies in the United States.
A control to protect from a Denial-of-Service (DoS) attack has been determined to stop 50% of attacks, and additionally reduces the impact of an attack by 50%. What is the residual risk?
25%
50%
75%
100%
The residual risk is 25% in this scenario. Residual risk is the portion of risk that remains after security measures have been applied to mitigate the risk. Residual risk can be calculated by subtracting the risk reduction from the total risk. In this scenario, the total risk is 100%, and the risk reduction is 75%. The risk reduction is 75% because the control stops 50% of attacks, and reduces the impact of an attack by 50%. Therefore, the residual risk is 100% - 75% = 25%. Alternatively, the residual risk can be calculated by multiplying the probability and the impact of the remaining risk. In this scenario, the probability of an attack is 50%, and the impact of an attack is 50%. Therefore, the residual risk is 50% x 50% = 25%. 50%, 75%, and 100% are not the correct answers to the question, as they do not reflect the correct calculation of the residual risk.
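The arithmetic can be checked with a short calculation; the sketch below multiplies the remaining attack probability by the remaining impact, matching both derivations given above.

```python
# Residual risk after a control that stops 50% of attacks and
# halves the impact of those that get through.
total_risk = 1.0
remaining_probability = 1.0 - 0.50   # control stops 50% of attacks
remaining_impact = 1.0 - 0.50        # control halves the impact

residual_risk = total_risk * remaining_probability * remaining_impact
risk_reduction = total_risk - residual_risk

print(f"Residual risk:  {residual_risk:.0%}")   # 25%
print(f"Risk reduction: {risk_reduction:.0%}")  # 75%
```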
A security practitioner is tasked with securing the organization’s Wireless Access Points (WAP). Which of these is the MOST effective way of restricting this environment to authorized users?
Enable Wi-Fi Protected Access 2 (WPA2) encryption on the wireless access point
Disable the broadcast of the Service Set Identifier (SSID) name
Change the name of the Service Set Identifier (SSID) to a random value not associated with the organization
Create Access Control Lists (ACL) based on Media Access Control (MAC) addresses
The most effective way of restricting the wireless environment to authorized users is to enable Wi-Fi Protected Access 2 (WPA2) encryption on the wireless access point. WPA2 is a security protocol that provides confidentiality, integrity, and authentication for wireless networks. WPA2 uses Advanced Encryption Standard (AES) to encrypt the data transmitted over the wireless network, and prevents unauthorized users from intercepting or modifying the traffic. WPA2 also uses a pre-shared key (PSK) or an Extensible Authentication Protocol (EAP) to authenticate the users who want to join the wireless network, and prevents unauthorized users from accessing the network resources. WPA2 is the current standard for wireless security and is widely supported by most wireless devices. The other options are not as effective as WPA2 encryption for restricting the wireless environment to authorized users. Disabling the broadcast of the SSID name is a technique that hides the name of the wireless network from being displayed on the list of available networks, but it does not prevent unauthorized users from discovering the name by using a wireless sniffer or a brute force tool. Changing the name of the SSID to a random value not associated with the organization is a technique that reduces the likelihood of being targeted by an attacker who is looking for a specific network, but it does not prevent unauthorized users from joining the network if they know the name and the password. Creating ACLs based on MAC addresses is a technique that allows or denies access to the wireless network based on the physical address of the wireless device, but it does not prevent unauthorized users from spoofing a valid MAC address or bypassing the ACL by using a wireless bridge or a repeater. References: Secure Wireless Access Points - Fortinet; Configure Wireless Security Settings on a WAP - Cisco; Best WAP of 2024 | TechRadar.
Which of the following is the BEST Identity-as-a-Service (IDaaS) solution for validating users?
Single Sign-On (SSO)
Security Assertion Markup Language (SAML)
Lightweight Directory Access Protocol (LDAP)
Open Authentication (OAuth)
The best Identity-as-a-Service (IDaaS) solution for validating users is Security Assertion Markup Language (SAML). IDaaS is a cloud-based service that provides identity and access management functions, such as authentication, authorization, and provisioning, to the customers. SAML is a standard protocol that enables the exchange of authentication and authorization information between different parties, such as the identity provider, the service provider, and the user. SAML can help to validate users in an IDaaS solution, as it can allow the users to access multiple cloud services with a single sign-on, and provide the service providers with the necessary identity and attribute assertions about the users. Single Sign-On (SSO), Lightweight Directory Access Protocol (LDAP), and Open Authentication (OAuth) are not IDaaS solutions themselves, but technologies or protocols that an IDaaS solution can use or support. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 654; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 437.
After following the processes defined within the change management plan, a super user has upgraded a device within an information system.
What step would be taken to ensure that the upgrade did NOT affect the network security posture?
Conduct an Assessment and Authorization (A&A)
Conduct a security impact analysis
Review the results of the most recent vulnerability scan
Conduct a gap analysis with the baseline configuration
A security impact analysis is a process of assessing the potential effects of a change on the security posture of a system. It helps to identify and mitigate any security risks that may arise from the change, such as new vulnerabilities, configuration errors, or compliance issues. A security impact analysis should be conducted after following the change management plan and before implementing the change in the production environment. Conducting an A&A, reviewing the results of a vulnerability scan, or conducting a gap analysis with the baseline configuration are also possible steps to ensure the security of a system, but they are not specific to the change management process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 961; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 8: Security Operations, page 1013.
At a MINIMUM, audits of permissions to individual or group accounts should be scheduled
annually
to correspond with staff promotions
to correspond with terminations
continually
The minimum frequency for audits of permissions to individual or group accounts is continually. Audits of permissions are the processes of reviewing and verifying the user accounts and access rights on a system or a network, and ensuring that they are appropriate, necessary, and compliant with the policies and standards. Audits of permissions can provide some benefits for security, such as enhancing the accuracy and the reliability of the user accounts and access rights, identifying and removing any excessive, obsolete, or unauthorized access rights, and supporting the audit and the compliance activities. Audits of permissions should be performed continually, meaning on a regular and consistent basis, without interruption or delay. Continual audits of permissions help to maintain the security and the integrity of the system or the network by detecting and addressing any changes or issues that affect the user accounts and access rights, such as role changes, transfers, promotions, or terminations. Continual audits also keep the audit process effective and feasible by reducing the workload and the complexity of each audit task and by providing timely and relevant feedback and results. Annually, to correspond with staff promotions, and to correspond with terminations are not the minimum frequencies for audits of permissions to individual or group accounts, although they may be related or possible frequencies. Annual audits are performed once a year, which may not be sufficient to maintain the security and the integrity of the system or the network, as the user accounts and access rights may change or become outdated more frequently than that, due to factors such as role changes, transfers, promotions, or terminations. Annual audits may also increase the workload and the complexity of the audit process, as they involve a large number of user accounts and access rights to review and verify at once, and they may not provide timely and relevant feedback and results. Audits performed to correspond with staff promotions take place whenever a staff member is promoted to a higher or a different position within the organization, which may affect their user accounts and access rights. Such audits can help to ensure that the user accounts and access rights are aligned with the current roles or functions of the staff members, and that they follow the principle of least privilege. However, they may not be sufficient to maintain the security and the integrity of the system or the network, as the user accounts and access rights may also change due to other factors, such as role changes, transfers, or terminations, and they are not performed on a regular and consistent basis. Audits performed to correspond with terminations take place whenever a staff member leaves the organization. Such audits can help to ensure that the departing staff member's user accounts and access rights are revoked or removed from the system or the network, preventing any unauthorized or improper access or use.
However, audits performed only at termination may not be sufficient to maintain the security and the integrity of the system or the network, as the user accounts and access rights may also change due to other factors, such as role changes, transfers, or promotions, and they are not performed on a regular and consistent basis.
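As a rough illustration of how a continual audit of permissions can be automated, the sketch below compares each account's granted rights against a role baseline and flags excess rights; the roles, permissions, and account data are all hypothetical.

```python
# Hypothetical role baseline: the permissions each role should have.
ROLE_BASELINE = {
    "developer": {"repo:read", "repo:write"},
    "analyst":   {"reports:read"},
}

# Hypothetical current state pulled from an identity store.
ACCOUNTS = [
    {"user": "alice", "role": "developer",
     "permissions": {"repo:read", "repo:write"}},
    {"user": "bob", "role": "analyst",
     "permissions": {"reports:read", "repo:write"}},  # excess right
]

def audit_permissions(accounts, baseline):
    """Yield (user, excess_rights) for accounts exceeding their baseline."""
    for account in accounts:
        allowed = baseline.get(account["role"], set())
        excess = account["permissions"] - allowed
        if excess:
            yield account["user"], sorted(excess)

for user, excess in audit_permissions(ACCOUNTS, ROLE_BASELINE):
    print(f"{user}: excess rights {excess}")  # bob: ['repo:write']
```

Running such a comparison on a schedule (or on every change event) is what turns periodic review into the continual auditing the answer describes.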
Which of the following is a characteristic of an internal audit?
An internal audit is typically shorter in duration than an external audit.
The internal audit schedule is published to the organization well in advance.
The internal auditor reports to the Information Technology (IT) department.
Management is responsible for reading and acting upon the internal audit results.
A characteristic of an internal audit is that management is responsible for reading and acting upon the internal audit results. An internal audit is an independent and objective evaluation or assessment of the internal controls, processes, or activities of an organization, performed by a group of auditors or professionals who are part of the organization, such as the internal audit department or the audit committee. An internal audit can provide some benefits for security, such as enhancing the accuracy and the reliability of the operations, preventing or detecting fraud or errors, and supporting the audit and the compliance activities. An internal audit involves steps such as planning, fieldwork, reporting, and follow-up, carried out by roles such as the internal auditor, the audit team, and the audit committee.
Management is responsible for reading and acting upon the internal audit results, as they are the primary users or recipients of the internal audit report, and they have the authority and the accountability to implement or execute the recommendations or the improvements suggested by the internal audit report, as well as to report or disclose the internal audit results to the external parties, such as the regulators, the shareholders, or the customers. An internal audit is typically shorter in duration than an external audit, the internal audit schedule is published to the organization well in advance, and the internal auditor reports to the Information Technology (IT) department are not characteristics of an internal audit, although they may be related or possible aspects of one. An internal audit may well be shorter in duration than an external audit, as it is performed by auditors who are part of the organization and who have more familiarity with, and access to, the internal controls, processes, or activities than external auditors do; however, duration is not a defining or distinguishing feature of an internal audit, and it varies with the objectives, scope, criteria, or methodology of the audit. Publishing the internal audit schedule to the organization well in advance is a good practice that can help to ensure the transparency and the accountability of the internal audit, and to facilitate the coordination and the cooperation of the internal audit stakeholders, such as the management, the audit committee, the internal auditor, or the audit team, but it is likewise not a defining characteristic. Finally, the internal auditor should not report to the IT department, as doing so would compromise the independence and the objectivity of the audit; the internal auditor typically reports to the audit committee or the board.
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. This process works as follows: the sender computes a hash of the message and encrypts the hash with the sender’s private key, producing a digital signature; the receiver decrypts the signature with the sender’s public key and compares the result with a hash of the received message, and a match proves that the message came from the holder of the private key.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
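The sign-with-private-key / verify-with-public-key flow can be demonstrated with the third-party cryptography library; the sketch below is one way to instantiate the general technique (in practice the signature is computed over a hash of the message, which the library handles internally).

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Sender generates a key pair; the public key is distributed to receivers.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"transfer approved"

# Sender signs ("encrypts" a hash) with the private key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# Receiver verifies with the sender's public key; a forged or altered
# message raises InvalidSignature. Identification and non-repudiation
# both rest on the sender's exclusive control of the private key.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("Signature valid: sender identified")
except InvalidSignature:
    print("Signature invalid")
```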
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
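The qualitative ratings mentioned above follow fixed score ranges (per the CVSS v3.x specification); the short sketch below maps a numerical score to its rating.

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x score (0.0-10.0) to its qualitative severity."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

for score in (0.0, 3.1, 5.4, 8.8, 9.8):
    print(f"{score:>4} -> {cvss_rating(score)}")
```

Vulnerability scanners typically report the numerical base score; a mapping like this is how it becomes the none/low/medium/high/critical rating used for prioritization.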
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, or macros. Mobile code can also pose various security risks, such as malicious code, unauthorized access, or data leakage. Mobile code security models are the techniques that are used to protect systems and users from the threats of mobile code. Code signing is a mobile code security model that relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider computes a hash of the mobile code, signs the hash with its private key, and attaches the signature and its digital certificate to the code; the code consumer verifies the certificate and the signature, and then decides whether to trust the provider and execute the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase typically involves registering the end entity, generating the key pair, and creating the certificate request that is submitted to the certification authority (CA) in the certification phase.
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. In outline, SSL works as follows: the server presents a digital certificate containing its public key; the client validates the certificate against a trusted Certificate Authority (CA); the client then generates a pre-master secret and encrypts it with the server's public key, so that only the holder of the matching private key can recover it; and both parties derive from it the shared session keys that encrypt the application data.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
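As a minimal illustration, this handshake can be exercised with Python's standard ssl module (modern implementations negotiate TLS, the successor to SSL); the host name below is only a placeholder:

    import socket
    import ssl

    host = "example.com"   # placeholder host; any TLS-enabled server works

    # A default context validates the server certificate against the
    # system's trusted CA store.
    context = ssl.create_default_context()

    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            # At this point the handshake is complete: the server proved
            # possession of the private key matching the public key in its
            # certificate, and both sides hold the derived session keys.
            print(tls_sock.version())                  # e.g. 'TLSv1.3'
            print(tls_sock.getpeercert()["subject"])   # identity in the certificate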
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. Diffie-Hellman algorithm is a method for generating a shared secret key between two parties, but it does not use private and public encryption keys, but rather public and private parameters. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same key for encryption and decryption, but it does not use private and public encryption keys, but rather a single secret key. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input, but it does not use private and public encryption keys, but rather a one-way mathematical function.
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, so even after decryption the hash cannot be turned back into the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms work poorly on the high-entropy output of a good cipher, so the ciphertext barely shrinks and the extra processing adds no security.
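A short sketch of the compress-then-encrypt order, using Python's standard zlib module together with the third-party cryptography package (an assumption; any symmetric cipher would illustrate the same point):

    import zlib
    from cryptography.fernet import Fernet   # third-party 'cryptography' package

    plaintext = b"CONFIDENTIAL REPORT " * 100   # highly redundant sample data

    # Compressing first strips the redundancy, so the bytes handed to the
    # cipher have higher entropy and fewer exploitable patterns.
    compressed = zlib.compress(plaintext)

    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(compressed)
    print(len(plaintext), len(compressed), len(ciphertext))

    # Receiving side reverses the order: decrypt first, then decompress.
    assert zlib.decompress(Fernet(key).decrypt(ciphertext)) == plaintext

One design caveat: in interactive protocols, compressing attacker-influenced data before encryption has enabled length-leak attacks such as CRIME, so this technique suits stored or batch data better than per-request web traffic.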
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with the current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unpatchable vulnerabilities, weak or outdated encryption and authentication mechanisms, and dependencies on components that no longer receive vendor support.
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications, because it can provide several benefits, such as ongoing vendor support and security patches, compatibility with current technologies and standards, and stronger built-in security controls.
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, but not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible to do for the legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues in legacy web applications, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible to do for the legacy web applications that are incompatible or obsolete. Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, which is a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on the predefined rules or policies, which may not be effective or efficient to do for the legacy web applications that have weak or outdated encryption or authentication mechanisms.
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses, such as the MIT, BSD, or Apache licenses, which allow the software to be used, modified, and distributed with few restrictions, and copyleft licenses, such as the GNU General Public License (GPL), which require that any distributed modifications or derivative works be released under the same or a compatible license.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk with using open source software in a commercial software construction, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk with using open source software in a commercial software construction, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software is a secondary risk with using open source software in a commercial software construction, as it may affect the reliability, security, or performance of the open source software, but it can be mitigated or avoided by choosing the open source software that has adequate or alternative support options.
Which of the following is the BEST way to reduce the impact of an externally sourced flood attack?
Have the service provider block the source address.
Have the source service provider block the address.
Block the source address at the firewall.
Block all inbound traffic until the flood ends.
The best way to reduce the impact of an externally sourced flood attack is to have the service provider block the source address. A flood attack is a type of denial-of-service attack that aims to overwhelm the target system or network with a large amount of traffic, such as SYN packets, ICMP packets, or UDP packets. An externally sourced flood attack is a flood attack that originates from outside the target’s network, such as from the internet. Having the service provider block the source address can help to reduce the impact of an externally sourced flood attack, as it can prevent the malicious traffic from reaching the target’s network, and thus conserve the network bandwidth and resources. Having the source service provider block the address, blocking the source address at the firewall, or blocking all inbound traffic until the flood ends are not the best ways to reduce the impact of an externally sourced flood attack, as they may not be feasible, effective, or efficient, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 745; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 525.
Which of the following is considered a secure coding practice?
Use concurrent access for shared variables and resources
Use checksums to verify the integrity of libraries
Use new code for common tasks
Use dynamic execution functions to pass user supplied data
A secure coding practice is a technique or guideline that aims to prevent or mitigate common software vulnerabilities and ensure the quality, reliability, and security of software applications. One example of a secure coding practice is to use checksums to verify the integrity of libraries. A checksum is a value that is derived from applying a mathematical function or algorithm to a data set, such as a file or a message. A checksum can be used to detect any changes or errors in the data, such as corruption, modification, or tampering. Libraries are collections of precompiled code or functions that can be reused by software applications. Libraries can be static or dynamic, depending on whether they are linked to the application at compile time or run time. Libraries can be vulnerable to attacks such as code injection, code substitution, or code reuse, where an attacker can alter or replace the library code with malicious code. By using checksums to verify the integrity of libraries, a software developer can ensure that the libraries are authentic and have not been compromised or corrupted. Checksums can also help to identify and resolve any errors or inconsistencies in the libraries. Other examples of secure coding practices are to use strong data types, input validation, output encoding, error handling, encryption, and code review.
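As a minimal sketch, verifying a library against a published checksum might look like the following in Python; the file name and expected digest here are hypothetical:

    import hashlib

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 checksum of a file in streaming fashion."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical library file and known-good checksum published by its vendor.
    EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
    if sha256_of("vendor_lib.so") != EXPECTED:
        raise RuntimeError("library checksum mismatch - possible tampering")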
Digital certificates used in Transport Layer Security (TLS) support which of the following?
Information input validation
Non-repudiation controls and data encryption
Multi-Factor Authentication (MFA)
Server identity and data confidentiality
Digital certificates are electronic documents that contain the public key of an entity and are signed by a trusted third party, called a Certificate Authority (CA). Digital certificates are used in Transport Layer Security (TLS), a protocol that provides secure communication over the Internet, by enabling two functions: verification of the server's identity, because the CA-signed certificate binds the server's public key to its name and the server must prove possession of the corresponding private key; and data confidentiality, because the key exchange protected by the certificate's public key establishes the session keys that encrypt the data in transit.
What is the process of removing sensitive data from a system or storage device with the intent that the data cannot be reconstructed by any known technique?
Purging
Encryption
Destruction
Clearing
Purging is the process of removing sensitive data from a system or storage device with the intent that the data cannot be reconstructed by any known technique. Purging is also known as sanitization, erasure, or wiping, and it is a security measure to prevent unauthorized access, disclosure, or misuse of the data. Purging can be performed by using software tools or physical methods that overwrite, degauss, or destroy the data and the storage media. Purging is required when the system or storage device is decommissioned, disposed, transferred, or reused, and the data is no longer needed or has a high level of sensitivity or classification. Encryption, destruction, and clearing are not the same as purging, although they may be related or complementary processes. Encryption is the process of transforming data into an unreadable form by using a secret key or algorithm. Encryption can protect the data from unauthorized access or disclosure, but it does not remove the data from the system or storage device. The encrypted data can still be recovered if the key or algorithm is compromised or broken. Destruction is the process of physically damaging or disintegrating the system or storage device to the point that it is unusable and irreparable. Destruction can prevent the data from being reconstructed, but it may not be feasible, cost-effective, or environmentally friendly. Clearing is the process of removing data from a system or storage device by using logical techniques, such as overwriting or deleting. Clearing can protect the data from unauthorized access by normal means, but it does not prevent the data from being reconstructed by using advanced techniques, such as forensic analysis or data recovery tools.
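The distinction matters in practice. A minimal Python sketch of clearing by overwriting appears below; note that on wear-leveled media such as SSDs, a software overwrite alone does not reach purging-level assurance, because the device may silently retain old copies of the data:

    import os

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        """Overwrite a file with random bytes, then delete it (clearing,
        not purging: wear-leveled media such as SSDs may retain copies)."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())   # push the overwrite to the device
        os.remove(path)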
What is the foundation of cryptographic functions?
Encryption
Cipher
Hash
Entropy
The foundation of cryptographic functions is entropy. Entropy is a measure of the randomness or unpredictability of a system or a process. Entropy is essential for cryptographic functions, such as encryption, decryption, hashing, or key generation, as it provides the security and the strength of the cryptographic algorithms and keys. Entropy can be derived from various sources, such as physical phenomena, user input, or software applications. Entropy can also be quantified in terms of bits, where higher entropy means higher randomness and higher security. Encryption, cipher, and hash are not the foundation of cryptographic functions, although they are related or important concepts or techniques. Encryption is the process of transforming plaintext or cleartext into ciphertext or cryptogram, using a cryptographic algorithm and a key, to protect the confidentiality and the integrity of the data. Encryption can be symmetric or asymmetric, depending on whether the same or different keys are used for encryption and decryption. Cipher is another term for a cryptographic algorithm, which is a mathematical function that performs encryption or decryption. Cipher can be classified into various types, such as substitution, transposition, stream, or block, depending on how they operate on the data. Hash is the process of generating a fixed-length and unique output, called a hash or a digest, from a variable-length and arbitrary input, using a one-way function, to verify the integrity and the authenticity of the data. Hash can be used for various purposes, such as digital signatures, message authentication codes, or password storage.
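In practice this means cryptographic keys and tokens must be drawn from a strong entropy source. A minimal Python sketch:

    import secrets

    # Keys and tokens must come from a cryptographically strong entropy
    # source; the 'secrets' module draws on the operating system's CSPRNG.
    aes_key = secrets.token_bytes(32)       # 256 bits of entropy for a key
    session_id = secrets.token_urlsafe(16)  # unpredictable session token
    print(aes_key.hex(), session_id)

    # By contrast, the 'random' module is a deterministic PRNG seeded with
    # little entropy; it is predictable and must never be used for keys.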
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as bypassing IP address-based authentication or filtering, flooding a target with traffic whose true origin cannot be traced, and hijacking or injecting data into established TCP sessions.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
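For illustration only, a packet with a forged source address can be crafted with the third-party scapy package (raw-socket privileges are required, and the addresses below are hypothetical lab values):

    # Requires the third-party 'scapy' package and raw-socket privileges;
    # for lab illustration only.
    from scapy.all import IP, TCP, send

    # Forge the *source* address so the target believes the packet came
    # from a trusted host (10.0.0.5 is a hypothetical trusted peer).
    spoofed = IP(src="10.0.0.5", dst="10.0.0.99") / TCP(dport=80, flags="S")
    send(spoofed)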
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on the MAC addresses. Logical network segmentation can provide several benefits, such as reducing broadcast traffic and congestion, containing the spread of attacks or malware, and enforcing access control between segments of different sensitivity.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: the initiating node sends a SYN segment carrying an initial sequence number; the responding node replies with a SYN-ACK segment that acknowledges the SYN and carries its own sequence number; and the initiator completes the handshake with an ACK segment, after which data transfer can begin.
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
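A minimal Python sketch of the difference: with TCP the operating system performs the three-way handshake inside connect(), while UDP simply emits a datagram; example.com is a placeholder host:

    import socket

    # TCP: connect() triggers the kernel to perform the SYN, SYN-ACK, ACK
    # three-way handshake before any application data is exchanged.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))   # handshake happens here
    tcp.close()

    # UDP: no handshake; sendto() just emits a datagram with no
    # guarantee of delivery or ordering.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"ping", ("example.com", 53))
    udp.close()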
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior, and to identify any outliers or indicators of compromise. NBA tools can provide several benefits, such as:
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
WEP uses a small range Initialization Vector (IV) is the factor that contributes to the weakness of Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities, such as the small 24-bit IV space, which guarantees IV reuse within hours on a busy network; the transmission of the IV in cleartext alongside each packet; and the weak way RC4 is keyed from the IV and the shared secret, which allows statistical attacks (such as the FMS attack) to recover the key from captured traffic.
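The severity of the small IV space can be shown with a birthday-bound calculation; the sketch below estimates the probability of at least one IV reuse as traffic accumulates:

    import math

    # With a 24-bit IV there are only 2**24 (~16.7 million) possible values.
    iv_space = 2 ** 24

    def collision_probability(packets: int) -> float:
        """Probability that at least two packets share an IV (birthday problem)."""
        return 1.0 - math.exp(-packets * (packets - 1) / (2 * iv_space))

    for n in (1_000, 5_000, 10_000, 40_000):
        print(f"{n:>6} packets -> P(IV reuse) = {collision_probability(n):.2%}")

Roughly 5,000 packets already give better-than-even odds of a repeated IV, and IV reuse under the same key exposes keystream relationships to an eavesdropper.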
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from the users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as injection attacks (for example, SQL injection, cross-site scripting, or command injection), buffer overflows caused by oversized or malformed input, and information disclosure through verbose error messages or stack traces.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as inspecting and filtering the content of web requests and responses, enforcing input validation and protocol compliance rules, and blocking traffic that matches known attack signatures or patterns.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploit the vulnerability. For example, a new rule can be added to the application layer firewall to block requests that contain SQL injection or script injection patterns, to restrict the length and character set of the affected input fields, or to suppress the detailed error messages that the vulnerable system returns to clients.
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
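A minimal sketch of the kind of check such a rule encodes; the patterns and length limit below are hypothetical examples, not a complete defense:

    import re

    # Hypothetical patterns a firewall rule might block for an
    # input-validation vulnerability: classic SQL- and script-injection probes.
    BLOCK_PATTERNS = [
        re.compile(r"['\"]\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # ' OR 1=1
        re.compile(r"<\s*script", re.IGNORECASE),                        # <script>
    ]
    MAX_FIELD_LENGTH = 256

    def allow_request(field_value: str) -> bool:
        """Return False if the input matches a blocked pattern or is too long."""
        if len(field_value) > MAX_FIELD_LENGTH:
            return False
        return not any(p.search(field_value) for p in BLOCK_PATTERNS)

    assert allow_request("alice@example.com")
    assert not allow_request("' OR 1=1 --")
    assert not allow_request("<script>alert(1)</script>")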
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
The development team has been tasked with collecting data from biometric devices. The application will support a variety of collection data streams. During the testing phase, the team utilizes data from an old production database in a secure testing environment. What principle has the team taken into consideration?
Biometric data cannot be changed.
Separate biometric data streams require increased security.
The biometric devices are unknown.
Biometric data must be protected from disclosure.
The principle that the development team has taken into consideration when using data from an old production database in a secure testing environment is that biometric data must be protected from disclosure. Biometric data is a type of data that is derived from the physical or behavioral characteristics of a person, such as fingerprints, iris patterns, or voice recognition. Biometric data is used for identification or authentication purposes, and it is considered as sensitive or personal data that should be protected from unauthorized or malicious access, modification, or disclosure. The development team has taken this principle into consideration when they used data from an old production database in a secure testing environment, as they ensured that the biometric data was not exposed or compromised during the testing phase of the application. Biometric data cannot be changed, separate biometric data streams require increased security, or the biometric devices are unknown are not the principles that the development team has taken into consideration when using data from an old production database in a secure testing environment. Biometric data can be changed, as it may vary due to aging, injury, or disease, and it may need to be updated or replaced. Separate biometric data streams do not necessarily require increased security, as it depends on the type, quality, and purpose of the biometric data. The biometric devices are not unknown, as the development team should be aware of the specifications, capabilities, and limitations of the biometric devices that they are using for the application. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 5: Identity and Access Management, page 407.
All hosts on the network are sending logs via syslog-ng to the log collector. The log collector is behind its own firewall, The security professional wants to make sure not to put extra load on the firewall due to the amount of traffic that is passing through it. Which of the following types of filtering would MOST likely be used?
Uniform Resource Locator (URL) Filtering
Web Traffic Filtering
Dynamic Packet Filtering
Static Packet Filtering
Static packet filtering is a type of filtering that examines the header of each packet and allows or denies it based on a set of predefined rules or criteria, such as the source and destination IP addresses, ports, protocols, or flags. Static packet filtering is simple, fast, and stateless, meaning that it does not keep track of the state or the context of the packets or the connections. Static packet filtering can be used to reduce the load on the firewall by filtering out unwanted or malicious traffic before it reaches the log collector. Uniform Resource Locator (URL) filtering is a type of filtering that blocks or allows access to specific websites or web pages based on their URLs or keywords. Web traffic filtering is a type of filtering that analyzes the content or the behavior of the web traffic and blocks or allows it based on a set of predefined rules or criteria, such as the type, the size, the origin, or the destination of the web traffic. Dynamic packet filtering is a type of filtering that examines each packet in the context of the connection it belongs to, and allows or denies it based on the predefined rules as well as the state of that connection. Dynamic packet filtering is more complex, slower, and stateful, because maintaining a state table for every connection consumes additional processing and memory on the firewall. References: CISSP CBK Reference, 5th Edition, Chapter 4, page 215; CISSP All-in-One Exam Guide, 8th Edition, Chapter 4, page 177
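A minimal sketch of static packet filtering for this scenario, with a hypothetical rule set that admits syslog-ng traffic (port 514) to a collector at 10.0.0.10 and denies everything else; each decision uses header fields only, with no connection state:

    RULES = [
        # (protocol, destination IP, destination port, action); port 0 = any
        ("udp", "10.0.0.10", 514, "allow"),
        ("tcp", "10.0.0.10", 514, "allow"),
        ("any", "any",       0,   "deny"),   # explicit default deny
    ]

    def filter_packet(proto: str, dst_ip: str, dst_port: int) -> str:
        """Return the action of the first rule matching the packet header."""
        for rule_proto, rule_ip, rule_port, action in RULES:
            if rule_proto in (proto, "any") and rule_ip in (dst_ip, "any") \
                    and rule_port in (dst_port, 0):
                return action
        return "deny"

    print(filter_packet("udp", "10.0.0.10", 514))   # allow (syslog to collector)
    print(filter_packet("tcp", "10.0.0.10", 22))    # deny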
What BEST describes the confidentiality, integrity, availability triad?
A tool used to assist in understanding how to protect the organization's data
The three-step approach to determine the risk level of an organization
The implementation of security systems to protect the organization's data
A vulnerability assessment to see how well the organization's data is protected
The confidentiality, integrity, availability triad, or CIA triad, is a tool used to assist in understanding how to protect the organization’s data. The CIA triad is a model that defines the three fundamental and interrelated security objectives of information security: confidentiality, which ensures that data is disclosed only to authorized parties; integrity, which ensures that data is accurate and has not been improperly modified; and availability, which ensures that data and systems are accessible to authorized users when needed.
Which of the following access control models is MOST restrictive?
Discretionary Access Control (DAC)
Mandatory Access Control (MAC)
Role Based Access Control (RBAC)
Rule based access control
Access control models are frameworks that define how the access rights and permissions are granted and enforced for the subjects (such as users, processes, or devices) and the objects (such as files, folders, or databases) in a system or a network. The most restrictive access control model is Mandatory Access Control (MAC), which is a model that assigns a security label (such as a classification or a clearance level) to each subject and object, and allows access only if the subject’s security label matches or dominates the object’s security label. MAC is enforced by the system or the network, and cannot be modified by the subjects or the owners of the objects. MAC provides strong security and confidentiality for the objects, as it prevents unauthorized or unintended access by the subjects. Discretionary Access Control (DAC) is not the most restrictive access control model, as it is a model that allows the subjects or the owners of the objects to grant or revoke access rights and permissions to the objects, based on their discretion. DAC is enforced by the access control lists (ACLs) or the capabilities that are associated with the objects. DAC provides flexibility and convenience for the subjects and the owners of the objects, but it also increases the risk of unauthorized or unintended access by the subjects. Role Based Access Control (RBAC) is not the most restrictive access control model, as it is a model that assigns access rights and permissions to the subjects based on their roles or functions in the organization, rather than their identities or security labels. RBAC is enforced by the policies and rules that are defined by the organization. RBAC provides scalability and manageability for the subjects and the objects, as it simplifies the administration and the maintenance of the access rights and permissions. Rule based access control is not the most restrictive access control model, as it is a model that grants or denies access to the subjects based on a set of rules or conditions that are triggered by certain events or actions, such as the time, the location, or the frequency of the access. Rule based access control is enforced by the logic or the algorithm that is implemented by the system or the network. Rule based access control provides adaptability and dynamism for the subjects and the objects, as it allows the access rights and permissions to change according to the context or the situation. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 5: Identity and Access Management (IAM), page 211. CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10: Identity and Access Management, page 591.
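A minimal sketch of why MAC is the most restrictive model: access is decided by comparing fixed security labels, and neither the subject nor the object owner can override the result. The labels below are illustrative:

    # Ordered security labels; higher value dominates lower.
    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

    def mac_read_allowed(subject_clearance: str, object_label: str) -> bool:
        """Simple-security rule: read allowed only if the subject's
        clearance dominates (is at least) the object's classification."""
        return LEVELS[subject_clearance] >= LEVELS[object_label]

    print(mac_read_allowed("secret", "confidential"))   # True
    print(mac_read_allowed("confidential", "secret"))   # False - denied by
                                                        # policy, not by the owner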
An attacker is able to remain indefinitely logged into a web service. Which of the following is the attacker exploiting to remain on the web service?
Alert management
Password management
Session management
Identity management (IM)
Session management is the process of controlling and maintaining the state and information of a user’s interaction with a web service. It involves creating, maintaining, and terminating sessions, as well as ensuring their security and integrity. An attacker who is able to remain indefinitely logged into a web service is exploiting a weakness in session management, such as the lack of session expiration, session timeout, or session revocation. Alert management, password management, and identity management (IM) are all related to security, but they do not directly address the issue of session management.
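A minimal sketch of the missing controls, assuming a simple in-memory token store: an absolute session lifetime and an idle timeout together prevent a session from remaining valid indefinitely (the timeout values are illustrative):

    import time
    import secrets

    SESSION_TIMEOUT = 15 * 60     # absolute lifetime, in seconds
    IDLE_TIMEOUT = 5 * 60         # inactivity limit
    sessions = {}                 # token -> (created, last_seen)

    def create_session() -> str:
        token = secrets.token_urlsafe(32)
        now = time.time()
        sessions[token] = (now, now)
        return token

    def session_valid(token: str) -> bool:
        """Reject missing, expired, or idle sessions - the controls whose
        absence lets an attacker stay logged in indefinitely."""
        entry = sessions.get(token)
        if entry is None:
            return False
        created, last_seen = entry
        now = time.time()
        if now - created > SESSION_TIMEOUT or now - last_seen > IDLE_TIMEOUT:
            del sessions[token]   # revoke on expiry
            return False
        sessions[token] = (created, now)
        return True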
Which of the following will an organization's network vulnerability testing process BEST enhance?
Firewall log review processes
Asset management procedures
Server hardening processes
Code review procedures
Network vulnerability testing is a process of identifying and assessing the security risks of a network. It can help an organization to enhance its server hardening processes, which are the measures taken to reduce the attack surface and improve the security posture of a server. Server hardening can include applying patches, disabling unnecessary services, configuring firewall rules, enforcing strong passwords, and implementing encryption. Firewall log review, asset management, and code review are also important security processes, but they are not directly enhanced by network vulnerability testing. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, p. 409-410; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 6: Security Assessment and Testing, p. 813-814.
The security team has been tasked with performing an interface test against a frontend external facing application and needs to verify that all input fields protect against
invalid input. Which of the following BEST assists this process?
Application fuzzing
Instruction set simulation
Regression testing
Sanity testing
The technique that best assists this process is application fuzzing. Application fuzzing is a technique that generates and submits random, malformed, or unexpected data to an application and observes its behavior, response, or output in order to identify errors, bugs, or vulnerabilities. Application fuzzing can be used to verify that all input fields protect against invalid input by feeding each field various types and formats of data, such as strings, numbers, symbols, or commands, and watching for crashes, exceptions, or other anomalies. In this way it exercises and validates the input validation, sanitization, and filtering mechanisms that are supposed to prevent, mitigate, or handle invalid input. Instruction set simulation, regression testing, and sanity testing are not techniques for verifying that input fields protect against invalid input; they are, respectively, a method for emulating a processor's instruction set, a method for verifying that changes have not broken existing functionality, and a quick check that a build is stable enough for further testing. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 552; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.14, page 306.
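A minimal sketch of a fuzzing loop against a hypothetical input handler; real fuzzers generate far smarter inputs, but the structure is the same: generate, submit, observe:

    import random
    import string

    def naive_parse_age(value: str) -> int:
        """Toy input handler under test (hypothetical)."""
        return int(value)   # no validation - raises on bad input

    def random_input(max_len: int = 20) -> str:
        """Produce a random string of printable characters."""
        return "".join(random.choice(string.printable)
                       for _ in range(random.randint(0, max_len)))

    # Throw random, malformed input at the field and record anything
    # that escapes as an unhandled, unexpected exception.
    failures = []
    for _ in range(1_000):
        candidate = random_input()
        try:
            naive_parse_age(candidate)
        except ValueError:
            pass                      # rejected input: acceptable behavior
        except Exception as exc:      # unexpected crash: a finding
            failures.append((candidate, exc))

    print(f"{len(failures)} unexpected failures")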
Why do certificate Authorities (CA) add value to the security of electronic commerce transactions?
They maintain the certificate revocation list.
They maintain the private keys of transaction parties.
They verify the transaction parties' private keys.
They provide a secure communication channel to the transaction parties.
A certificate authority (CA) is a trusted third party that issues and manages digital certificates for electronic commerce transactions. A digital certificate is a data structure that binds a public key to an identity, such as a person, organization, or device. A certificate revocation list (CRL) is a list of certificates that have been revoked by the CA before their expiration date, due to reasons such as compromise, loss, or theft. A CA adds value to the security of electronic commerce transactions by maintaining the CRL and distributing it to the transaction parties, so that they can verify the validity and authenticity of the certificates and avoid using revoked ones. This ensures that the transaction parties are who they claim to be and that their communication is encrypted and protected from unauthorized access or modification. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5: Cryptography and Symmetric Key Algorithms, Section: Digital Certificates; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Cryptography, Section: Certificate Revocation Lists.
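As a sketch, a relying party can check a certificate's serial number against a downloaded CRL using the third-party cryptography package; the file name and serial number below are hypothetical:

    # Sketch of a CRL check; 'crl.pem' and the serial number are hypothetical.
    from cryptography import x509

    with open("crl.pem", "rb") as f:
        crl = x509.load_pem_x509_crl(f.read())

    serial_number = 0x1A2B3C   # serial of the certificate being checked
    revoked = crl.get_revoked_certificate_by_serial_number(serial_number)
    if revoked is not None:
        print("certificate revoked on", revoked.revocation_date)
    else:
        print("certificate not on this CRL")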
A small office is running WiFi 4 APs and wants to increase the throughput to associated devices without causing interference to neighboring offices. Which of the following is the MOST cost-efficient way for the office to increase network performance?
Add another AP.
Disable the 2.4GHz radios
Enable channel bonding.
Upgrade to WiFi 5.
The most cost-efficient way for the office to increase network performance is to upgrade to WiFi 5 (802.11ac), the successor to WiFi 4 (802.11n), which offers faster speeds, lower latency, and higher capacity. WiFi 5 operates in the 5GHz band (dual-band APs continue to serve 2.4GHz clients using WiFi 4) and supports features such as MU-MIMO, beamforming, and wider channel bonding, which can improve the throughput and efficiency of the wireless network. Upgrading to WiFi 5 may require replacing the existing APs and devices with compatible ones, but it may not be as expensive or complex as the other options. The other options are either ineffective or impractical for increasing network performance, as they may not address the root cause of the problem, may interfere with the neighboring offices, or may require additional hardware or configuration. References: CISSP - Certified Information Systems Security Professional, Domain 4. Communication and Network Security, 4.1 Implement secure design principles in network architectures, 4.1.3 Secure network components, 4.1.3.1 Wireless access points; CISSP Exam Outline, Domain 4. Communication and Network Security, 4.1 Implement secure design principles in network architectures, 4.1.3 Secure network components, 4.1.3.1 Wireless access points
Which of the following is a characteristic of the independent testing of a program?
Independent testing increases the likelihood that a test will expose the effect of a hidden feature.
Independent testing decreases the likelihood that a test will expose the effect of a hidden feature.
Independent testing teams help decrease the cost of creating test data and system design specification.
Independent testing teams help identify functional requirements and Service Level Agreements (SLA)
Independent testing is a type of testing that is performed by a third-party or external entity that is not involved in the development or operation of the program. Independent testing has several advantages, such as reducing bias, increasing objectivity, and improving quality. One of the characteristics of independent testing is that it increases the likelihood that a test will expose the effect of a hidden feature. A hidden feature is a functionality or behavior of the program that is not documented or specified, and may be intentional or unintentional. Independent testing can reveal the effect of a hidden feature by using different test cases, techniques, or perspectives than the ones used by the developers or operators of the program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, page 1169; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.17, page 308.
In Federated Identity Management (FIM), which of the following represents the concept of federation?
Collection of information logically grouped into a single entity
Collection, maintenance, and deactivation of user objects and attributes in one or more systems, directories or applications
Collection of information for common identities in a system
Collection of domains that have established trust among themselves
The concept of federation in Federated Identity Management (FIM) is the collection of domains that have established trust among themselves. A domain is a logical or administrative boundary that defines the scope and authority of an identity provider (IdP) or a service provider (SP). An IdP is an entity that creates, maintains, and verifies the identities and attributes of the users. An SP is an entity that provides services or resources to the users, and relies on the IdP for the authentication and authorization of the users. A federation is a group of domains that have agreed to share and accept the identities and attributes of the users across the domains, based on a common set of policies, standards, and protocols. A federation enables the users to access multiple services or resources from different domains, using a single or federated identity, without having to create or manage multiple accounts or credentials. A federation also enhances the security, privacy, and convenience of the users and the domains, by reducing the identity management overhead and complexity, and by enabling the users to control the disclosure and use of their identity information. References: CISSP CBK, Fifth Edition, Chapter 5, page 449; CISSP Practice Exam – FREE 20 Questions and Answers, Question 18.
Which is the BEST control to meet the Statement on Standards for Attestation Engagements 18 (SSAE-18) confidentiality category?
Data processing
Storage encryption
File hashing
Data retention policy
The best control to meet the Statement on Standards for Attestation Engagements 18 (SSAE-18) confidentiality category is storage encryption. SSAE-18 is a standard that defines the requirements and guidance for performing attestation engagements on service organizations, such as cloud providers, data centers, or payroll processors. SSAE-18 requires the service organizations to provide a report on the design and effectiveness of their controls over the security, availability, processing integrity, confidentiality, or privacy of the services they provide to their customers. The confidentiality category refers to the protection of the information that is designated as confidential by the service organization or its customers, and that is transmitted, stored, or processed by the service organization. Storage encryption is a control that encrypts the data at rest, such as in hard drives, databases, or backups, and that prevents unauthorized or malicious access, modification, or disclosure of the confidential information. Data processing, file hashing, or data retention policy are not the best controls to meet the SSAE-18 confidentiality category, as they are not directly related to the protection of the confidential information at rest. Data processing is a control that transforms or manipulates the data for a specific purpose, such as analysis, reporting, or validation. File hashing is a control that generates a unique and fixed-length value for a file, and that verifies the integrity or authenticity of the file. Data retention policy is a control that defines the rules and procedures for retaining, storing, or disposing of the data, and that complies with the legal, regulatory, or contractual obligations. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 71.
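To make the control concrete, here is a minimal sketch of encryption at rest using the Python cryptography library's Fernet recipe; the file name and data are illustrative, and in practice the key would come from a key management system rather than being generated in place.

```python
# Minimal sketch of storage (at-rest) encryption with the "cryptography"
# library's Fernet recipe (AES-128-CBC plus HMAC under the hood).
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in practice: retrieve from a KMS/HSM
cipher = Fernet(key)

confidential = b"customer report - confidential"
token = cipher.encrypt(confidential)  # ciphertext is safe to write to storage

with open("report.enc", "wb") as f:   # illustrative file name
    f.write(token)

# Reading it back requires the same key; without it the data is unreadable.
with open("report.enc", "rb") as f:
    assert cipher.decrypt(f.read()) == confidential
```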
An organization that has achieved a Capability Maturity Model Integration (CMMI) level of 4 has done which of the following?
Addressed continuous innovative process improvement
Addressed the causes of common process variance
Achieved optimized process performance
Achieved predictable process performance
An organization that has achieved a Capability Maturity Model Integration (CMMI) level of 4 has achieved predictable process performance. CMMI is a framework that provides a set of best practices and guidelines for improving the capability and maturity of the processes of an organization, such as software development, service delivery, or project management. CMMI consists of five levels, each of which represents a different stage or degree of process improvement, from initial to optimized. The five levels of CMMI are: Level 1 – Initial, Level 2 – Managed, Level 3 – Defined, Level 4 – Quantitatively Managed, and Level 5 – Optimizing.
An organization that has achieved a CMMI level of 4 has achieved predictable process performance: it has established quantitative objectives and metrics for its processes and uses statistical and analytical techniques to monitor and control process variation and performance, ensuring that the processes meet the expected or desired outcomes. It has not yet addressed continuous innovative process improvement, addressed the causes of common process variance, or achieved optimized process performance, as these are characteristics of CMMI level 5, the highest and most mature level.
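As a rough illustration of what "predictable process performance" means in practice, the following toy Python sketch applies statistical process control to an invented defect metric; the numbers and thresholds are made up for illustration.

```python
# Toy illustration of CMMI level 4 style quantitative management:
# monitor a process metric (defects per release) against control limits.
from statistics import mean, stdev

defects_per_release = [12, 9, 11, 14, 10, 13, 11, 12]  # invented data

mu = mean(defects_per_release)
sigma = stdev(defects_per_release)
upper, lower = mu + 3 * sigma, max(0.0, mu - 3 * sigma)

print(f"mean={mu:.1f}, control limits=({lower:.1f}, {upper:.1f})")

# A new release outside the limits signals special-cause variation
# that a level 4 organization would investigate.
new_release = 21
if not lower <= new_release <= upper:
    print("out of control: investigate special-cause variation")
```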
Which of the following authorization standards is built to handle Application Programming Interface (API) access for Federated Identity Management (FIM)?
Security Assertion Markup Language (SAML)
Open Authorization (OAuth)
Remote Authentication Dial-in User service (RADIUS)
Terminal Access Controller Access Control System Plus (TACACS+)
The authorization standard that is built to handle Application Programming Interface (API) access for Federated Identity Management (FIM) is Open Authorization (OAuth). OAuth is a standard protocol that enables the delegation of authorization to access resources or services from one party to another, without sharing the credentials. OAuth can be used for FIM, which is a mechanism that allows the users to use a single identity across multiple domains or systems, such as social media platforms, cloud services, or web applications. OAuth can handle API access for FIM, which means that the users can authorize applications to access their data or services from other providers, such as contacts, calendars, or photos, through the APIs. Security Assertion Markup Language (SAML), Remote Authentication Dial-in User Service (RADIUS), and Terminal Access Controller Access Control System Plus (TACACS+) are not authorization standards built to handle API access for FIM; they are standards or protocols that can be used or supported by FIM for authentication, authorization, or accounting purposes. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 656; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 439.
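A minimal sketch of the OAuth 2.0 client credentials flow is shown below; the token endpoint, API URL, client ID, secret, and scope are all placeholders, not a real provider's values.

```python
# Sketch of OAuth 2.0 client-credentials flow for API access.
# All URLs and credentials here are hypothetical placeholders.
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"   # hypothetical IdP
API_URL = "https://api.example.com/v1/contacts"      # hypothetical API

# 1. Exchange client credentials for a scoped access token.
resp = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "my-app",
    "client_secret": "s3cret",
    "scope": "contacts.read",
})
access_token = resp.json()["access_token"]

# 2. Call the API with the bearer token; no user password crosses the
#    wire, which is the point of delegated authorization.
contacts = requests.get(
    API_URL, headers={"Authorization": f"Bearer {access_token}"}
)
print(contacts.status_code)
```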
While classifying credit card data related to Payment Card Industry Data Security Standards (PCI-DSS), which of the following is a PRIMARY security requirement?
Processor agreements with card holders
Three-year retention of data
Encryption of data
Specific card disposal methodology
The primary security requirement for classifying credit card data related to Payment Card Industry Data Security Standards (PCI-DSS) is encryption of data. PCI-DSS is a set of standards and guidelines that define the security requirements and best practices for protecting the credit card data of the customers and the merchants. PCI-DSS applies to any organization that stores, processes, or transmits credit card data, such as banks, retailers, or service providers. Encryption of data is the primary security requirement for classifying credit card data related to PCI-DSS, as it can protect the confidentiality, integrity, and availability of the credit card data, and prevent unauthorized access, disclosure, modification, or loss of the credit card data. Encryption of data means using cryptographic algorithms and keys to transform the credit card data into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of data can be applied to the credit card data at rest, such as when it is stored in a database, file, or device, or to the credit card data in transit, such as when it is transmitted over a network, channel, or protocol. Processor agreements with card holders, three-year retention of data, and specific card disposal methodology are not primary security requirements for classifying credit card data related to PCI-DSS. These are some of the secondary or supplementary security requirements or practices that may be implemented to enhance the security of the credit card data, but they are not as essential or critical as encryption of data. Processor agreements with card holders are contracts or agreements that define the terms and conditions of the credit card processing services, such as the fees, charges, liabilities, or disputes, between the credit card processors and the card holders. Three-year retention of data is a policy or regulation that specifies the maximum period of time that the credit card data can be retained or stored by the organization, before it must be deleted or destroyed. Specific card disposal methodology is a procedure or technique that describes how to properly dispose or destroy the credit card data or the credit card itself, such as by shredding, wiping, or burning, to prevent any recovery or reuse of the credit card data. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8, Software Development Security, page 855. CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 791.
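As an illustration of the requirement, here is a hedged sketch of encrypting a primary account number (PAN) at rest with AES-256-GCM using the Python cryptography library; key handling is deliberately simplified, since PCI-DSS expects keys to be managed in an HSM or key vault.

```python
# Sketch of protecting a primary account number (PAN) at rest with
# AES-256-GCM (authenticated encryption). Key management is simplified.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice: from an HSM/vault
aead = AESGCM(key)

pan = b"4111111111111111"                   # well-known test card number
nonce = os.urandom(12)                      # must be unique per encryption
ciphertext = aead.encrypt(nonce, pan, b"cardholder-record-42")

# Store nonce + ciphertext; tampering with either makes decryption fail,
# so both confidentiality and integrity of the PAN are protected.
recovered = aead.decrypt(nonce, ciphertext, b"cardholder-record-42")
assert recovered == pan
```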
Which of the following is the MOST secure password technique?
Passphrase
One-time password
Cognitive password
Ciphertext
The most secure password technique is the one-time password. A one-time password is a password that is valid for only one login session or transaction, and that is randomly generated or derived from a secret key or algorithm. A one-time password can help to enhance the security of the authentication process, by preventing the password from being reused, guessed, stolen, or intercepted by unauthorized parties. A one-time password can also help to protect against various password-based attacks, such as replay, brute force, or phishing attacks. A one-time password can be implemented by using various methods or techniques, such as hardware tokens, software tokens, SMS messages, or biometric factors. Passphrase, cognitive password, or ciphertext are not the most secure password techniques, as they are either less complex, less random, or less dynamic than the one-time password. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 295; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 5: Identity and Access Management, Question 5.10, page 221.
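A one-time password can be sketched with nothing but the standard library; the following minimal TOTP (RFC 6238) example uses an illustrative shared secret.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library:
# a 6-digit code derived from a shared secret and the current 30s window.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # current time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The shared secret is illustrative; real secrets are provisioned per user.
print(totp("JBSWY3DPEHPK3PXP"))   # changes every 30 seconds
```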
In order to support the least privilege security principle when a resource is transferring within the organization from a production support system administration role to a developer role, what changes should be made to the resource’s access to the production operating system (OS) directory structure?
From Read Only privileges to No Access Privileges
From Author privileges to Administrator privileges
From Administrator privileges to No Access privileges
From No Access Privileges to Author privileges
To support the least privilege security principle when a resource transfers from a production support system administration role to a developer role, the resource’s access to the production operating system (OS) directory structure should change from Administrator privileges to No Access privileges. The least privilege principle states that users, devices, and processes should be granted only the minimum access required to perform their assigned tasks, which limits security risks such as unauthorized access, data leakage, and privilege escalation; it is typically enforced through mechanisms such as access control lists, role-based access control, and separation of duties. Administrator privileges are the most powerful level of access, allowing a user to create, modify, delete, and configure a resource without restriction, while No Access privileges deny any use of the resource at all. A developer has no need to touch the production OS directory structure, so the former Administrator rights should be removed entirely; retaining any production access would violate least privilege and expose production to problems such as data corruption, system malfunction, or configuration errors.
The other options do not support least privilege in this scenario. Moving from Read Only to No Access does not apply, because the resource held Administrator privileges, not Read Only (view-only) privileges. Moving from Author privileges (create and modify, but not delete or configure) to Administrator privileges, or from No Access to Author privileges, would grant the developer more production access than the new role requires. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 281; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 5: Identity and Access Management, Question 5.13, page 223.
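A toy sketch of the provisioning logic follows; the role names and grant structure are invented for illustration, and the point is that the new role's profile replaces, rather than augments, the old grants.

```python
# Toy sketch of least-privilege (re)provisioning on a role transfer:
# moving from production sysadmin to developer drops production access
# entirely rather than merely reducing it. Role names are illustrative.
ROLE_ACCESS = {
    "prod_sysadmin": {"prod_os_dirs": "administrator"},
    "developer":     {},   # no access to production OS directories at all
}

def reprovision(new_role: str) -> dict:
    """Grants come wholesale from the new role's profile; no carry-over."""
    return dict(ROLE_ACCESS[new_role])

grants = {"prod_os_dirs": "administrator"}   # before the transfer
grants = reprovision("developer")            # after the transfer
print(grants.get("prod_os_dirs", "no access"))   # -> "no access"
```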
A security professional can BEST mitigate the risk of using a Commercial Off-The-Shelf (COTS) solution by deploying the application with which of the following controls in place?
Whitelisting application
Network segmentation
Hardened configuration
Blacklisting application
The best control to mitigate the risk of using a Commercial Off-The-Shelf (COTS) solution is to deploy the application with a hardened configuration. A COTS solution is a type of software or hardware product that is ready-made, standardized, and available for purchase from a vendor or a third party, without requiring any customization or modification by the customer or the user. A COTS solution can pose a security risk, as it may contain security vulnerabilities or issues that can be exploited by the attackers, or that may not comply with the security policies and standards of the organization. A hardened configuration is a type of security control that involves applying the security best practices and guidelines to configure the COTS solution, such as disabling or removing the unnecessary features, functions, or services, updating or patching the software or firmware, or enforcing the security settings or parameters. A hardened configuration can mitigate the risk of using a COTS solution, by reducing the attack surface and exposure of the COTS solution, and by enhancing the security, performance, and reliability of the COTS solution. References: CISSP CBK, Fifth Edition, Chapter 3, page 239; CISSP Practice Exam – FREE 20 Questions and Answers, Question 15.
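A hedged sketch of a baseline check is shown below; the setting names and expected values are invented, since real baselines come from sources such as CIS Benchmarks or vendor hardening guides.

```python
# Toy sketch of checking a COTS deployment against a hardening baseline.
# Setting names and expected values are invented for illustration.
BASELINE = {
    "telnet_enabled": False,
    "default_admin_password_changed": True,
    "auto_update_enabled": True,
    "unused_sample_apps_removed": True,
}

def audit(actual: dict) -> list[str]:
    """Return the settings that deviate from the hardening baseline."""
    return [k for k, v in BASELINE.items() if actual.get(k) != v]

current = {"telnet_enabled": True, "default_admin_password_changed": True,
           "auto_update_enabled": True, "unused_sample_apps_removed": False}
print(audit(current))   # -> ['telnet_enabled', 'unused_sample_apps_removed']
```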
Which of the following security objectives for industrial control systems (ICS) can be adapted to securing any Internet of Things (IoT) system?
Prevent unauthorized modification of data.
Restore the system after an incident.
Detect security events and incidents.
Protect individual components from exploitation
One of the security objectives for industrial control systems (ICS) that can be adapted to securing any Internet of Things (IoT) system is to protect individual components from exploitation. ICS are systems that monitor and control physical processes or devices, such as power plants, water treatment facilities, or manufacturing plants. IoT are systems that connect physical objects or devices, such as sensors, cameras, or smart appliances, to the internet or a network. Both ICS and IoT systems consist of multiple components, such as hardware, software, firmware, or communication protocols, that interact with each other and the environment. Protecting individual components from exploitation is a security objective that applies to both ICS and IoT systems, as it aims to prevent unauthorized or malicious access, modification, or disruption of the components, and to ensure their integrity, availability, and functionality. Preventing unauthorized modification of data, restoring the system after an incident, or detecting security events and incidents are not security objectives that are specific to ICS or IoT systems, but rather general security objectives that apply to any system or organization. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 194.
What does the term “100-year floodplain” mean to emergency preparedness officials?
The area is expected to be safe from flooding for at least 100 years.
The odds of a flood at this level are 1 in 100 in any given year.
The odds are that the next significant flood will hit within the next 100 years.
The last flood of any kind to hit the area was more than 100 years ago.
The term “100-year floodplain” means that the odds of a flood at this level are 1 in 100 in any given year. A floodplain is an area of land adjacent to a river, lake, or ocean that is prone to flooding when the water level rises due to heavy rainfall, snowmelt, or storm surge. A 100-year floodplain is a floodplain that has a 1% chance in any given year of being flooded by a flood of that magnitude, that is, a flood with a return period of 100 years. It does not mean that the area is expected to be safe from flooding for at least 100 years, that the next significant flood will hit within the next 100 years, or that the last flood of any kind was more than 100 years ago; these are common misconceptions. The probability is constant and annual, regardless of past or future flooding events. Emergency preparedness officials, who plan, organize, coordinate, and execute emergency response and recovery activities in the event of a disaster such as a flood, need to understand the term and its implications, as it helps them assess the flood risk and vulnerability of the area and develop appropriate mitigation, preparedness, response, and recovery measures.
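The compounding effect is easy to compute; the short Python sketch below shows the probability of at least one 100-year flood over various planning horizons.

```python
# The "1 in 100 in any given year" probability compounds over time.
# Probability of at least one 100-year flood in n years: 1 - 0.99**n.
for years in (1, 10, 30, 100):
    p = 1 - 0.99 ** years
    print(f"{years:>3} years: {p:.1%} chance of at least one such flood")
# 30 years (a typical mortgage) already gives roughly a 26% chance,
# which is why the term misleads if read as "safe for 100 years".
```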
The Rivest-Shamir-Adleman (RSA) algorithm is BEST suited for which of the following operations?
Bulk data encryption and decryption
One-way secure hashing for user and message authentication
Secure key exchange for symmetric cryptography
Creating digital checksums for message integrity
The Rivest-Shamir-Adleman (RSA) algorithm is best suited for secure key exchange for symmetric cryptography. RSA is an asymmetric algorithm whose operations are computationally expensive, which makes it impractical for bulk data encryption and decryption; instead, it is typically used to protect a small piece of data, such as a symmetric session key that then encrypts the bulk data (hybrid encryption). RSA is not a one-way hashing function, so it does not perform secure hashing or create checksums by itself; hash functions such as SHA-2 fill those roles, although RSA can sign a hash value to produce a digital signature.
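The following sketch illustrates this hybrid pattern with the Python cryptography library: AES-GCM encrypts the bulk data, and RSA-OAEP protects only the small session key; key sizes and messages are illustrative.

```python
# Sketch of RSA's typical role: securely exchanging a symmetric key
# (hybrid encryption). Bulk data is encrypted with fast symmetric AES;
# RSA-OAEP protects only the small AES session key in transit.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: create a session key, encrypt the bulk data symmetrically,
# and wrap the session key with the recipient's RSA public key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
bulk_ct = AESGCM(session_key).encrypt(nonce, b"large message body", None)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = recipient_key.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key with the RSA private key, then decrypt.
unwrapped = recipient_key.decrypt(wrapped, oaep)
assert AESGCM(unwrapped).decrypt(nonce, bulk_ct, None) == b"large message body"
```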
A security professional has been requested by the Board of Directors and Chief Information Security Officer (CISO) to perform an internal and external penetration test. What is the BEST course of action?
Review corporate security policies and procedures
Review data localization requirements and regulations
Configure a Wireless Access Point (WAP) with the same Service Set Identifier (SSID)
With notice to the organization, perform an external penetration test first, then an internal test
A penetration test is a security assessment that simulates a real-world attack on a system or network to identify and exploit vulnerabilities that may compromise its security. An internal penetration test is performed from within the system or network, from the perspective of an authorized user or insider; an external penetration test is performed from outside, from the perspective of an unauthorized user or outsider. The best course of action is to review the corporate security policies and procedures before performing the test. These documents define the security goals, objectives, standards, and guidelines of the organization and specify the roles, responsibilities, and expectations of the security personnel and stakeholders. Reviewing them helps the security professional understand the scope, objectives, and methodology of the penetration test, ensure the test is aligned with the organization’s security requirements and compliance obligations, and obtain the authorization, approval, and consent needed to perform the test legally and ethically. Reviewing data localization requirements and regulations, which govern where data may be collected, stored, and processed across jurisdictions, is relevant to data protection and privacy but is not the first step before a penetration test. Configuring a Wireless Access Point (WAP) with the same Service Set Identifier (SSID) merely sets up a wireless network name and has nothing to do with performing a penetration test. Performing an external penetration test first, with notice to the organization, and then an internal test is one possible way to sequence the work, and giving notice is important to avoid confusion or disruption of normal operations, but the policies and procedures must be reviewed and authorization obtained first; the order and method of testing should follow the corporate security policies and the best practices of the penetration testing industry. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 291; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, page 353.
A security professional should ensure that clients support which secondary algorithm for digital signatures when Secure/Multipurpose Internet Mail Extensions (S/MIME) is used?
Triple Data Encryption Standard (3DES)
Advanced Encryption Standard (AES)
Digital Signature Algorithm (DSA)
Rivest-Shamir-Adleman (RSA)
S/MIME is a standard for secure email that uses asymmetric encryption and digital signatures. S/MIME supports several algorithms for digital signatures, but the most common ones are RSA and DSA. RSA is a more versatile algorithm that can be used for both encryption and digital signatures, while DSA is designed only for digital signatures. RSA is also more widely supported by email clients and servers than DSA. Therefore, a security professional should ensure that clients support RSA as a secondary algorithm for digital signatures when S/MIME is used, in case the primary algorithm is not available or compatible. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Cryptography and Symmetric Key Algorithms, page 245. CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 3: Security Architecture and Engineering, Question 13.
A security professional should consider the protection of which of the following elements FIRST when developing a defense-in-depth strategy for a mobile workforce?
Network perimeters
Demilitarized Zones (DMZ)
Databases and back-end servers
End-user devices
Defense-in-depth is a security strategy that employs multiple layers of security controls and mechanisms to protect the system or network from various types of attacks. A mobile workforce is a group of employees or users who work remotely or outside the organization’s physical premises, using mobile devices such as laptops, tablets, or smartphones. A security professional should consider the protection of the end-user devices first when developing a defense-in-depth strategy for a mobile workforce, as these devices are the most vulnerable and exposed to various threats, such as theft, loss, malware, phishing, or unauthorized access. The end-user devices should be protected by security controls and mechanisms such as encryption, authentication, authorization, antivirus, firewall, VPN, device management, backup, and recovery. Network perimeters, demilitarized zones, and databases and back-end servers are also important elements to protect in a defense-in-depth strategy, but they are not the first priority for a mobile workforce, as they are more related to the organization’s internal network and infrastructure. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Secure Network Architecture and Securing Network Components, page 343; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 4: Communication and Network Security, Question 4.5, page 184.
Which of the following is the top barrier for companies to adopt cloud technology?
Migration period
Data integrity
Cost
Security
The top barrier for companies to adopt cloud technology is security. Cloud technology delivers computing resources or services over the internet, such as servers, storage, databases, networks, applications, and analytics, and offers benefits such as cost reduction, scalability, flexibility, and efficiency. Security, however, is the most critical and complex issue companies face when moving to or using the cloud: it involves protecting data, systems, and users from unauthorized or malicious access, modification, or disruption; complying with legal, regulatory, and contractual obligations; assessing and verifying the security posture, capabilities, and controls of the cloud provider, the cloud service, and the cloud customer; and implementing, managing, and monitoring security policies, procedures, and measures for the cloud environment. Migration period, data integrity, and cost are relevant concerns but not the top barrier. The migration period is the time it takes to transfer data, systems, or applications from an on-premises or legacy environment to the cloud. Data integrity is the assurance that data remains accurate, complete, and consistent, and is not corrupted, altered, or lost. Cost is the money and resources spent in adopting and using cloud technology. These issues can generally be addressed through proper planning, testing, and optimization. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 287.
When reviewing vendor certifications for handling and processing of company data, which of the following is the BEST Service Organization Controls (SOC) certification for the vendor to possess?
SOC 1 Type 1
SOC 2 Type 1
SOC 2 Type 2
SOC 3
The best Service Organization Controls (SOC) certification for the vendor to possess for handling and processing of company data is SOC 2 Type 2. SOC is a framework that defines the standards and criteria for the reporting and auditing of the internal controls and processes of a service organization, such as a cloud service provider, that affect the security, availability, processing integrity, confidentiality, or privacy of the information and systems of the user entities, such as the customers or clients of the service organization. SOC 2 Type 2 is a certification that indicates that the service organization has undergone an independent audit that evaluates and verifies the design and operating effectiveness of its internal controls and processes, based on the Trust Services Criteria (TSC) of security, availability, processing integrity, confidentiality, or privacy, over a period of time, usually six to twelve months. SOC 2 Type 2 is therefore the best SOC certification for the vendor to possess, as it provides the highest level of assurance and transparency for the security, reliability, and compliance of the vendor’s services. References: CISSP CBK, Fifth Edition, Chapter 2, page 90; CISSP Practice Exam – FREE 20 Questions and Answers, Question 20.
According to the (ISC)2 ethics canon “act honorably, honestly, justly, responsibly, and legally,” which order should be used when resolving conflicts?
Public safety and duties to principals, individuals, and the profession
Individuals, the profession, and public safety and duties to principals
Individuals, public safety and duties to principals, and the profession
The profession, public safety and duties to principals, and individuals
According to the (ISC)2 ethics canon “act honorably, honestly, justly, responsibly, and legally," the order that should be used when resolving conflicts is public safety and duties to principals, individuals, and the profession. The (ISC)2 ethics canon is a set of ethical principles and guidelines that govern the professional and personal conduct of the (ISC)2 members and certification holders. The (ISC)2 ethics canon states that the (ISC)2 members and certification holders should act honorably, honestly, justly, responsibly, and legally, and that they should advance and protect the profession. The (ISC)2 ethics canon also provides a hierarchy of obligations that the (ISC)2 members and certification holders should follow when resolving conflicts or dilemmas that may arise from their professional or personal activities. The hierarchy of obligations is as follows: public safety and duties to principals, individuals, and the profession. Public safety is the highest obligation, and it refers to the protection of the health, welfare, and security of the general public from any harm or danger. Duties to principals are the second highest obligation, and they refer to the loyalty, fidelity, and honesty that the (ISC)2 members and certification holders owe to their employers, clients, or customers. Individuals are the third highest obligation, and they refer to the respect, dignity, and privacy that the (ISC)2 members and certification holders should show to other people, such as colleagues, peers, or users. The profession is the lowest obligation, and it refers to the advancement and protection of the information security profession and its reputation, standards, and ethics. The other options are not the correct order of the hierarchy of obligations according to the (ISC)2 ethics canon. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 35.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. Asymmetric CAK challenge-response works as follows: the reader generates a fresh random challenge and sends it to the card; the card signs the challenge with the private key held in its secure chip and returns the signature along with its certificate; the reader validates the certificate and verifies the signature using the card’s public key, which proves that the card possesses the genuine private key.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
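A minimal sketch of the round trip is shown below, using an ECDSA key pair from the Python cryptography library to stand in for the card's CAK; certificate validation is omitted for brevity.

```python
# Sketch of the challenge-response round trip. The card's private key
# never leaves the card; only a signature over a fresh random challenge
# does, so a cloned card without the key cannot answer correctly.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

card_key = ec.generate_private_key(ec.SECP256R1())  # lives inside the card
card_public = card_key.public_key()                 # published via certificate

# Reader: issue a fresh, unpredictable challenge (prevents replay).
challenge = os.urandom(32)

# Card: sign the challenge with its private CAK.
signature = card_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Reader: verify against the public key from the card's certificate.
try:
    card_public.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("card is genuine")
except InvalidSignature:
    print("possible clone: signature invalid")
```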
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
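A tiny sketch of the audit-logging side of individual accountability follows; the field names and asset identifiers are illustrative.

```python
# Tiny sketch of audit logging for individual accountability: every
# access to the asset is recorded with who, what, when, and the outcome.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("audit")

def log_access(user: str, asset: str, action: str, allowed: bool) -> None:
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,               # ties the event to one individual
        "asset": asset,
        "action": action,
        "result": "allowed" if allowed else "denied",
    }))

log_access("jdoe", "hr-payroll-db", "read", allowed=True)
```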
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is an inference control used in statistical databases; it sets a floor on the number of records a query must aggregate, which helps protect classified or sensitive data but does not itself affect the data's classification.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
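Many IDaaS providers expose SCIM 2.0 (RFC 7644) for provisioning; the hedged sketch below creates a contractor account through such an interface, with a placeholder tenant URL and token.

```python
# Sketch of provisioning a contractor account through an IDaaS provider's
# SCIM 2.0 interface (RFC 7644). The base URL and token are placeholders.
import requests

SCIM_BASE = "https://idaas.example.com/scim/v2"   # hypothetical tenant
headers = {"Authorization": "Bearer <provisioning-token>",
           "Content-Type": "application/scim+json"}

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "contractor.jsmith",
    "name": {"givenName": "Jane", "familyName": "Smith"},
    "active": True,
}

resp = requests.post(f"{SCIM_BASE}/Users", json=new_user, headers=headers)
print(resp.status_code)   # 201 Created on success

# Deprovisioning at contract end is a PATCH setting "active": false,
# which is what lets the IT staff offload the account backlog.
```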
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as reducing the attack surface of the system, limiting the damage that a compromised account or process can cause, preventing privilege escalation, and simplifying auditing and accountability.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as ensuring consistent and validated results, preventing users from constructing ad hoc queries against the raw data, enforcing the data owner’s access restrictions at the query level, and reducing the risk of injection or inference attacks.
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
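As a hedged illustration of this control, the sketch below (using Python's built-in sqlite3 module with an invented employees table) exposes only a named, pre-approved aggregate query and rejects anything else; the HAVING clause additionally suppresses groups so small that an average would reveal an individual salary. It is a minimal sketch of the idea, not a production access-control layer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (name TEXT, department TEXT, salary REAL);
    INSERT INTO employees VALUES
        ('Alice', 'Engineering', 95000),
        ('Bob',   'Engineering', 105000),
        ('Carol', 'Sales',       70000);
""")

# The only statements users may run; users never supply raw SQL.
# HAVING COUNT(*) >= 2 hides groups where the average equals one salary.
PREDEFINED_QUERIES = {
    "avg_salary_by_department": (
        "SELECT department, AVG(salary) FROM employees "
        "GROUP BY department HAVING COUNT(*) >= 2"
    ),
}

def run_predefined(name):
    sql = PREDEFINED_QUERIES.get(name)
    if sql is None:
        raise PermissionError(f"{name!r} is not an approved query")
    return conn.execute(sql).fetchall()

print(run_predefined("avg_salary_by_department"))  # [('Engineering', 100000.0)]
# run_predefined("SELECT salary FROM employees")   # -> PermissionError
```

Note how the one-person Sales department is filtered out entirely, since its "average" would disclose Carol's exact salary.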
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential works as follows: the user first authenticates to a credential management system using the smart card and its PIN; a new key pair is then generated on the mobile device; the credential management system issues a certificate for that key pair that is bound to the same identity as the smart card certificate; and the resulting derived credential on the device is protected by a PIN or a biometric factor.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
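The device-side enrollment step can be pictured with a short sketch. The following uses the third-party Python cryptography package to generate a device-resident key pair and a certificate signing request; the subject name is invented, and the preceding smart-card authentication to the credential management system is assumed to have already happened, since that exchange is specific to each issuer.

```python
# Minimal sketch of device-side enrollment for a derived credential,
# assuming the user already authenticated with the smart card (PIV/CAC).
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# 1. Generate a new key pair on the mobile device; it never leaves the device.
device_key = ec.generate_private_key(ec.SECP256R1())

# 2. Build a certificate signing request bound to the same identity as the
#    smart card certificate (the common name here is illustrative only).
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "jane.doe@agency.example"),
    ]))
    .sign(device_key, hashes.SHA256())
)

# 3. The PEM-encoded CSR would be sent to the credential management system,
#    which issues the derived credential certificate.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```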
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as enabling single sign-on (SSO) across organizational boundaries, reducing the administrative overhead of account management, and improving the user experience and productivity.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal, who is the user that requests access to a resource or service; the identity provider (IdP), which authenticates the principal and issues the assertions; and the service provider (SP), which consumes the assertions and grants or denies the access.
SAML works as follows: the user requests access to a resource or service at the SP; the SP redirects the user to the IdP with an authentication request; the IdP authenticates the user and returns a signed assertion to the SP; and the SP validates the assertion and grants or denies the access accordingly.
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
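To make the assertion format concrete, here is a hedged sketch that parses a minimal, non-normative SAML 2.0 assertion with Python's standard xml.etree.ElementTree; the issuer, subject, and attribute values are invented for illustration, and a real assertion would also carry a digital signature that the service provider must validate.

```python
import xml.etree.ElementTree as ET

# A skeletal, unsigned SAML 2.0 assertion (illustrative values only).
ASSERTION = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_example" Version="2.0" IssueInstant="2024-01-01T00:00:00Z">
  <saml:Issuer>https://idp.manufacturer.example</saml:Issuer>
  <saml:Subject><saml:NameID>alice@manufacturer.example</saml:NameID></saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="role">
      <saml:AttributeValue>purchasing</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
root = ET.fromstring(ASSERTION)
print("issuer :", root.findtext("saml:Issuer", namespaces=NS))
print("subject:", root.findtext("saml:Subject/saml:NameID", namespaces=NS))
for attr in root.findall(".//saml:Attribute", NS):
    print("attr   :", attr.get("Name"), "=",
          attr.findtext("saml:AttributeValue", namespaces=NS))
```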
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. A continuous information security monitoring program can provide several benefits, such as the timely detection of security incidents and anomalies, the ongoing verification that the security controls remain effective, and the support of risk-based decision making.
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because it can ensure that the continuous information security monitoring program is holistic and comprehensive, and that it covers all the aspects and elements of the system or network security. People, process, and technology are the three pillars of a continuous information security monitoring program, and they represent the following: people are the roles, responsibilities, skills, and awareness of the staff who perform and support the monitoring; process is the set of policies, procedures, and workflows that govern how the monitoring is planned, executed, and improved; and technology is the set of tools and systems that collect, analyze, and report the security data and information.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
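The "collecting and correlating events" piece, although only one part of the program, is easy to sketch. The toy Python below (invented log data, with a fixed window and threshold chosen for illustration) flags a burst of failed logins, the kind of anomaly a monitoring pipeline would raise for the people and processes to act on.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 5  # failed attempts within WINDOW that trigger an alert

def failed_login_bursts(events):
    """events: iterable of (timestamp, user, outcome) tuples."""
    failures = defaultdict(list)
    for ts, user, outcome in events:
        if outcome == "FAIL":
            failures[user].append(ts)
    alerts = []
    for user, times in failures.items():
        times.sort()
        for i, start in enumerate(times):
            burst = [t for t in times[i:] if t - start <= WINDOW]
            if len(burst) >= THRESHOLD:
                alerts.append((user, start, len(burst)))
                break
    return alerts

sample = [(datetime(2024, 1, 1, 9, 0, s), "alice", "FAIL") for s in range(0, 50, 5)]
sample.append((datetime(2024, 1, 1, 9, 5), "bob", "OK"))
print(failed_login_bursts(sample))  # alice's burst of 10 failures is flagged
```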
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster. A BCP/DRP can provide several benefits, such as minimizing the downtime and the financial losses, protecting the organization’s reputation and stakeholder confidence, meeting the legal and regulatory obligations, and providing clear guidance and direction to the staff during a crisis.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it can ensure that the organization and its staff have a clear and consistent guidance and direction on how to respond and act during a disruption or disaster, and avoid any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged. A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential business functions. Insurance against litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a guarantee or protection that the organization will not face any legal or regulatory consequences or liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and may have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it is not a prevention or avoidance of any damage or destruction of the organization’s assets or resources during a disruption or disaster, especially if the disruption or disaster is physical or natural. A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have to incur some costs or losses.
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of the IT systems or data. An incident response is a process that involves detecting, analyzing, containing, eradicating, recovering, and learning from an incident, using various methods and tools. An incident response can provide several benefits, such as limiting the damage and the cost of the incident, restoring the normal operations as quickly as possible, preserving the evidence for the investigation, and preventing the recurrence of similar incidents.
Investigating all symptoms to confirm the incident is the first step in the incident response process, because it can ensure that the incident is verified and validated, and that the incident response is initiated and escalated. A symptom is a sign or an indication that an incident may have occurred or is occurring, such as an alert, a log, or a report. Investigating all symptoms to confirm the incident involves collecting and analyzing the relevant data and information from various sources, such as the IT systems, the network, the users, or the external parties, and determining whether an incident has actually happened or is happening, and how serious or urgent it is. Investigating all symptoms to confirm the incident can also help to prioritize the response effort, avoid wasting resources on false positives or false alarms, and trigger the appropriate escalation and notification procedures.
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with investigating all symptoms to confirm the incident. Determining the cause of the incident is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the root cause and source of the incident are identified and analyzed, and that the incident response is directed and focused. Determining the cause of the incident involves examining and testing the affected IT systems and data, and tracing and tracking the origin and path of the incident, using various techniques and tools, such as forensics, malware analysis, or reverse engineering. Determining the cause of the incident can also help to select the appropriate containment and eradication measures, and to prevent the incident from recurring.
Disconnecting the system involved from the network is a step that should be done along with investigating all symptoms to confirm the incident, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the incident response is conducted in a safe and controlled environment. Disconnecting the system involved from the network can also help to stop the incident from spreading to or from other systems, and to preserve the state of the system for the forensic analysis.
Isolating and containing the system involved is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the incident is confined and restricted, and that the incident response is continued and maintained. Isolating and containing the system involved involves applying and enforcing the appropriate security measures and controls to limit or stop the activity and impact of the incident on the IT systems and data, such as firewall rules, access policies, or encryption keys. Isolating and containing the system involved can also help to minimize the impact on the unaffected systems and data, and to prepare for the eradication and recovery phases.
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is a part of a BCP/DRP that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DRP should include various components, such as the recovery objectives, including the recovery time objective (RTO) and the recovery point objective (RPO), the recovery strategies and alternate sites, the roles and responsibilities of the recovery teams, and the procedures for restoring the IT systems and data.
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because it can ensure that the DRP is feasible and suitable, and that it can achieve the desired outcomes and objectives in a cost-effective and efficient manner. A cost/benefit analysis is a technique that compares the costs and benefits of different recovery strategies, and determines the optimal one that provides the best value for money. A business objective is a goal or a target that the organization wants to achieve through its IT systems and infrastructure, such as increasing the productivity, profitability, or customer satisfaction. A recovery strategy that is aligned with the cost/benefit analysis and business objectives can help to justify the investment in the recovery capabilities, avoid over- or under-spending on the recovery resources, and ensure that the recovery priorities reflect the business priorities.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change management can provide several benefits, such as reducing the risk of unintended outages or security regressions, maintaining accurate and current configuration records, and supporting the audit and compliance requirements.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator, and that may have been installed or executed without the user’s knowledge or consent. An unknown application may have various purposes, such as collecting or exfiltrating the sensitive data, providing a backdoor for remote access, delivering or spreading malware, or consuming the system resources for the benefit of an attacker.
Forensic analysis is a process that involves examining and investigating the system or network for any evidence or traces of the unknown application, such as its origin, nature, behavior, and impact. Forensic analysis can provide several benefits, such as identifying the scope and the impact of the compromise, attributing the activity to its source, and producing evidence that can support disciplinary or legal action.
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the forensic analysis is conducted in a safe and controlled environment. Isolating the system from the network can also help to prevent the unknown application from communicating with an external command-and-control server, from spreading to other systems, or from detecting and reacting to the analysis.
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services is a step that should be done after isolating the system from the network, because it can ensure that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step that should be done along with isolating the system from the network, because it can ensure that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system is a step that should be done after isolating the system from the network, because it can ensure that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should include information such as who collected or handled the evidence, when and where the evidence was collected, how the evidence was stored and transported, and who accessed the evidence at each step of the process.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
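Hashing before and after imaging is simple to demonstrate. The sketch below streams a file through SHA-256 with Python's standard hashlib; the evidence paths are hypothetical, and in practice the read would go through a hardware or software write blocker.

```python
import hashlib

def sha256_of_image(path, chunk_size=1 << 20):
    """Stream a (potentially huge) disk image and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical evidence paths; matching digests show the copy is bit-identical.
# original = sha256_of_image("/evidence/disk.img")
# working  = sha256_of_image("/evidence/disk-copy.img")
# assert original == working, "working copy does not match the original"
```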
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service is a service that provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems, using a single identity provider (IdP). A third-party identity service can offer various benefits, such as enabling single sign-on (SSO) across multiple applications or systems, centralizing the enforcement of the IAM policies and controls, and reducing the development and maintenance effort for each application or system.
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes, by reducing the number of IdPs and IAM systems that are involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency, redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios of using a third-party identity service. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant for the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service, to enable the SSO or federation for the users. Web based logon is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service uses a web-based protocol, such as SAML or OAuth, to facilitate the SSO or federation for the users, by redirecting them to a web-based logon page, where they can enter their credentials or consent. Automated account management is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service provides the IAM functions, such as provisioning, deprovisioning, or updating, for the user accounts and access rights, using an automated or self-service mechanism, such as SCIM or JIT.
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. ISCM can provide several benefits, such as maintaining the ongoing visibility into the effectiveness of the security controls, enabling the early detection of deviations or incidents, and providing timely input for risk-based decisions.
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
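One way to picture volatility-driven scheduling is a simple mapping from a control's volatility rating to a monitoring interval, as in the hedged Python sketch below; the ratings and intervals are invented for illustration and would come from the organization's ISCM strategy in practice.

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    volatility: str  # "high" | "medium" | "low" (illustrative ratings)

# Hypothetical intervals: volatile controls are sampled far more often.
INTERVAL_HOURS = {"high": 1, "medium": 24, "low": 24 * 7}

controls = [
    Control("firewall ruleset", "high"),
    Control("OS patch level", "medium"),
    Control("data center door locks", "low"),
]

for c in controls:
    print(f"{c.name}: monitor every {INTERVAL_HOURS[c.volatility]} hour(s)")
```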
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not feasible or necessary to monitor all security controls at the same and constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before and after each change of the control is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not sufficient or timely to monitor the security control only when there is a change of the security control, and not during the normal operation of the security control. Monitoring the security control only before and after each change might cause the ISCM solutions to miss or ignore the security status, events, and activities that occur between the changes of the security control, and might delay or hinder the ISCM solutions from detecting and responding to the security issues or incidents that affect the security control. Only during system implementation and decommissioning is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages of the system or network lifecycle. Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or a web application on the internet. A Web hosting solution can offer various benefits, such as the scalability of the resources on demand, the professional maintenance and support of the infrastructure, and the high availability of the hosted website or web application.
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA can include various components, such as the service level indicators and objectives (such as the uptime, the response time, or the throughput), the methods for measuring and reporting the service levels, and the remedies or penalties for the missed targets.
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the Web hosting solution quality, performance, and availability, and to identify and address any issues or risks in the Web hosting solution.
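A service level objective only becomes a performance indicator once it can be computed and compared against the target. The fragment below is a minimal sketch, assuming an invented 99.9% monthly uptime objective and a measured downtime figure.

```python
# Hypothetical SLA check: measured monthly uptime against a 99.9% objective.
SLA_TARGET = 0.999
MINUTES_IN_MONTH = 30 * 24 * 60

downtime_minutes = 50  # from monitoring; invented for illustration
uptime = 1 - downtime_minutes / MINUTES_IN_MONTH

print(f"uptime = {uptime:.5f}",
      "OK" if uptime >= SLA_TARGET else "SLA BREACH")  # -> SLA BREACH
```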
The other options are not the most probable causes for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, but rather the factors that could affect or improve the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution is a factor that could affect the ability of the organization to analyze and utilize the data and information from the Web hosting solution, such as the web traffic, behavior, or conversion. A BI solution is a system that involves the collection, integration, processing, and presentation of the data and information from various sources, such as the Web hosting solution, to support the decision making and planning of the organization. However, absence of a BI solution is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the analysis or usage of the performance indicators for the Web hosting solution. Inadequate cost modeling is a factor that could affect the ability of the organization to estimate and optimize the cost and value of the Web hosting solution, such as the web hosting fees, maintenance costs, or return on investment. A cost model is a tool or a method that helps the organization to calculate and compare the cost and value of the Web hosting solution, and to identify and implement the best or most efficient Web hosting solution. However, inadequate cost modeling is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the estimation or optimization of the cost and value of the Web hosting solution. Improper deployment of the Service-Oriented Architecture (SOA) is a factor that could affect the ability of the organization to design and develop the Web hosting solution, such as the web services, components, or interfaces. A SOA is a software architecture that involves the modularization, standardization, and integration of the software components or services that provide the functionality or logic of the Web hosting solution. A SOA can offer various benefits, such as the modularity, reusability, and interoperability of the services.
However, improper deployment of the SOA is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the design or development of the Web hosting solution.
What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost-effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are: a mirror site, which is a fully redundant, real-time duplicate of the primary site; a hot site, which is fully equipped and configured and can be operational within hours; a warm site, which has the hardware, network connectivity, and some of the software in place but requires the restoration of data and configuration, and can typically be operational within a day; and a cold site, which provides only the basic space and utilities and can take days or weeks to become operational.
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are not the most cost effective solutions for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, but rather solutions that are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to invest a lot of money on the DR site equipment, software, and services, and to pay for the ongoing operational and maintenance costs. A hot site may be more suitable for the organization’s systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to duplicate its entire primary site, with the same hardware, software, data, and applications, and to keep them online and synchronized at all times. A mirror site may be more suitable for the organization’s systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is a solution that is too slow for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to spend a lot of time and effort on the DR site installation, configuration, and restoration, and to rely on other sources of backup data and applications. A cold site may be more suitable for the organization’s systems that can be unavailable for more than a few days or weeks, or that have very low criticality and priority.
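The trade-off the question is testing can be expressed as a tiny selection rule: among site types whose typical recovery time meets the maximum tolerable outage, pick the cheapest. The figures below are illustrative orders of magnitude, not authoritative recovery times or costs.

```python
# Illustrative recovery times (hours) and relative costs per DR site type.
SITE_RTO_HOURS = {"mirror": 0, "hot": 1, "warm": 24, "cold": 24 * 7}
SITE_RELATIVE_COST = {"mirror": 4, "hot": 3, "warm": 2, "cold": 1}

def cheapest_adequate_site(max_outage_hours):
    candidates = [s for s, rto in SITE_RTO_HOURS.items() if rto <= max_outage_hours]
    return min(candidates, key=SITE_RELATIVE_COST.get)

print(cheapest_adequate_site(24))  # 'warm' -- matches the answer above
print(cheapest_adequate_site(2))   # 'hot'
```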
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios. Business continuity testing can provide several benefits, such as verifying the assumptions and dependencies of the plans, familiarizing the staff with their roles and responsibilities, and uncovering the gaps or weaknesses before a real disruption occurs.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Some of the common types are: a walkthrough, which is a structured review and discussion of the plans; a simulation, which is a scenario-driven exercise that mimics a disruption without touching live operations; a parallel test, which activates the alternate site or system while the primary site continues to operate; and a full interruption test, which shuts down the primary site and transfers operations to the alternate site.
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not the types of business continuity tests that include assessment of resilience to internal and external risks without endangering live operations, but rather types that have other objectives or effects. Walkthrough is a type of business continuity test that does not include assessment of resilience to internal and external risks, but rather a review and discussion of the BCP and DRP, without any actual testing or practice. Parallel is a type of business continuity test that does not endanger live operations, but rather maintains them, while activating and operating the alternate site or system; however, it exercises the recovery capability rather than assessing resilience to a broad range of risks. White box is not a type of business continuity test at all, but a software testing approach in which the tester has full knowledge of the internal structure of the system under test. (A full interruption test, by contrast, does endanger live operations, by shutting them down and transferring them to the alternate site or system.)
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is a part of a BCP/DRP that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster. A BCP should include various components, such as the business impact analysis (BIA), the recovery strategies for the critical functions, the roles and responsibilities of the continuity teams, and the communication and escalation procedures.
A BCP is considered to be valid when it has been validated by realistic exercises, because it can ensure that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a type of testing, training, and exercises that involve performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can provide several benefits, such as verifying the feasibility and effectiveness of the BCP, training the staff in their roles and responsibilities, and identifying the gaps or weaknesses that need to be corrected.
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties that are involved in developing or approving a BCP. When it has been validated by the Business Continuity (BC) manager is not a criterion for considering a BCP to be valid, but rather a step that is involved in developing a BCP. The BC manager is the person who is responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes, and ensuring that they meet the BCP standards and objectives. However, the validation by the BC manager is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by the board of directors is not a criterion for considering a BCP to be valid, but rather a party that is involved in approving a BCP. The board of directors is the group of people who are elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board of directors can approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the necessary resources and funds for the BCP. However, the approval by the board of directors is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by all threat scenarios is not a criterion for considering a BCP to be valid, but rather an unrealistic or impossible expectation for validating a BCP. A threat scenario is a description or a simulation of a possible or potential disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP’s performance and effectiveness in responding and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat scenarios, as there are too many or unknown threat scenarios that might occur, and some threat scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, and not by all threat scenarios.
Which of the following MUST be done when promoting a security awareness program to senior management?
Show the need for security; identify the message and the audience
Ensure that the security presentation is designed to be all-inclusive
Notify them that their compliance is mandatory
Explain how hackers have enhanced information security
The most important thing to do when promoting a security awareness program to senior management is to show the need for security; identify the message and the audience. This means that you should demonstrate how security awareness can benefit the organization, reduce risks, and align with the business goals. You should also tailor your message and your audience according to the specific security issues and challenges that your organization faces. Ensuring that the security presentation is designed to be all-inclusive, notifying them that their compliance is mandatory, or explaining how hackers have enhanced information security are not the most effective ways to promote a security awareness program, as they may not address the specific needs, interests, or concerns of senior management. References: Seven Keys to Success for a More Mature Security Awareness Program; 6 Metrics to Track in Your Cybersecurity Awareness Training Campaign
Which of the following does Temporal Key Integrity Protocol (TKIP) support?
Multicast and broadcast messages
Coordination of IEEE 802.11 protocols
Wired Equivalent Privacy (WEP) systems
Synchronization of multiple devices
Temporal Key Integrity Protocol (TKIP) supports multicast and broadcast messages by using a group temporal key that is shared by all the devices in the same wireless network. This key is used to encrypt and decrypt the messages that are sent to multiple recipients at once. TKIP also supports unicast messages by using a pairwise temporal key that is unique for each device and session. TKIP does not support coordination of IEEE 802.11 protocols, as it is a protocol itself that was designed to replace WEP. TKIP is compatible with WEP systems, but it does not support them, as it provides more security features than WEP. TKIP does not support synchronization of multiple devices, as it does not provide any clock or time synchronization mechanism. References: Temporal Key Integrity Protocol - Wikipedia; Wi-Fi Security: Should You Use WPA2-AES, WPA2-TKIP, or Both? - How-To Geek
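The group-key versus pairwise-key split can be modeled with a toy example. The sketch below uses Fernet from the third-party cryptography package purely as a stand-in cipher; it is not TKIP itself (which derives per-packet RC4 keys), only an illustration of how one shared group temporal key serves broadcast traffic while each station holds its own pairwise key for unicast.

```python
from cryptography.fernet import Fernet

# Toy model only: one group key shared by every station for broadcast,
# one pairwise key per station for unicast. Fernet stands in for the cipher.
group_key = Fernet(Fernet.generate_key())
pairwise_keys = {
    "station_1": Fernet(Fernet.generate_key()),
    "station_2": Fernet(Fernet.generate_key()),
}

broadcast = group_key.encrypt(b"network-wide configuration update")
unicast = pairwise_keys["station_1"].encrypt(b"message for station_1 only")

print(group_key.decrypt(broadcast))                 # any station can read this
print(pairwise_keys["station_1"].decrypt(unicast))  # only station_1 can
```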
What is the FIRST step in developing a security test and its evaluation?
Determine testing methods
Develop testing procedures
Identify all applicable security requirements
Identify people, processes, and products not in compliance
The first step in developing a security test and its evaluation is to identify all applicable security requirements. Security requirements are the specifications or criteria that define the security objectives, expectations, and needs of the system or network. Security requirements may be derived from various sources, such as business goals, user needs, regulatory standards, contractual obligations, or best practices. Identifying all applicable security requirements is essential to establish the scope, purpose, and criteria of the security test and its evaluation. Determining testing methods, developing testing procedures, and identifying people, processes, and products not in compliance are subsequent steps that should be done after identifying the security requirements, as they depend on the security requirements to be defined and agreed upon. References: Security Testing - Overview; Security Testing - Planning
What is the MOST effective countermeasure to a malicious code attack against a mobile system?
Sandbox
Change control
Memory management
Public-Key Infrastructure (PKI)
A sandbox is a security mechanism that isolates a potentially malicious code or application from the rest of the system, preventing it from accessing or modifying any sensitive data or resources. A sandbox can be implemented at the operating system, application, or network level, and can provide a safe environment for testing, debugging, or executing untrusted code. A sandbox is the most effective countermeasure to a malicious code attack against a mobile system, as it can prevent the code from spreading, stealing, or destroying any information on the device. Change control, memory management, and PKI are not directly related to preventing or mitigating malicious code attacks on mobile systems. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 507.
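A rudimentary sandbox can be sketched with standard-library tools: run the untrusted code in a child process with CPU and memory ceilings. This is a minimal, POSIX-only illustration, assuming a hypothetical untrusted.py; real mobile sandboxes also confine syscalls, filesystem, and network access.

```python
import resource
import subprocess
import sys

def run_sandboxed(script_path, timeout_s=5):
    """Run an untrusted Python script with CPU and memory limits (POSIX only)."""
    def apply_limits():
        # Cap CPU seconds and address space in the child before it executes.
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))
        resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024,) * 2)

    return subprocess.run(
        [sys.executable, "-I", script_path],  # -I: isolated interpreter mode
        preexec_fn=apply_limits,
        timeout=timeout_s + 1,
        capture_output=True,
        text=True,
    )

# result = run_sandboxed("untrusted.py")  # hypothetical script path
# print(result.returncode, result.stdout)
```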
When constructing an Information Protection Policy (IPP), it is important that the stated rules are necessary, adequate, and
flexible.
confidential.
focused.
achievable.
An Information Protection Policy (IPP) is a document that defines the objectives, scope, roles, responsibilities, and rules for protecting the information assets of an organization. An IPP should be aligned with the business goals and legal requirements, and should be communicated and enforced throughout the organization. When constructing an IPP, it is important that the stated rules are necessary, adequate, and achievable, meaning that they are relevant, sufficient, and realistic for the organization’s context and capabilities. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 23; CISSP For Dummies, 7th Edition, Chapter 1, page 15.
Which of the following is the MAIN reason that system re-certification and re-accreditation are needed?
To assist data owners in making future sensitivity and criticality determinations
To assure the software development team that all security issues have been addressed
To verify that security protection remains acceptable to the organizational security policy
To help the security team accept or reject new systems for implementation and production
The main reason that system re-certification and re-accreditation are needed is to verify that the security protection of the system remains acceptable to the organizational security policy, especially after significant changes or updates to the system. Re-certification is the process of reviewing and testing the security controls of the system to ensure that they are still effective and compliant with the security policy. Re-accreditation is the process of authorizing the system to operate based on the results of the re-certification. The other options are not the main reason for system re-certification and re-accreditation: assisting data owners in making future sensitivity and criticality determinations and helping the security team accept or reject new systems do not relate to verifying the security protection of an existing system, and assuring the software development team that all security issues have been addressed is not something re-certification and re-accreditation can provide. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 633; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 10, page 695.
In a financial institution, who has the responsibility for assigning the classification to a piece of information?
Chief Financial Officer (CFO)
Chief Information Security Officer (CISO)
Originator or nominated owner of the information
Department head responsible for ensuring the protection of the information
In a financial institution, the responsibility for assigning the classification to a piece of information belongs to the originator or nominated owner of the information. The originator is the person who creates or generates the information, and the nominated owner is the person who is assigned the accountability and authority for the information by the management. The originator or nominated owner is the best person to determine the value and sensitivity of the information, and to assign the appropriate classification level based on the criteria and guidelines established by the organization. The originator or nominated owner is also responsible for reviewing and updating the classification as needed, and for ensuring that the information is handled and protected according to its classification. References: Information Classification Policy; Information Classification and Handling Policy
Why must all users be positively identified prior to using multi-user computers?
To provide access to system privileges
To provide access to the operating system
To ensure that unauthorized persons cannot access the computers
To ensure that management knows what users are currently logged on
The main reason why all users must be positively identified prior to using multi-user computers is to ensure that unauthorized persons cannot access the computers. Positive identification is the process of verifying the identity of a user or a device before granting access to a system or a resource. Positive identification can be achieved by using one or more factors of authentication, such as something the user knows, has, or is. Positive identification can enhance the security and accountability of the system, and prevent unauthorized or malicious access. Providing access to system privileges, providing access to the operating system, and ensuring that management knows what users are currently logged on are not the primary reasons why all users must be positively identified prior to using multi-user computers, as they are more related to the functionality or administration of the system, rather than the security. References: CISSP For Dummies, 7th Edition, Chapter 4, page 89.
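Positive identification at logon usually reduces to verifying a knowledge factor against a stored, salted hash rather than a plaintext password. Below is a minimal sketch with the Python standard library, with PBKDF2 parameters chosen for illustration rather than taken from any particular policy.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor

def hash_password(password, salt=None):
    """Return (salt, PBKDF2-HMAC-SHA256 digest) for storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```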
At a MINIMUM, a formal review of any Disaster Recovery Plan (DRP) should be conducted
monthly.
quarterly.
annually.
bi-annually.
A formal review of any Disaster Recovery Plan (DRP) should be conducted at a minimum annually, or more frequently if there are significant changes in the business environment, the IT infrastructure, the security threats, or the regulatory requirements. A formal review involves evaluating the DRP against the current business needs, objectives, and risks, and ensuring that the DRP is updated, accurate, complete, and consistent. A formal review also involves testing the DRP to verify its effectiveness and feasibility, and identifying any gaps or weaknesses that need to be addressed. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 1035; CISSP For Dummies, 7th Edition, Chapter 10, page 351.
Which one of the following transmission media is MOST effective in preventing data interception?
Microwave
Twisted-pair
Fiber optic
Coaxial cable
Fiber optic is the most effective transmission media in preventing data interception, as it uses light signals to transmit data over thin glass or plastic fibers. Fiber optic cables are immune to electromagnetic interference, which means that they cannot be tapped or eavesdropped by external devices or signals. Fiber optic cables also have a low attenuation rate, which means that they can transmit data over long distances without losing much signal strength or quality. Microwave, twisted-pair, and coaxial cable are less effective transmission media in preventing data interception, as they use electromagnetic waves or electrical signals to transmit data over metal wires or air. These media are susceptible to interference, noise, or tapping, which can compromise the confidentiality or integrity of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 406; CISSP For Dummies, 7th Edition, Chapter 4, page 85.
What should happen when an emergency change to a system must be performed?
The change must be given priority at the next meeting of the change control board.
Testing and approvals must be performed quickly.
The change must be performed immediately and then submitted to the change board.
The change is performed and a notation is made in the system log.
In cases of emergency changes, the priority is to address the issue at hand immediately to prevent any potential impacts on the system or organization. After implementing the change, it should then be documented and submitted to the change control board for review and approval post-implementation. References: CISSP Official (ISC)2 Practice Tests, Chapter 7, page 187; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 346
In general, servers that are facing the Internet should be placed in a demilitarized zone (DMZ). What is the MAIN purpose of the DMZ?
Reduced risk to internal systems.
Prepare the server for potential attacks.
Mitigate the risk associated with the exposed server.
Bypass the need for a firewall.
According to the CISSP CBK Official Study Guide, the main purpose of the demilitarized zone (DMZ) is to reduce the risk to internal systems. A DMZ is a network segment located between the external network (such as the Internet) and the internal network (such as the intranet) that contains the servers exposed to the external network, such as web, email, or DNS servers. The DMZ acts as a buffer that prevents or limits direct communication between the external and internal networks, and the traffic crossing it is filtered and monitored by firewalls, routers, or other security devices. By placing Internet-facing servers in the DMZ, the organization reduces the risk that attacks originating from the external network, such as denial-of-service, malware, or hacking, reach the internal systems.
Preparing the server for potential attacks describes hardening the exposed server through measures such as encryption, authentication, and patching. Hardening increases the server's resistance to attack and is complemented by the DMZ, but it is not the reason the DMZ exists. Mitigating the risk associated with the exposed server describes reducing the impact of a successful attack on that server through backup, recovery, and contingency measures; the DMZ can support this as well, but protecting the exposed server itself is not its primary objective. Bypassing the need for a firewall is the opposite of the DMZ concept: a DMZ is typically defined and enforced by firewalls, and it relies on them to achieve its main purpose of reducing the risk to internal systems.
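To make the traffic-separation idea concrete, the following minimal sketch (all rules, host names, and ports are hypothetical, not taken from any referenced source) models the usual two-firewall DMZ policy: the outer firewall admits only traffic destined for published DMZ services, and the inner firewall permits only narrowly defined DMZ-to-internal flows, with everything else denied by default.

```python
# Minimal sketch of DMZ traffic filtering; all rules and addresses are hypothetical.

OUTER_RULES = [
    # (source, destination, port) flows the outer firewall allows from the Internet
    ("any", "dmz-web", 443),   # public HTTPS to the web server in the DMZ
    ("any", "dmz-mail", 25),   # inbound SMTP to the mail relay in the DMZ
]

INNER_RULES = [
    # Only specific DMZ-to-internal flows are permitted through the inner firewall.
    ("dmz-web", "internal-db", 5432),  # web tier to database tier
]

def allowed(rules, src, dst, port):
    """Return True if any rule matches the flow (default deny otherwise)."""
    for r_src, r_dst, r_port in rules:
        if r_src in ("any", src) and r_dst == dst and r_port == port:
            return True
    return False  # everything not explicitly allowed is blocked

# Internet traffic can reach the DMZ web server...
assert allowed(OUTER_RULES, "internet-host", "dmz-web", 443)
# ...but cannot reach internal systems directly, which is the point of the DMZ.
assert not allowed(OUTER_RULES, "internet-host", "internal-db", 5432)
```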
When writing security assessment procedures, what is the MAIN purpose of the test outputs and reports?
To force the software to fail and document the process
To find areas of compromise in confidentiality and integrity
To allow for objective pass or fail decisions
To identify malware or hidden code within the test results
According to the CISSP Official (ISC)2 Practice Tests, the main purpose of the test outputs and reports when writing security assessment procedures is to find areas of compromise in confidentiality and integrity. A security assessment evaluates the security posture and effectiveness of a system, network, or application by identifying and measuring the vulnerabilities, threats, and risks that may affect its security objectives. Security assessment procedures define how the assessment will be conducted: the scope, tools, techniques, criteria, and deliverables. The test outputs and reports are the results and documentation of the assessment, providing the evidence and analysis of the security findings. Their main purpose is to find areas of compromise in confidentiality and integrity, the two core security principles that protect data and systems from unauthorized access, disclosure, modification, or destruction; they may also reveal compromises of availability, accountability, authenticity, or non-repudiation where those principles are relevant to the system under assessment. The outputs are not meant to force the software to fail and document the process, although that can be a side effect of techniques such as penetration testing or fuzz testing; they are not meant to allow for objective pass or fail decisions, although they may include recommendations for improving the security posture and mitigating the risks; and they are not meant to identify malware or hidden code within the test results, although they may detect indicators of malicious or unauthorized activity.
The PRIMARY characteristic of a Distributed Denial of Service (DDoS) attack is that it
exploits weak authentication to penetrate networks.
can be detected with signature analysis.
looks like normal network activity.
is commonly confused with viruses or worms.
The primary characteristic of a Distributed Denial of Service (DDoS) attack is that it looks like normal network activity. A DDoS attack aims to disrupt or degrade the availability or performance of a system or service by flooding it with a high volume of traffic or requests from multiple distributed sources, typically compromised computers, devices, or networks coordinated by the attacker. Because each individual request resembles legitimate traffic, the attack is difficult to detect and to prevent: it is hard to distinguish malicious requests from authentic ones, and hard to block or filter the malicious traffic without also affecting legitimate users. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 115; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 172
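Because individual requests are indistinguishable from legitimate ones, defenders often fall back on volumetric heuristics. The sketch below (hypothetical thresholds and traffic, not from the cited references) flags sources whose request rate far exceeds a per-source limit, and in doing so illustrates the core weakness: a distributed attack spreads load across many sources so that each one stays under the threshold.

```python
from collections import Counter

# Hypothetical request log: (source_ip, timestamp_seconds) pairs.
requests = [("10.0.0.5", t) for t in range(60)] + \
           [(f"198.51.100.{i}", 30) for i in range(200)]  # burst from many sources

PER_SOURCE_THRESHOLD = 50  # flag any single source above this request count

counts = Counter(src for src, _ in requests)
flagged = [src for src, n in counts.items() if n > PER_SOURCE_THRESHOLD]

print("flagged sources:", flagged)  # catches only the single noisy host "10.0.0.5"
# The 200 distributed sources send one request each, so none is flagged,
# even though together they dominate the traffic -- the core DDoS difficulty.
```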
A security architect plans to reference a Mandatory Access Control (MAC) model for implementation. This indicates that which of the following properties are being prioritized?
Confidentiality
Integrity
Availability
Accessibility
According to the CISSP Official (ISC)2 Practice Tests, the property prioritized by a Mandatory Access Control (MAC) model is confidentiality: the assurance that data or information is accessible or disclosed only to authorized parties. A MAC model grants or denies access to an object based on the security labels of the subject and the object and on the security policy enforced by the system. A security label marks the classification, sensitivity, or clearance of a subject or object, such as top secret, secret, or confidential, and the security policy, such as the Bell-LaPadula model, defines how access decisions are made from those labels. Because access is decided by comparing clearance against classification, a MAC model chiefly ensures that information is not disclosed to, or leaked by, subjects with insufficient clearance, which is confidentiality. Integrity, the assurance that data is accurate, complete, and protected from unauthorized modification, can be supported by a label-based model (the Biba model applies the same mechanism to integrity), but it is not what a classic MAC deployment prioritizes, since MAC does not prevent modification by subjects at the same or higher levels, nor corruption from errors, failures, or accidents. Availability, the assurance that data remains accessible and usable by authorized parties, is likewise only incidentally supported: MAC does not protect against denial or disruption caused by attacks, failures, or disasters. Accessibility is not a security property at all; it is a usability property concerned with making data usable by people with different abilities, needs, or preferences.
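As a minimal sketch of the label comparison at the heart of a MAC decision, the following code implements the Bell-LaPadula confidentiality rules (simple security property: no read up; *-property: no write down). The lattice of levels is standard; the subject and object assignments are hypothetical.

```python
# Hypothetical clearance/classification lattice, lowest to highest.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_clearance, object_label):
    """Simple security property: a subject may read only at or below its level."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance, object_label):
    """*-property: a subject may write only at or above its level (no write down),
    so a high-clearance subject cannot leak data into a low-classification object."""
    return LEVELS[subject_clearance] <= LEVELS[object_label]

assert can_read("secret", "confidential")        # read down: allowed
assert not can_read("confidential", "secret")    # read up: denied
assert not can_write("secret", "confidential")   # write down: denied
assert can_write("confidential", "secret")       # write up: allowed
```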
The application of a security patch to a product previously validated at Common Criteria (CC) Evaluation Assurance Level (EAL) 4 would
require an update of the Protection Profile (PP).
require recertification.
retain its current EAL rating.
reduce the product to EAL 3.
Common Criteria (CC) is an international standard for evaluating the security of IT products and systems. Evaluation Assurance Level (EAL) is a numerical grade that indicates the level of assurance and rigor of the evaluation process. EAL ranges from 1 (lowest) to 7 (highest). A product that has been validated at EAL 4 has been methodically designed, tested, and reviewed, and provides a moderate level of independently assured security. The application of a security patch to a product previously validated at EAL 4 would require recertification, as the patch may introduce new vulnerabilities or affect the security functionality of the product. The recertification process would ensure that the patched product still meets the EAL 4 requirements and does not compromise the security claims of the original evaluation. Updating the Protection Profile (PP), retaining the current EAL rating, or reducing the product to EAL 3 are not valid options, as they do not reflect the impact of the security patch on the product’s security assurance.
An organization’s information security strategic plan MUST be reviewed
whenever there are significant changes to a major application.
quarterly, when the organization’s strategic plan is updated.
whenever there are major changes to the business.
every three years, when the organization’s strategic plan is updated.
An organization’s information security strategic plan must be reviewed whenever there are major changes to the business, such as mergers, acquisitions, divestitures, new products or services, new markets, new regulations, or new threats. These changes can affect the organization’s risk profile, security objectives, policies, procedures, and controls. Therefore, the information security strategic plan must be updated to align with the current business environment and ensure the protection of the organization’s information assets.
Which of the following questions can be answered using user and group entitlement reporting?
When a particular file was last accessed by a user
Change control activities for a particular group of users
The number of failed login attempts for a particular user
Where does a particular user have access within the network
User and group entitlement reporting is a process of collecting and analyzing the access rights and permissions of users and groups across the network. It can help answer questions such as where a particular user has access within the network, what resources are accessible by a particular group, and who has access to a particular resource. User and group entitlement reporting can also help identify and remediate excessive or inappropriate access rights, enforce the principle of least privilege, and comply with security policies and regulations.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 138; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, page 114
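A minimal sketch of an entitlement report (hypothetical users, groups, and resources): it inverts group memberships and per-resource access-control lists to answer "where does this user have access within the network?".

```python
# Hypothetical directory data: group memberships and per-resource ACLs.
groups = {"finance": {"alice", "bob"}, "it-admins": {"carol"}}
acls = {
    "payroll-share": {"finance"},
    "backup-server": {"it-admins"},
    "wiki": {"finance", "it-admins"},
}

def entitlements(user):
    """Return every resource the user can reach via any group membership."""
    user_groups = {g for g, members in groups.items() if user in members}
    return sorted(r for r, allowed in acls.items() if allowed & user_groups)

print(entitlements("alice"))  # ['payroll-share', 'wiki']
print(entitlements("carol"))  # ['backup-server', 'wiki']
```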
In which identity management process is the subject’s identity established?
Trust
Provisioning
Authorization
Enrollment
According to the CISSP CBK Official Study Guide, the identity management process in which the subject’s identity is established is enrollment. Enrollment registers a subject into an identity management system, such as a user into an authentication system or a device into a network. It is where the identity is established because it verifies and validates the subject’s identity, collects and stores the subject’s identity attributes (such as name, email, or biometrics), and issues the subject’s credentials (such as a username, password, or certificate), creating the identity record or profile that later enables access to the system or network. Trust is the degree of confidence one entity has in another; it may influence how rigorous the enrollment verification is and which attributes or credentials are required, but it does not itself establish identity. Provisioning creates, assigns, and configures the subject’s accounts and resources with the access rights and permissions required by the subject’s role and the security policies of the system or network; it follows enrollment and does not verify or validate identity. Authorization grants or denies the subject’s access to objects or resources based on the subject’s identity, role, or credentials and on the security rules; it likewise depends on an identity that has already been established. References: CISSP CBK Official Study Guide.
Which of the following is the MOST important element of change management documentation?
List of components involved
Number of changes being made
Business case justification
A stakeholder communication
The business case justification is crucial as it outlines the reasons for making the change, including the benefits, costs, and impacts on the organization. It helps stakeholders understand why the change is necessary and aids in gaining their support. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 457; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 868.
What type of test assesses a Disaster Recovery (DR) plan using realistic disaster scenarios while maintaining minimal impact to business operations?
Parallel
Walkthrough
Simulation
Tabletop
The type of test that assesses a Disaster Recovery (DR) plan using realistic disaster scenarios while maintaining minimal impact to business operations is a simulation. A DR plan defines the procedures an organization executes to recover or restore its critical functions after a disruptive event such as a fire, flood, or cyberattack. In a simulation test, the recovery team works through a realistic scenario that mimics such an event, exercising the plan in an environment that imitates, but does not interfere with, production systems and services. A simulation therefore verifies the effectiveness and efficiency of the DR plan, identifies issues that need correction, and drives improvements and updates to the plan, while keeping the impact on actual business operations minimal. By contrast, a walkthrough or tabletop exercise only reviews and discusses the plan without exercising it against a realistic environment, and a parallel test actually activates recovery systems alongside production, with correspondingly greater operational impact.
Which of the following is the PRIMARY reason for employing physical security personnel at entry points in facilities where card access is in operation?
To verify that only employees have access to the facility.
To identify present hazards requiring remediation.
To monitor staff movement throughout the facility.
To provide a safe environment for employees.
According to the CISSP CBK Official Study Guide, the primary reason for employing physical security personnel at entry points in facilities where card access is in operation is to provide a safe environment for employees. Physical security personnel are the human element of the physical security system; they guard, patrol, and monitor the premises and verify, identify, and authenticate the people entering it. Stationing them at entry points adds a layer of protection, and a human factor, on top of the card access system, deterring theft, vandalism, and violence and thereby supporting both the safety and the productivity of the workforce.
The other options describe benefits or by-products of posting guards rather than the primary reason. Verifying that only employees have access to the facility is a task the personnel may perform, using the card access system, biometrics, or visual checks, but it is a means to the safety objective rather than the objective itself. Identifying present hazards requiring remediation, such as fire or flood conditions and the evacuation, recovery, or contingency measures they demand, is something guards can contribute to with the help of sensors, alarms, or cameras, but it is not why they are posted at entry points. Monitoring staff movement throughout the facility, which can deter unauthorized access and misuse of the facility, is likewise a possible consequence of their presence, but again it is not the main or most important reason for employing them.
Which of the following information MUST be provided for user account provisioning?
Full name
Unique identifier
Security question
Date of birth
According to the CISSP CBK Official Study Guide, the information that must be provided for user account provisioning is a unique identifier. User account provisioning is the process of creating, managing, and deleting user accounts or identities in a system or network, and it underpins the identification, authentication, authorization, and accountability of those accounts. The unique identifier, such as a username, email address, or employee number, is the essential element of the account because it distinguishes the account from every other account in the system; without it, duplication, confusion, or collision of identities becomes possible, opening the door to impersonation, spoofing, or masquerading. A full name adds a human, personal touch and may be associated with the identifier, but names are not unique and so cannot serve as the distinguishing element. A security question supports account recovery and adds a verification layer in case a password or username is lost or compromised, but it is an optional safeguard rather than a required provisioning element. A date of birth is likewise descriptive information that may be linked to an account, but it is neither unique nor required to provision one. References: CISSP CBK Official Study Guide.
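A minimal provisioning sketch (hypothetical identity store and attribute names) showing why the unique identifier is the one mandatory field: account creation is keyed on it and rejects collisions, while every other attribute is optional.

```python
import uuid

accounts = {}  # hypothetical identity store keyed by unique identifier

def provision(unique_id=None, full_name=None, date_of_birth=None):
    """Create an account. Only the unique identifier is mandatory; generate one
    if the caller does not supply it, and refuse duplicates outright."""
    unique_id = unique_id or str(uuid.uuid4())
    if unique_id in accounts:
        raise ValueError(f"identifier collision: {unique_id!r} already provisioned")
    accounts[unique_id] = {"full_name": full_name, "date_of_birth": date_of_birth}
    return unique_id

uid = provision(unique_id="jsmith", full_name="John Smith")
provision()  # fine: optional attributes omitted, identifier auto-generated
try:
    provision(unique_id="jsmith")  # same identifier again
except ValueError as e:
    print(e)  # collision detected -- uniqueness is what provisioning depends on
```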
An organization regularly conducts its own penetration tests. Which of the following scenarios MUST be covered for the test to be effective?
Third-party vendor with access to the system
System administrator access compromised
Internal attacker with access to the system
Internal user accidentally accessing data
The scenario that must be covered for the penetration test to be effective is a third-party vendor with access to the system. A third-party vendor is an external organization that provides a service or product, such as a software developer, a cloud provider, or a payment processor. A vendor with access to the system is a realistic source of vulnerability and risk: it can introduce or expose weaknesses in the system's configuration, authentication, or encryption, and it can itself be compromised and exploited as a vector to reach the system and steal, modify, or delete data. Covering this scenario lets the test identify and assess the security gaps that arise from vendor access and drive the appropriate safeguards and countermeasures.
The other scenarios can make a test more comprehensive but are not the one that must be covered. A compromised system administrator account would show what an attacker could do with administrative privileges, such as creating, modifying, or deleting users, data, or settings, but it is a less common scenario and does not exercise the external access path the test focuses on. An internal attacker with legitimate or stolen access would expose insider risk, which is worth testing but again is not the third-party access path. An internal user accidentally accessing data is not a malicious or intentional scenario and poses a comparatively limited threat, so it is the least essential of the four.
How does a Host Based Intrusion Detection System (HIDS) identify a potential attack?
Examines log messages or other indications on the system.
Monitors alarms sent to the system administrator
Matches traffic patterns to virus signature files
Examines the Access Control List (ACL)
According to the CISSP All-in-One Exam Guide, a Host Based Intrusion Detection System (HIDS) identifies a potential attack by examining log messages or other indications on the system. This means that a HIDS is a type of intrusion detection system that monitors the activities and events that occur on a specific host, such as a server or a workstation, and analyzes them for signs of malicious or unauthorized behavior. A HIDS can examine various sources of data on the host, such as system logs, audit trails, registry entries, file system changes, network connections, and so on. A HIDS does not identify a potential attack by monitoring alarms sent to the system administrator, as this is a function of the intrusion detection system management console, which receives and displays the alerts generated by the HIDS. A HIDS does not identify a potential attack by matching traffic patterns to virus signature files, as this is a function of an antivirus software, which scans the incoming and outgoing data for known malware signatures. A HIDS does not identify a potential attack by examining the Access Control List (ACL), as this is a mechanism that defines the permissions and restrictions for accessing a resource, not a source of intrusion detection data.
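A minimal sketch of the log-examination approach a HIDS takes (the log lines and alert threshold are hypothetical): scan the host's own authentication log for indications such as repeated failed logins from one source.

```python
import re
from collections import Counter

# Hypothetical excerpt of a host authentication log.
log_lines = [
    "Mar 01 10:00:01 host sshd[101]: Failed password for root from 203.0.113.9",
    "Mar 01 10:00:03 host sshd[101]: Failed password for root from 203.0.113.9",
    "Mar 01 10:00:05 host sshd[101]: Failed password for root from 203.0.113.9",
    "Mar 01 10:01:00 host sshd[102]: Accepted password for alice from 192.0.2.4",
]

FAILED = re.compile(r"Failed password for (\S+) from (\S+)")
THRESHOLD = 3  # hypothetical: alert when one source fails this many times

failures = Counter(m.groups() for m in map(FAILED.search, log_lines) if m)
for (user, src), n in failures.items():
    if n >= THRESHOLD:
        # Repeated failures are a classic on-host indication of brute forcing.
        print(f"ALERT: {n} failed logins for {user!r} from {src}")
```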
By carefully aligning the pins in the lock, which of the following defines the opening of a mechanical lock without the proper key?
Lock pinging
Lock picking
Lock bumping
Lock bricking
The opening of a mechanical lock without the proper key by carefully aligning the pins in the lock is defined as lock picking. A mechanical lock is a device that secures an entry point, such as a door, a window, or a cabinet, by using a physical mechanism, such as a pin tumbler, a wafer, or a disc detainer. A mechanical lock requires a proper key to unlock it, which matches the configuration of the mechanism. Lock picking is a technique of manipulating the mechanism of the lock without the proper key, usually by using tools, such as picks, tension wrenches, or rakes. Lock picking can exploit the flaws or weaknesses of the lock, such as the variations, tolerances, or defects of the pins or the cylinder. Lock picking can be used for legitimate purposes, such as locksmithing, hobby, or sport, or for illegitimate purposes, such as burglary, espionage, or sabotage. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Security Engineering, p. 136; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Engineering, p. 297.
In order for a security policy to be effective within an organization, it MUST include
strong statements that clearly define the problem.
a list of all standards that apply to the policy.
owner information and date of last revision.
disciplinary measures for non-compliance.
In order for a security policy to be effective within an organization, it must include disciplinary measures for non-compliance. A security policy is a document that defines and communicates the security goals, objectives, and expectations of the organization, and that provides direction for its security activities, processes, and functions. Disciplinary measures for non-compliance are the actions or consequences the organization will impose on users or devices that violate or disregard the policy or its rules. They help make the policy effective because they deter behavior that could jeopardize or undermine the organization's security and enforce accountability and responsibility for it. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 18; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 26
Which of the following is the PRIMARY security concern associated with the implementation of smart cards?
The cards have limited memory
Vendor application compatibility
The cards can be misplaced
Mobile code can be embedded in the card
The primary security concern associated with the implementation of smart cards is that the cards can be misplaced, lost, stolen, or damaged, resulting in the compromise of the user’s identity, credentials, or data stored on the card. The other options are not the primary security concern, but rather secondary or minor issues. The cards have limited memory, which may affect the performance or functionality of the card, but not the security. Vendor application compatibility may affect the interoperability or usability of the card, but not the security. Mobile code can be embedded in the card, which may introduce malicious or unauthorized functionality, but this is a rare and sophisticated attack that requires physical access to the card and specialized equipment. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, p. 275; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, p. 348.
The application of which of the following standards would BEST reduce the potential for data breaches?
ISO 9000
ISO 20121
ISO 26000
ISO 27001
The standard that would best reduce the potential for data breaches is ISO 27001. ISO 27001 is an international standard that specifies the requirements and the guidelines for establishing, implementing, maintaining, and improving an information security management system (ISMS) within an organization. An ISMS is a systematic approach to managing the information security of the organization, by applying the principles of plan-do-check-act (PDCA) cycle, and by following the best practices of risk assessment, risk treatment, security controls, monitoring, review, and improvement. ISO 27001 can help reduce the potential for data breaches, as it can provide a framework and a methodology for the organization to identify, protect, detect, respond, and recover from the information security incidents or events that could compromise the confidentiality, integrity, or availability of the data or the information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 25; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 33
Which of the following is the PRIMARY concern when using an Internet browser to access a cloud-based service?
Insecure implementation of Application Programming Interfaces (API)
Improper use and storage of management keys
Misconfiguration of infrastructure allowing for unauthorized access
Vulnerabilities within protocols that can expose confidential data
The primary concern when using an Internet browser to access a cloud-based service is the vulnerabilities within protocols that can expose confidential data. Protocols are the rules and formats that govern the communication and exchange of data between systems or applications. Protocols can have vulnerabilities or flaws that can be exploited by attackers to intercept, modify, or steal the data. For example, some protocols may not provide adequate encryption, authentication, or integrity for the data, or they may have weak or outdated algorithms, keys, or certificates. When using an Internet browser to access a cloud-based service, the data may be transmitted over various protocols, such as HTTP, HTTPS, SSL, TLS, etc. If any of these protocols are vulnerable, the data may be compromised, especially if the data is sensitive or confidential. Therefore, it is important to use secure and updated protocols, as well as to monitor and patch any vulnerabilities. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 338; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 456.
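A minimal sketch (Python standard library only; the host name is an example) of checking which TLS protocol version a browser-style client actually negotiates with a service, one way to spot a weak or outdated protocol before trusting it with confidential data:

```python
import socket
import ssl

HOST = "example.com"  # stand-in for a cloud-based service endpoint

context = ssl.create_default_context()            # validates certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS versions

with socket.create_connection((HOST, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated protocol:", tls.version())   # e.g. 'TLSv1.3'
        print("cipher suite:", tls.cipher()[0])
        print("peer certificate subject:", tls.getpeercert()["subject"])
```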
Which of the following adds end-to-end security inside a Layer 2 Tunneling Protocol (L2TP) Internet Protocol Security (IPSec) connection?
Temporal Key Integrity Protocol (TKIP)
Secure Hash Algorithm (SHA)
Secure Shell (SSH)
Transport Layer Security (TLS)
According to the CISSP CBK Official Study Guide, the protocol that adds end-to-end security inside a Layer 2 Tunneling Protocol (L2TP) Internet Protocol Security (IPSec) connection is Transport Layer Security (TLS). L2TP establishes the VPN tunnel at Layer 2, the data-link layer of the OSI Reference Model, and IPSec secures that tunnel at Layer 3, the network layer, providing confidentiality, integrity, authentication, and non-repudiation for the traffic that crosses it through encryption, hashing, digital signatures, and key exchange. IPSec protection, however, exists only between the tunnel endpoints. TLS operates higher in the stack, securing the application session itself, so it protects the data from the original source to the final destination regardless of the intermediate systems the tunnel traverses; that is what makes it end-to-end security inside the L2TP/IPSec connection. Temporal Key Integrity Protocol (TKIP) is a wireless LAN key-management protocol used with WPA and has no role inside an L2TP/IPSec tunnel. Secure Hash Algorithm (SHA) is a hash function, an ingredient of integrity protection rather than an end-to-end security protocol. Secure Shell (SSH) secures remote terminal and file transfer sessions, not general application traffic inside the tunnel.
Data remanence refers to which of the following?
The remaining photons left in a fiber optic cable after a secure transmission.
The retention period required by law or regulation.
The magnetic flux created when removing the network connection from a server or personal computer.
The residual information left on magnetic storage media after a deletion or erasure.
Data remanence refers to the residual information left on magnetic storage media after a deletion or erasure. Data remanence is a security risk, as it may allow unauthorized or malicious parties to recover the deleted or erased data, which may contain sensitive or confidential information. Data remanence can be caused by the physical properties of the magnetic storage media, such as hard disks, floppy disks, or tapes, which may retain some traces of the data even after it is overwritten or formatted. Data remanence can also be caused by the logical properties of the file systems or operating systems, which may not delete or erase the data completely, but only mark the space as available or remove the pointers to the data. Data remanence can be prevented or reduced by using secure deletion or erasure methods, such as cryptographic wiping, degaussing, or physical destruction. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 443; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 855.
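To illustrate one secure-deletion approach, the following is a minimal Python sketch that overwrites a file's contents before unlinking it; the file path is a hypothetical example. Note that on solid-state drives, copy-on-write or journaling filesystems, and systems with backups, file-level overwriting may still leave remanent copies, which is why degaussing or physical destruction is preferred for highly sensitive media.

import os

def overwrite_and_delete(path, passes=3):
    """Overwrite a file with random bytes before deleting it.

    This reduces, but does not eliminate, data remanence: SSD wear
    leveling, filesystem journaling, and backups may retain copies.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite out to the device
    os.remove(path)

# Usage (hypothetical path):
# overwrite_and_delete("/tmp/sensitive_report.txt")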
Which of the following command line tools can be used in the reconnaissance phase of a network vulnerability assessment?
dig
ifconfig
ipconfig
nbtstat
Dig is a command line tool that can be used in the reconnaissance phase of a network vulnerability assessment. Dig stands for domain information groper, and it is used to query Domain Name System (DNS) servers and obtain information about domains, hosts, and records. Dig can help discover the network topology, the IP addresses, and the services running on the target network.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 411; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 365
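For illustration, the same kind of DNS reconnaissance can be scripted. The following is a minimal Python sketch that assumes the third-party dnspython library is installed; example.com is a placeholder, and such queries should only be run against domains the assessor is authorized to test.

# pip install dnspython
import dns.resolver

def basic_dns_recon(domain):
    # Query a few record types, roughly what `dig <domain> <type>` returns.
    for rtype in ("A", "MX", "NS", "TXT"):
        try:
            for rdata in dns.resolver.resolve(domain, rtype):
                print(domain, rtype, rdata)
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            print(domain, rtype, "no record")

basic_dns_recon("example.com")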
Which Web Services Security (WS-Security) specification negotiates how security tokens will be issued, renewed and validated? Click on the correct specification in the image below.
WS-Trust
WS-Trust is a Web Services Security (WS-Security) specification that negotiates how security tokens will be issued, renewed and validated. WS-Trust defines a framework for establishing trust relationships between different parties, and a protocol for requesting and issuing security tokens that can be used to authenticate and authorize the parties. WS-Trust also supports different types of security tokens, such as Kerberos tickets, X.509 certificates, and SAML assertions. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 346; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 465.
Which of the following PRIMARILY contributes to security incidents in web-based applications?
Systems administration and operating systems
System incompatibility and patch management
Third-party applications and change controls
Improper stress testing and application interfaces
Improper stress testing and application interfaces primarily contribute to security incidents in web-based applications. Stress testing is a type of performance testing that evaluates how a web-based application behaves under extreme or abnormal conditions, such as high traffic, heavy load, or limited resources. Stress testing can help identify the potential bottlenecks, errors, or failures that may affect the functionality, reliability, or security of the web-based application. Improper stress testing can result in undetected vulnerabilities or weaknesses that can be exploited by attackers. Application interfaces are the points of interaction between a web-based application and other systems, components, or users. Application interfaces can include web services, APIs, user interfaces, or network protocols. Application interfaces can introduce security risks if they are not designed, implemented, or secured properly. For example, application interfaces may expose sensitive data, allow unauthorized access, or enable injection attacks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, p. 491; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, p. 1009.
Which technology is a prerequisite for populating the cloud-based directory in a federated identity solution?
Notification tool
Message queuing tool
Security token tool
Synchronization tool
A synchronization tool is the technology that is a prerequisite for populating the cloud-based directory in a federated identity solution. Directory synchronization copies user identities and attributes from the authoritative on-premise directory to the cloud-based directory and keeps the two consistent, so that federated authentication can map assertions to accounts that actually exist in the cloud directory. Notification tools, message queuing tools, and security token tools support messaging or authentication functions, but they do not populate the directory.
Which of the following is an essential step before performing Structured Query Language (SQL) penetration tests on a production system?
Verify countermeasures have been deactivated.
Ensure firewall logging has been activated.
Validate target systems have been backed up.
Confirm warm site is ready to accept connections.
An essential step before performing SQL penetration tests on a production system is to validate that the target systems have been backed up. SQL penetration tests are a type of security testing that involves injecting malicious SQL commands or queries into a database or application to exploit vulnerabilities or gain unauthorized access. Performing SQL penetration tests on a production system can cause data loss, corruption, or modification, as well as system downtime or instability. Therefore, it is important to ensure that the target systems have been backed up before conducting the tests, so that the data and system can be restored in case of any damage or disruption. The other options are not essential steps, but rather optional or irrelevant steps. Verifying countermeasures have been deactivated is not an essential step, but rather a risky and unethical step, as it can expose the system to other attacks or compromise the validity of the test results. Ensuring firewall logging has been activated is not an essential step, but rather a good practice, as it can help to monitor and record the test activities and outcomes. Confirming warm site is ready to accept connections is not an essential step, but rather a contingency plan, as it can provide an alternative site for continuing the system operations in case of a major failure or disaster. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 9, p. 471; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, p. 417.
Which of the following is the BEST method to reduce the effectiveness of phishing attacks?
User awareness
Two-factor authentication
Anti-phishing software
Periodic vulnerability scan
According to the CISSP For Dummies, the best method to reduce the effectiveness of phishing attacks is user awareness. Users should be educated and trained to recognize and avoid phishing emails and websites, which are fraudulent attempts to obtain sensitive information or credentials by impersonating legitimate entities or persons. User awareness helps users identify the common signs and indicators of phishing, such as spoofed sender addresses, misleading links, spelling and grammar errors, urgent or threatening messages, and requests for personal or financial information. It also helps users follow best practices and preventive measures, such as verifying the source and content of messages, using strong and unique passwords, enabling two-factor authentication, reporting and deleting suspicious messages, and using anti-phishing software and tools. Two-factor authentication is not the best method to reduce the effectiveness of phishing attacks, as it may not prevent users from falling for phishing in the first place; some phishing attacks can bypass or compromise two-factor authentication by using man-in-the-middle techniques, intercepting the codes, or tricking users into entering the codes on fake websites. Anti-phishing software is not the best method, as it may not detect or block all phishing attempts: it relies on methods such as blacklists, whitelists, heuristics, and machine learning, which may not keep up with evolving and sophisticated phishing techniques that use encryption, obfuscation, or personalization, and it may generate false positives or negatives that confuse or mislead users. A periodic vulnerability scan is not the best method, as it does not address the human factor: phishing mainly targets users rather than technical vulnerabilities, exploiting their emotions, curiosity, or trust, so scanning alone cannot prevent or detect phishing unless it is combined with user awareness and education.
What type of encryption is used to protect sensitive data in transit over a network?
Payload encryption and transport encryption
Authentication Headers (AH)
Keyed-Hashing for Message Authentication
Point-to-Point Encryption (P2PE)
The type of encryption that is used to protect sensitive data in transit over a network is payload encryption and transport encryption. Encryption is the process of transforming or encoding the data into an unreadable or unintelligible form, using a secret key or algorithm, to protect it from unauthorized access or disclosure. Payload encryption protects the data or message itself, end to end, so that it remains unreadable even if the communication channel is compromised, while transport encryption protects the channel or session over which the data travels, such as with TLS or IPSec. Used together, they protect sensitive data in transit, that is, data being transmitted from one point to another over a network such as the internet, a local area network (LAN), or a wide area network (WAN), at both the message level and the channel level. Authentication Headers (AH) and Keyed-Hashing for Message Authentication (HMAC) provide integrity and authentication rather than confidentiality, and Point-to-Point Encryption (P2PE) is a payment-industry standard for protecting cardholder data, not a general type of network encryption.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 115; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 172
The BEST example of the concept of "something that a user has" when providing an authorized user access to a computing system is
the user's hand geometry.
a credential stored in a token.
a passphrase.
the user's face.
A credential stored in a token is the best example of "something that a user has," because the token is a physical or logical object in the user's possession that can be presented as proof of identity. The user's hand geometry and the user's face are biometric characteristics, which are examples of "something that a user is," and a passphrase is an example of "something that a user knows."
Which of the following is the BIGGEST weakness when using native Lightweight Directory Access Protocol (LDAP) for authentication?
Authorizations are not included in the server response
Unsalted hashes are passed over the network
The authentication session can be replayed
Passwords are passed in clear text
The biggest weakness when using native Lightweight Directory Access Protocol (LDAP) for authentication is that passwords are passed in clear text over the network, exposing them to eavesdropping and interception attacks. To mitigate this risk, LDAP should be used with encryption protocols, such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS), or with authentication protocols, such as Kerberos or Simple Authentication and Security Layer (SASL).
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 281; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 230
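As an illustration of the mitigation, the following is a minimal Python sketch using the third-party ldap3 library; the server name and bind DN are hypothetical. With LDAPS (LDAP over SSL/TLS), the simple-bind password still crosses the network, but inside an encrypted tunnel rather than in clear text.

# pip install ldap3
import ssl
from ldap3 import Server, Connection, Tls

# Require certificate validation so the tunnel cannot be silently spoofed.
tls = Tls(validate=ssl.CERT_REQUIRED)
server = Server("ldap.example.com", port=636, use_ssl=True, tls=tls)  # LDAPS

conn = Connection(
    server,
    user="cn=svc-app,ou=service,dc=example,dc=com",  # hypothetical bind DN
    password="REDACTED",
    auto_bind=True,  # opens the connection and binds in one step
)
print(conn.extend.standard.who_am_i())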
Which of the following would BEST describe the role directly responsible for data within an organization?
Data custodian
Information owner
Database administrator
Quality control
According to the CISSP For Dummies, the role that is directly responsible for data within an organization is the information owner. The information owner is the person or role that has the authority and accountability for the data or information that the organization owns, creates, uses, or maintains, such as data, documents, records, or intellectual property. The information owner is responsible for defining the classification, value, and sensitivity of the data or information, as well as the security requirements, policies, and standards for the data or information. The information owner is also responsible for granting or revoking the access rights and permissions to the data or information, as well as for monitoring and auditing the compliance and effectiveness of the security controls and mechanisms for the data or information. The data custodian is not the role that is directly responsible for data within an organization, although it may be a role that supports or assists the information owner. The data custodian is the person or role that has the responsibility for implementing and maintaining the security controls and mechanisms for the data or information, as defined by the information owner. The data custodian is responsible for performing the technical and operational tasks and activities for the data or information, such as backup, recovery, encryption, or disposal. The database administrator is not the role that is directly responsible for data within an organization, although it may be a role that supports or assists the information owner or the data custodian. The database administrator is the person or role that has the responsibility for managing and administering the database system that stores and processes the data or information. The database administrator is responsible for performing the technical and operational tasks and activities for the database system, such as installation, configuration, optimization, or troubleshooting.
Which Radio Frequency Interference (RFI) phenomenon associated with bundled cable runs can create information leakage?
Transference
Covert channel
Bleeding
Cross-talk
Cross-talk is a type of Radio Frequency Interference (RFI) phenomenon that occurs when signals from one cable or circuit interfere with signals from another cable or circuit. Cross-talk can create information leakage by allowing an attacker to eavesdrop on or modify the transmitted data. Cross-talk can be caused by electromagnetic induction, capacitive coupling, or common impedance coupling. Cross-talk can be reduced by using shielded cables, twisted pairs, or optical fibers.
In configuration management, what baseline configuration information MUST be maintained for each computer system?
Operating system and version, patch level, applications running, and versions.
List of system changes, test reports, and change approvals
Last vulnerability assessment report and initial risk assessment report
Date of last update, test report, and accreditation certificate
Baseline configuration information is the set of data that describes the state of a computer system at a specific point in time. It is used to monitor and control changes to the system, as well as to assess its compliance with security standards and policies. Baseline configuration information must include the operating system and version, patch level, applications running, and versions, because these are the essential components that define the functionality and security of the system. These components can also affect the compatibility and interoperability of the system with other systems and networks. Therefore, it is important to maintain accurate and up-to-date records of these components for each computer system.
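As an illustration of capturing such a baseline, the following is a minimal Python sketch using only the standard library; the application inventory passed in is a hypothetical example, since a real configuration management tool would enumerate installed software and patch levels itself.

import json
import platform
from datetime import datetime, timezone

def capture_baseline(applications):
    # Record the minimum baseline: OS and version, patch level, applications.
    baseline = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "operating_system": platform.system(),
        "os_release": platform.release(),
        "os_version": platform.version(),  # often carries build/patch detail
        "applications": applications,      # mapping of name -> version
    }
    return json.dumps(baseline, indent=2)

# Hypothetical inventory for one server:
print(capture_baseline({"nginx": "1.24.0", "openssl": "3.0.13"}))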
A vulnerability in which of the following components would be MOST difficult to detect?
Kernel
Shared libraries
Hardware
System application
According to the CISSP CBK Official Study Guide, a vulnerability in hardware would be the most difficult to detect. A vulnerability is a weakness or exposure in a system, network, or application that may be exploited by threats and cause harm to the organization or its assets, and it can exist in various components, such as the kernel, the shared libraries, the hardware, or the system application. A vulnerability in hardware is the most difficult to detect because identifying and measuring it may require physical access, specialized tools, or advanced skills. Hardware is the physical, tangible component that provides the basic functionality, performance, and support for the system, such as the processor, memory, disk, or network card; it may harbor vulnerabilities due to design flaws, manufacturing defects, configuration errors, or physical damage, which can cause data leakage, performance degradation, or system failure. A vulnerability in the kernel would not be the most difficult to detect, although it may be difficult. The kernel is the core component that provides the basic functionality and control for the system, such as the operating system, the hypervisor, or the firmware; it may have vulnerabilities due to design flaws, coding errors, configuration errors, or malicious modifications that cause privilege escalation, system compromise, or system crashes, but these can be detected with software-based tools, techniques, and methods, such as code analysis, vulnerability scanning, or penetration testing. A vulnerability in the shared libraries would not be the most difficult to detect, although it may also be difficult. Shared libraries are the reusable or common components of a system, such as dynamic link libraries, application programming interfaces, or frameworks, and their vulnerabilities can likewise be found with software-based approaches such as software composition analysis or vulnerability scanning. A vulnerability in a system application would be the least difficult to detect, as applications are the most accessible layer and can be tested directly with standard scanning and testing tools.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
Identify the operational impacts of a business interruption
Identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that determine the technological dependence of the business processes, identify the operational impacts of a business interruption, and identify the financial impacts of a business interruption.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center, is preparing a companywide Business Continuity Planning (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating.
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. Network redundancies are not implemented will affect the availability and reliability of the network, but not necessarily the confidentiality of the data. Security awareness training is not completed will increase the likelihood of human errors or negligence that could compromise the data, but not as directly as generating backup tapes unencrypted. Users have administrative privileges will grant users more access and control over the system and the data, but not as widely as generating backup tapes unencrypted.
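As an illustration, the following is a minimal Python sketch that encrypts a backup file with the third-party cryptography library's Fernet construction (an authenticated symmetric scheme); the file names are placeholders, and in practice the key must be generated once and stored separately from the backup media.

# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store this separately from the tapes
f = Fernet(key)

with open("backup.tar", "rb") as src:   # hypothetical backup file
    ciphertext = f.encrypt(src.read())  # reads fully into memory; fine for a sketch
with open("backup.tar.enc", "wb") as dst:
    dst.write(ciphertext)

# Whoever obtains only the encrypted file cannot read it without the key:
restored = f.decrypt(ciphertext)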
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
Refer to the information below to answer the question.
A large, multinational organization has decided to outsource a portion of their Information Technology (IT) organization to a third-party provider’s facility. This provider will be responsible for the design, development, testing, and support of several critical, customer-based applications used by the organization.
The third party needs to have
processes that are identical to that of the organization doing the outsourcing.
access to the original personnel that were on staff at the organization.
the ability to maintain all of the applications in languages they are familiar with.
access to the skill sets consistent with the programming languages used by the organization.
The third party needs to have access to the skill sets consistent with the programming languages used by the organization. The programming languages are the tools or the methods of creating, modifying, testing, and supporting the software applications that perform the functions or the tasks required by the organization. The programming languages can vary in their syntax, semantics, features, or paradigms, and they can require different levels of expertise or experience to use them effectively or efficiently. The third party needs to have access to the skill sets consistent with the programming languages used by the organization, as it can ensure the quality, the compatibility, and the maintainability of the software applications that the third party is responsible for. The third party does not need to have processes that are identical to that of the organization doing the outsourcing, access to the original personnel that were on staff at the organization, or the ability to maintain all of the applications in languages they are familiar with, as they are related to the methods, the resources, or the preferences of the software development, not the skill sets consistent with the programming languages used by the organization. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1000. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1016.
Refer to the information below to answer the question.
An organization has hired an information security officer to lead their security department. The officer has adequate people resources but is lacking the other necessary components to have an effective security program. There are numerous initiatives requiring security involvement.
The effectiveness of the security program can PRIMARILY be measured through
audit findings.
risk elimination.
audit requirements.
customer satisfaction.
The primary way to measure the effectiveness of the security program is through the audit findings. The audit findings are the results or the outcomes of the audit process, which is a systematic and independent examination of the security activities and initiatives, to determine whether they comply with the security policies and standards, and whether they achieve the security objectives and goals. The audit findings can help to evaluate the effectiveness of the security program, as they can identify and report the strengths and the weaknesses, the successes and the failures, and the gaps and the risks of the security program, and they can provide the recommendations and the feedback for the improvement and the enhancement of the security program. Risk elimination, audit requirements, and customer satisfaction are not the primary ways to measure the effectiveness of the security program, as they are related to the impossibility, the necessity, or the quality of the security program, not the evaluation or the assessment of the security program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 39. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 54.
A thorough review of an organization's audit logs finds that a disgruntled network administrator has intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding them to their intended recipient. What type of attack has MOST likely occurred?
Spoofing
Eavesdropping
Man-in-the-middle
Denial of service
The type of attack that has most likely occurred when a disgruntled network administrator has intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding them to their intended recipient is a man-in-the-middle (MITM) attack. A MITM attack is a type of attack that involves an attacker intercepting, modifying, or redirecting the communication between two parties, without their knowledge or consent. The attacker can alter, delete, or inject data, or impersonate one of the parties, to achieve malicious goals, such as stealing information, compromising security, or disrupting service. A MITM attack can be performed on various types of networks or protocols, such as email, web, or wireless. Spoofing, eavesdropping, and denial of service are not the types of attack that have most likely occurred in this scenario, as they do not involve the modification or manipulation of the communication between the parties, but rather the falsification, observation, or prevention of the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 462. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 478.
Refer to the information below to answer the question.
A large, multinational organization has decided to outsource a portion of their Information Technology (IT) organization to a third-party provider’s facility. This provider will be responsible for the design, development, testing, and support of several critical, customer-based applications used by the organization.
What additional considerations are there if the third party is located in a different country?
The organizational structure of the third party and how it may impact timelines within the organization
The ability of the third party to respond to the organization in a timely manner and with accurate information
The effects of transborder data flows and customer expectations regarding the storage or processing of their data
The quantity of data that must be provided to the third party and how it is to be used
The additional considerations if the third party is located in a different country are the effects of transborder data flows and customer expectations regarding the storage or processing of their data. Transborder data flows are movements or transfers of data across national or regional borders, for example over the internet, through the cloud, or as part of outsourcing. They can affect the security, privacy, compliance, and sovereignty of the data, depending on the laws, regulations, standards, and cultures of the countries or regions involved. Customer expectations are the beliefs or assumptions of the customers about the quality, performance, and trustworthiness of the products or services they use, and they can influence the reputation, loyalty, and profitability of the organization; customers may expect, or be legally entitled to expect, that their data is stored and processed only in certain jurisdictions. The organization should therefore consider the legal, contractual, ethical, and cultural implications of the transborder data flows and of customer expectations, and should communicate, negotiate, and align with the third party and the customers accordingly. The organizational structure of the third party, its ability to respond in a timely manner with accurate information, and the quantity of data to be provided and how it is to be used are considerations for any outsourcing arrangement, not additional considerations specific to a third party located in a different country. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 59. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 74.
What is a common challenge when implementing Security Assertion Markup Language (SAML) for identity integration between on-premise environment and an external identity provider service?
Some users are not provisioned into the service.
SAML tokens are provided by the on-premise identity provider.
Single users cannot be revoked from the service.
SAML tokens contain user information.
A common challenge when implementing SAML for identity integration between on-premise environment and an external identity provider service is that some users are not provisioned into the service. Provisioning is a process of creating, updating, or deleting the user accounts or profiles in a service or an application, based on the user identity or credentials. When implementing SAML for identity integration, the on-premise environment acts as the identity provider, which authenticates the user and issues the SAML assertion, and the external service acts as the service provider, which receives the SAML assertion and grants access to the user. However, if the user account or profile is not provisioned or synchronized in the external service, the user may not be able to access the service, even if they have a valid SAML assertion. Therefore, a common challenge when implementing SAML for identity integration is to ensure that the user provisioning is consistent and accurate between the on-premise environment and the external service. SAML tokens are provided by the on-premise identity provider, single users can be revoked from the service, and SAML tokens contain user information are not common challenges when implementing SAML for identity integration, as they are related to the functionality, granularity, or content of the SAML protocol, not the provisioning of the user accounts or profiles. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 693. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 709.
Which of the following violates identity and access management best practices?
User accounts
System accounts
Generic accounts
Privileged accounts
The type of accounts that violates identity and access management best practices is generic accounts. Generic accounts are accounts that are shared by multiple users or devices, and do not have a specific or unique identity associated with them. Generic accounts are often used for convenience, compatibility, or legacy reasons, but they pose a serious security risk, as they can compromise the accountability, traceability, and auditability of the actions and activities performed by the users or devices. Generic accounts can also enable unauthorized or malicious access, as they may have weak or default passwords, or may not have proper access control or monitoring mechanisms. User accounts, system accounts, and privileged accounts are not the types of accounts that violate identity and access management best practices, as they are accounts that have a specific or unique identity associated with them, and can be subject to proper authentication, authorization, and auditing measures. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 660. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 676.
When using third-party software developers, which of the following is the MOST effective method of providing software development Quality Assurance (QA)?
Retain intellectual property rights through contractual wording.
Perform overlapping code reviews by both parties.
Verify that the contractors attend development planning meetings.
Create a separate contractor development environment.
When using third-party software developers, the most effective method of providing software development Quality Assurance (QA) is to perform overlapping code reviews by both parties. Code reviews are the process of examining the source code of an application for quality, functionality, security, and compliance. Overlapping code reviews by both parties means that the code is reviewed by both the third-party developers and the contracting organization, and that the reviews cover the same or similar aspects of the code. This can ensure that the code meets the requirements and specifications, that the code is free of defects or vulnerabilities, and that the code is consistent and compatible with the existing system or environment. Retaining intellectual property rights through contractual wording, verifying that the contractors attend development planning meetings, and creating a separate contractor development environment are all possible methods of providing software development QA, but they are not the most effective method of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1026. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1050.
If an attacker in a SYN flood attack uses someone else's valid host address as the source address, the system under attack will send a large number of Synchronize/Acknowledge (SYN/ACK) packets to the
default gateway.
attacker's address.
local interface being attacked.
specified source address.
A SYN flood attack is a type of denial-of-service attack that exploits the three-way handshake mechanism of the Transmission Control Protocol (TCP). The attacker sends a large number of TCP packets with the SYN flag set, indicating a request to establish a connection, to the target system, using a spoofed source address. The target system responds with a TCP packet with the SYN and ACK flags set, indicating an acknowledgment of the request, and waits for a final TCP packet with the ACK flag set, indicating the completion of the handshake, from the source address. However, since the source address is fake, the final ACK packet never arrives, and the target system keeps the connection half-open, consuming its resources and preventing legitimate connections. Therefore, the system under attack will send a large number of SYN/ACK packets to the specified source address, which is the spoofed address used by the attacker. The default gateway, the attacker’s address, and the local interface being attacked are not the destinations of the SYN/ACK packets in a SYN flood attack. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 460. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 476.
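The resource exhaustion can be illustrated with a small Python sketch that models the server's half-open connection queue; the backlog size and the spoofed addresses are illustrative, not a real TCP implementation.

from collections import deque

BACKLOG = 5          # capacity of the half-open (SYN_RCVD) queue
half_open = deque()  # connections waiting for the final ACK of the handshake

def on_syn(claimed_src):
    # The server replies SYN/ACK to whatever source address the SYN claimed.
    if len(half_open) >= BACKLOG:
        return None                      # queue full: new SYNs are dropped
    half_open.append(claimed_src)
    return "SYN/ACK -> " + claimed_src   # goes to the spoofed host, not the attacker

# The attacker floods with spoofed sources; the final ACK never arrives,
# so the queue fills and legitimate clients are refused.
for i in range(10):
    print(on_syn("203.0.113." + str(i)) or "dropped: backlog exhausted")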
Which of the following is the PRIMARY benefit of a formalized information classification program?
It drives audit processes.
It supports risk assessment.
It reduces asset vulnerabilities.
It minimizes system logging requirements.
A formalized information classification program is a set of policies and procedures that define the categories, criteria, and responsibilities for classifying information assets according to their value, sensitivity, and criticality. The primary benefit of such a program is that it supports risk assessment, which is the process of identifying, analyzing, and evaluating the risks to the information assets and the organization. By classifying information assets, the organization can prioritize the protection of the most important and vulnerable assets, determine the appropriate security controls and measures, and allocate the necessary resources and budget. It drives audit processes, it reduces asset vulnerabilities, and it minimizes system logging requirements are all possible benefits of a formalized information classification program, but they are not the primary benefit of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 39. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 52.
Multi-Factor Authentication (MFA) is necessary in many systems given common types of password attacks. Which of the following is a correct list of password attacks?
Masquerading, salami, malware, polymorphism
Brute force, dictionary, phishing, keylogger
Zeus, netbus, rabbit, turtle
Token, biometrics, IDS, DLP
The correct list of password attacks is brute force, dictionary, phishing, and keylogger. Password attacks are attacks that aim to guess, crack, or steal the passwords or credentials of users or systems in order to gain unauthorized access to information or resources. Password attacks can include the following methods:
- Brute force: tries all possible combinations of characters or symbols until the correct password is found.
- Dictionary: uses a list of common or likely words or phrases as the input for guessing the password.
- Phishing: uses fraudulent emails or websites that impersonate legitimate entities and trick users into revealing their passwords or credentials.
- Keylogger: uses a software or hardware device that records the user's keystrokes and captures or transmits their passwords or credentials.
Masquerading, salami, malware, and polymorphism are not password attacks; they involve the impersonation, manipulation, infection, or mutation of data or systems rather than the guessing, cracking, or stealing of credentials. Zeus, netbus, rabbit, and turtle are names of specific types of malware, such as trojans, worms, or viruses, not methods of attacking passwords. Token, biometrics, IDS, and DLP are types of security controls or technologies (authentication, identification, detection, and prevention), not attacks on passwords. A sketch contrasting the first two methods follows this explanation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 684. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 700.
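As an illustration of the difference between dictionary and brute-force guessing, the following minimal Python sketch tests a tiny hypothetical wordlist against an unsalted SHA-256 hash; real attacks use far larger lists, and salted, deliberately slow password hashes plus MFA are the standard defenses.

import hashlib

def sha256(password):
    return hashlib.sha256(password.encode()).hexdigest()

# An unsalted hash the attacker has recovered (here, of a weak password).
stolen_hash = sha256("sunshine")

wordlist = ["password", "letmein", "sunshine", "dragon"]  # tiny stand-in list

for candidate in wordlist:
    if sha256(candidate) == stolen_hash:
        print("dictionary hit:", candidate)
        break
else:
    print("no match; brute force would now try every combination")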
When dealing with compliance with the Payment Card Industry-Data Security Standard (PCI-DSS), an organization that shares card holder information with a service provider MUST do which of the following?
Perform a service provider PCI-DSS assessment on a yearly basis.
Validate the service provider's PCI-DSS compliance status on a regular basis.
Validate that the service providers security policies are in alignment with those of the organization.
Ensure that the service provider updates and tests its Disaster Recovery Plan (DRP) on a yearly basis.
The action that an organization that shares card holder information with a service provider must do when dealing with compliance with the Payment Card Industry-Data Security Standard (PCI-DSS) is to validate the service provider’s PCI-DSS compliance status on a regular basis. PCI-DSS is a set of security standards that applies to any organization that stores, processes, or transmits card holder data, such as credit or debit card information. PCI-DSS aims to protect the card holder data from unauthorized access, use, disclosure, or theft, and to ensure the security and integrity of the payment transactions. If an organization shares card holder data with a service provider, such as a payment processor, a hosting provider, or a cloud provider, the organization is still responsible for the security and compliance of the card holder data, and must ensure that the service provider also meets the PCI-DSS requirements. The organization must validate the service provider’s PCI-DSS compliance status on a regular basis, by obtaining and reviewing the service provider’s PCI-DSS assessment reports, such as the Self-Assessment Questionnaire (SAQ), the Report on Compliance (ROC), or the Attestation of Compliance (AOC). Performing a service provider PCI-DSS assessment on a yearly basis, validating that the service provider’s security policies are in alignment with those of the organization, and ensuring that the service provider updates and tests its Disaster Recovery Plan (DRP) on a yearly basis are not the actions that an organization that shares card holder information with a service provider must do when dealing with compliance with PCI-DSS, as they are not sufficient or relevant to verify the service provider’s PCI-DSS compliance status or to protect the card holder data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 49. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 64.
What is the MOST effective method for gaining unauthorized access to a file protected with a long complex password?
Brute force attack
Frequency analysis
Social engineering
Dictionary attack
The most effective method for gaining unauthorized access to a file protected with a long complex password is social engineering. Social engineering is a type of attack that exploits the human factor or the psychological weaknesses of the target, such as trust, curiosity, greed, or fear, to manipulate them into revealing sensitive information, such as passwords, or performing malicious actions, such as opening malicious attachments or clicking malicious links. Social engineering can bypass the technical security controls, such as encryption or authentication, and can be more efficient and successful than other methods that rely on brute force or guesswork. Brute force attack, frequency analysis, and dictionary attack are not the most effective methods for gaining unauthorized access to a file protected with a long complex password, as they require a lot of time, resources, and computing power, and they can be thwarted by the use of strong passwords, password policies, or password managers. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, Security Assessment and Testing, page 813. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, Security Assessment and Testing, page 829.
Which of the following is the MOST beneficial to review when performing an IT audit?
Audit policy
Security log
Security policies
Configuration settings
The most beneficial item to review when performing an IT audit is the security log. The security log is a record of the events and activities that occur on a system or network, such as logins, logouts, file accesses, policy changes, or security incidents. The security log can provide valuable information for the auditor to assess the security posture, performance, and compliance of the system or network, and to identify any anomalies, vulnerabilities, or breaches that need to be addressed. The other options are not as beneficial as the security log, as they either do not provide enough information for the audit (A and C), or do not reflect the actual state of the system or network (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 405; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 465.
An organization decides to implement a partial Public Key Infrastructure (PKI) with only the servers having digital certificates. What is the security benefit of this implementation?
Clients can authenticate themselves to the servers.
Mutual authentication is available between the clients and servers.
Servers are able to issue digital certificates to the client.
Servers can authenticate themselves to the client.
A Public Key Infrastructure (PKI) is a system that provides the services and mechanisms for creating, managing, distributing, using, storing, and revoking digital certificates, which are electronic documents that bind a public key to an identity. A digital certificate can be used to authenticate the identity of an entity, such as a person, a device, or a server, that possesses the corresponding private key. An organization can implement a partial PKI with only the servers having digital certificates, which means that only the servers can prove their identity to the clients, but not vice versa. The security benefit of this implementation is that servers can authenticate themselves to the client, which can prevent impersonation, spoofing, or man-in-the-middle attacks by malicious servers. Clients can authenticate themselves to the servers, mutual authentication is available between the clients and servers, and servers are able to issue digital certificates to the client are not the security benefits of this implementation, as they require the clients to have digital certificates as well. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 615. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 631.
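To make the one-way trust concrete, here is a minimal Python sketch of a client authenticating a server during a TLS handshake, using only the standard library’s ssl module; the host name example.com is a placeholder. The client presents no certificate of its own, mirroring a partial PKI in which only the servers hold certificates.

    import socket
    import ssl

    # The default context loads trusted CA roots and enables certificate and
    # host-name verification for the server side only.
    context = ssl.create_default_context()

    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            # The handshake succeeds only if the server proves possession of the
            # private key matching a certificate that chains to a trusted CA;
            # the client itself remains unauthenticated at this layer.
            print(tls.getpeercert()["subject"])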
Refer to the information below to answer the question.
In a Multilevel Security (MLS) system, the following sensitivity labels are used in increasing levels of sensitivity: restricted, confidential, secret, top secret. Table A lists the clearance levels for four users, while Table B lists the security classes of four different files.
In a Bell-LaPadula system, which user has the MOST restrictions when writing data to any of the four files?
User A
User B
User C
User D
In a Bell-LaPadula system, a user has the most restrictions when writing data to any of the four files if they have the lowest clearance level. This is because of the star property (*-property) of the Bell-LaPadula model, which states that a subject with a given security clearance may write data to an object if and only if the object’s security level is greater than or equal to the subject’s security level. This rule is also known as the no write-down rule, as it prevents the leakage of information from a higher level to a lower level. In this question, User A has a Restricted clearance, which is the lowest level among the four users. Therefore, User A has the most restrictions when writing data to any of the four files, as they can only write data to File 1, which has the same security level as their clearance. User B, User C, and User D have fewer restrictions when writing data to any of the four files, as they can write data to more than one file, depending on their clearance levels and the security classes of the files. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 498. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 514.
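The write rule itself fits in a few lines of Python; the sketch below is illustrative only, using the four sensitivity labels from the question and the *-property exactly as stated above.

    # Bell-LaPadula *-property (no write-down): a subject may write to an
    # object only if the object's classification dominates the subject's
    # clearance. Labels follow the question's MLS hierarchy.
    LEVELS = {"restricted": 0, "confidential": 1, "secret": 2, "top secret": 3}

    def can_write(subject_clearance: str, object_class: str) -> bool:
        """Writing up is permitted; writing down is denied."""
        return LEVELS[object_class] >= LEVELS[subject_clearance]

    assert can_write("restricted", "top secret")    # write up: allowed
    assert not can_write("secret", "confidential")  # write down: denied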
Which of the following methods provides the MOST protection for user credentials?
Forms-based authentication
Digest authentication
Basic authentication
Self-registration
The method that provides the most protection for user credentials is digest authentication. Digest authentication is a type of authentication that verifies the identity of a user or a device by using a cryptographic hash function to transform the user credentials, such as username and password, into a digest or a hash value, before sending them over a network, such as the internet. Digest authentication can provide more protection for user credentials than basic authentication, which sends the user credentials in plain text, or forms-based authentication, which relies on the security of the web server or the web application. Digest authentication can prevent the interception, disclosure, or modification of the user credentials by third parties, and can also prevent replay attacks by using a nonce or a random value. Self-registration is not a method of authentication, but a process of creating a user account or a profile by providing some personal information, such as name, email, or phone number. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
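As an illustration, the Python sketch below derives an HTTP Digest response value using the basic RFC 2617 MD5 construction (without the qop extension); the username, realm, password, and nonce are hypothetical. Note that only the final hash crosses the network, never the password itself.

    import hashlib

    def digest_response(user, realm, password, method, uri, nonce):
        md5 = lambda s: hashlib.md5(s.encode()).hexdigest()
        ha1 = md5(f"{user}:{realm}:{password}")  # the secret half
        ha2 = md5(f"{method}:{uri}")             # the request half
        # The server-supplied nonce binds the two halves together, which is
        # what frustrates replay attacks.
        return md5(f"{ha1}:{nonce}:{ha2}")

    print(digest_response("alice", "example.com", "s3cret", "GET", "/", "dcd98b7102dd"))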
Refer to the information below to answer the question.
During the investigation of a security incident, it is determined that an unauthorized individual accessed a system which hosts a database containing financial information.
If it is discovered that large quantities of information have been copied by the unauthorized individual, what attribute of the data has been compromised?
Availability
Integrity
Accountability
Confidentiality
The attribute of the data that has been compromised, if large quantities of information have been copied by the unauthorized individual, is confidentiality. Confidentiality is the property of the data that ensures the data is accessible or disclosed only to authorized individuals or entities, and that the data is protected from unauthorized or malicious access or disclosure. The confidentiality of the data can be compromised when the data is copied, stolen, leaked, or exposed by an unauthorized individual or a malicious actor, such as the one who accessed the system hosting the database. The compromise of confidentiality can violate the privacy, rights, or interests of the data owners, subjects, or users, and can cause damage or harm to the organization’s operations, reputation, or objectives. Availability, integrity, and accountability are not the attributes that have been compromised in this case, as they relate to the accessibility, accuracy, or responsibility of the data, not its secrecy or protection. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 263. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 279.
A large bank deploys hardware tokens to all customers who use their online banking system. The token generates and displays a six-digit numeric password every 60 seconds. The customers must log into their bank accounts using this numeric password. This is an example of
asynchronous token.
Single Sign-On (SSO) token.
single factor authentication token.
synchronous token.
A synchronous token is a hardware device that generates and displays a one-time password (OTP) that changes at fixed intervals, usually based on a clock or a counter. The OTP is synchronized with the authentication server, and the user must enter the OTP within a certain time window to log in. This is an example of single factor authentication based on something the user has (the token). The other options are not correct, as they either do not match the description of the token (A and B), or do not specify the type of token (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 331; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 387.
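A synchronous token’s behavior can be sketched with the standard TOTP construction (RFC 6238); the shared secret below is a placeholder, and the 60-second step matches the question. Token and server compute the same code independently because both derive it from the current time step.

    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, step: int = 60, digits: int = 6) -> str:
        counter = int(time.time()) // step  # the shared time step
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F             # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp(b"shared-secret"))  # the same six digits on token and server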
A large university needs to enable student access to university resources from their homes. Which of the following provides the BEST option for low maintenance and ease of deployment?
Provide students with Internet Protocol Security (IPSec) Virtual Private Network (VPN) client software.
Use Secure Sockets Layer (SSL) VPN technology.
Use Secure Shell (SSH) with public/private keys.
Require students to purchase a home router capable of VPN.
The best option for low maintenance and ease of deployment to enable student access to university resources from their homes is to use Secure Sockets Layer (SSL) VPN technology. SSL VPN is a type of virtual private network that uses the SSL protocol to provide secure and remote access to the network resources over the internet. SSL VPN does not require the installation or configuration of any special client software or hardware on the student’s device, as it can use the web browser as the client interface. SSL VPN can also support various types of devices, operating systems, and applications, and can provide granular access control and encryption for the network traffic. Providing students with Internet Protocol Security (IPSec) VPN client software, using Secure Shell (SSH) with public/private keys, and requiring students to purchase home router capable of VPN are not the best options for low maintenance and ease of deployment, as they involve more complexity, cost, and compatibility issues for the students and the university. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 507. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 523.
With data labeling, which of the following MUST be the key decision maker?
Information security
Departmental management
Data custodian
Data owner
With data labeling, the data owner must be the key decision maker. The data owner is the person or entity that has the authority and responsibility for the data, including its classification, protection, and usage. The data owner must decide how to label the data according to its sensitivity, criticality, and value, and communicate the labeling scheme to the data custodians and users. The data owner must also review and update the data labels as needed. The other options are not the key decision makers for data labeling, as they either do not have the authority or responsibility for the data (A, B, and C), or do not have the knowledge or interest in the data (B and C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 63; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 69.
Which of the following is the MOST crucial for a successful audit plan?
Defining the scope of the audit to be performed
Identifying the security controls to be implemented
Working with the system owner on new controls
Acquiring evidence of systems that are not compliant
An audit is an independent and objective examination of an organization’s activities, systems, processes, or controls to evaluate their adequacy, effectiveness, efficiency, and compliance with applicable standards, policies, laws, or regulations. An audit plan is a document that outlines the objectives, scope, methodology, criteria, schedule, and resources of an audit. The most crucial element of a successful audit plan is defining the scope of the audit to be performed, which is the extent and boundaries of the audit, such as the subject matter, the time period, the locations, the departments, the functions, the systems, or the processes to be audited. The scope of the audit determines what will be included or excluded from the audit, and it helps to ensure that the audit objectives are met and the audit resources are used efficiently and effectively. Identifying the security controls to be implemented, working with the system owner on new controls, and acquiring evidence of systems that are not compliant are all important tasks in an audit, but they are not the most crucial for a successful audit plan, as they depend on the scope of the audit to be defined first. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 54. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 69.
Secure startup mechanisms are PRIMARILY designed to thwart which of the following attacks?
Timing
Cold boot
Side channel
Acoustic cryptanalysis
Side channel attacks are a type of attack that exploit the physical characteristics of a system, such as power consumption, electromagnetic radiation, timing, sound, or temperature, to extract sensitive information. Secure startup mechanisms, such as secure boot or trusted boot, are primarily designed to thwart these types of attacks by verifying the integrity and authenticity of the system components before loading them into memory. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Architecture and Design, p. 201; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Architecture and Engineering, p. 331.
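The integrity-verification idea behind secure startup can be illustrated with a short Python sketch; the stage names, images, and trusted digests below are hypothetical stand-ins for what firmware would actually measure against values anchored in hardware such as a TPM.

    import hashlib

    # Allow-list of known-good component digests (hypothetical values).
    TRUSTED = {
        "bootloader": hashlib.sha256(b"bootloader-v1").hexdigest(),
        "kernel": hashlib.sha256(b"kernel-v1").hexdigest(),
    }

    def verify_then_load(stage: str, image: bytes) -> None:
        # Refuse to hand off control to any component whose hash deviates
        # from the trusted baseline.
        if hashlib.sha256(image).hexdigest() != TRUSTED[stage]:
            raise SystemExit(f"{stage}: integrity check failed, halting boot")
        print(f"{stage}: verified, handing off")

    verify_then_load("bootloader", b"bootloader-v1")
    verify_then_load("kernel", b"kernel-v1")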
Which of the following describes the concept of a Single Sign-On (SSO) system?
Users are authenticated to one system at a time.
Users are identified to multiple systems with several credentials.
Users are authenticated to multiple systems with one login.
Only one user is using the system at a time.
Single Sign-On (SSO) is a technology that allows users to securely access multiple applications and services using just one set of credentials, such as a username and a password.
With SSO, users do not have to remember and enter multiple passwords for different applications and services, which can improve their convenience and productivity. SSO also enhances security, as users can use stronger passwords, avoid reusing passwords, and comply with password policies more easily. Moreover, SSO reduces the risk of phishing, credential theft, and password fatigue.
SSO is based on the concept of federated identity, which means that the identity of a user is shared and trusted across different systems that have established a trust relationship. SSO uses various protocols and standards, such as SAML, OAuth, OIDC, and Kerberos, to enable the exchange of identity information and authentication tokens between the systems.
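The sketch below illustrates the token exchange at the heart of SSO in Python, with an HMAC standing in for the signature schemes used by real protocols such as SAML or OIDC; the signing key and user name are hypothetical. The user authenticates once to the identity provider, and every federated service can then validate the resulting token instead of asking for credentials again.

    import base64
    import hashlib
    import hmac
    import json

    IDP_KEY = b"idp-signing-key"  # trust anchor shared with relying services

    def issue_token(user: str) -> bytes:
        # The identity provider signs an assertion once, at login time.
        body = base64.urlsafe_b64encode(json.dumps({"sub": user}).encode())
        sig = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest().encode()
        return body + b"." + sig

    def validate_token(token: bytes):
        # Any relying service checks the signature instead of re-authenticating.
        body, sig = token.rsplit(b".", 1)
        expected = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(sig, expected):
            return None  # forged or tampered token
        return json.loads(base64.urlsafe_b64decode(body))

    token = issue_token("alice")  # one login at the identity provider
    print(validate_token(token))  # {'sub': 'alice'} at any federated service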
Without proper signal protection, embedded systems may be prone to which type of attack?
Brute force
Tampering
Information disclosure
Denial of Service (DoS)
The type of attack that embedded systems may be prone to without proper signal protection is information disclosure. Information disclosure is a type of attack that exposes or reveals sensitive or confidential information to unauthorized parties, such as attackers, competitors, or the public. Information disclosure can occur through various means, such as interception, leakage, or theft of the information. Embedded systems are systems that are integrated into other devices or machines, such as cars, medical devices, or industrial controllers, and perform specific functions or tasks. Embedded systems may communicate with other systems or devices through signals, such as radio frequency, infrared, or sound waves. Without proper signal protection, such as encryption, authentication, or shielding, embedded systems may be vulnerable to information disclosure, as the signals may be captured, analyzed, or modified by attackers, and the information contained in the signals may be compromised. Brute force, tampering, and denial of service are not the types of attack that embedded systems may be prone to without proper signal protection, as they are related to the guessing, alteration, or prevention of the access or functionality of the systems, not the exposure or revelation of the information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 311. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 327.
According to best practice, which of the following groups is the MOST effective in performing an information security compliance audit?
In-house security administrators
In-house Network Team
Disaster Recovery (DR) Team
External consultants
According to best practice, the most effective group in performing an information security compliance audit is external consultants. External consultants are independent and objective third parties that can provide unbiased and impartial assessment of the organization’s compliance with the security policies, standards, and regulations. External consultants can also bring expertise, experience, and best practices from other organizations and industries, and offer recommendations for improvement. The other options are not as effective as external consultants, as they either have a conflict of interest or a lack of independence (A and B), or do not have the primary role or responsibility of conducting compliance audits (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 240; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 302.
When implementing a secure wireless network, which of the following supports authentication and authorization for individual client endpoints?
Temporal Key Integrity Protocol (TKIP)
Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK)
Wi-Fi Protected Access 2 (WPA2) Enterprise
Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)
When implementing a secure wireless network, the option that supports authentication and authorization for individual client endpoints is Wi-Fi Protected Access 2 (WPA2) Enterprise. WPA2 is a security protocol that provides encryption and authentication for wireless networks, based on the IEEE 802.11i standard. WPA2 has two modes: Personal and Enterprise. WPA2 Personal uses a Pre-Shared Key (PSK) that is shared among all the devices on the network, and does not require a separate authentication server. WPA2 Enterprise uses an Extensible Authentication Protocol (EAP) that authenticates each device individually, using a username and password or a certificate, and requires a Remote Authentication Dial-In User Service (RADIUS) server or another authentication server. WPA2 Enterprise provides more security and granularity than WPA2 Personal, as it can support different levels of access and permissions for different users or groups, and can prevent unauthorized or compromised devices from joining the network. Temporal Key Integrity Protocol (TKIP), Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK), and Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) are not the options that support authentication and authorization for individual client endpoints, as they are related to the encryption or integrity of the wireless data, not the identity or access of the wireless devices. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 506. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 522.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as a consistent and secure reference point for configuring systems, a basis for detecting unauthorized or accidental changes, and objective criteria for verifying compliance.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
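Returning to the correct answer, a trivial Python sketch shows what a baseline comparison might look like; the setting names and values are hypothetical, and a real audit would draw the baseline from the organization’s hardening standard and the current values from the live system.

    # Hypothetical baseline versus settings collected from a live system.
    baseline = {"password_min_length": 14, "firewall_enabled": True, "telnet": "disabled"}
    current = {"password_min_length": 8, "firewall_enabled": True, "telnet": "enabled"}

    for setting, expected in baseline.items():
        actual = current.get(setting)
        if actual != expected:
            print(f"DEVIATION {setting}: expected {expected!r}, found {actual!r}")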
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as better utilization of hardware resources, isolation between workloads, and easier provisioning, migration, and recovery of systems.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
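As a simple illustration of the audit step itself, the Python sketch below filters a guest OS audit log for one user’s accesses to data files; the "timestamp user action path" record format and the sample entries are hypothetical.

    def file_accesses(log_lines, user, prefix="/data/"):
        # Yield only this user's accesses to files under the data directory.
        for line in log_lines:
            timestamp, who, action, path = line.split(maxsplit=3)
            if who == user and path.startswith(prefix):
                yield timestamp, action, path

    log = [
        "2024-11-01T10:02:11 alice READ /data/finance.db",
        "2024-11-01T10:05:42 bob WRITE /tmp/scratch.tmp",
    ]
    for timestamp, action, path in file_accesses(log, "alice"):
        print(timestamp, action, path)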
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Security process data can provide several benefits, such as demonstrating the performance and effectiveness of the security controls, supporting compliance reporting, and highlighting areas that need adjustment or improvement.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as identifying vulnerabilities before attackers can exploit them, verifying that security controls work as intended, and demonstrating compliance with applicable standards and requirements.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report format and structure can have several components, such as an executive summary, an introduction, the methodology, the results, the conclusion, and supporting appendices.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, the bytecode verifier, the security manager, the access controller, and the security policy that together sandbox untrusted code.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as security categorization, security planning, security assessment, security authorization, configuration management and control, and security monitoring.
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to the system components and resources, using various techniques and tools, such as version control, change control, or configuration audits. Configuration management and control can provide several benefits, such as preserving the integrity and traceability of the system baseline, preventing unauthorized or undocumented changes, and supporting impact analysis, audits, and troubleshooting.
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause the OS to malfunction or behave unexpectedly, and they can be exploited by attackers to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing the patch level of the environment prevents such exploitation because it can identify missing patches and known OS vulnerabilities before attackers can exploit them, and verify that the environment meets the vendor’s and the organization’s patching requirements.
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
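Returning to the correct answer, a minimal Python sketch of a patch-level check follows; the package names and version numbers are hypothetical placeholders for whatever the organization’s patching standard requires.

    # Hypothetical minimum patch levels versus what the environment reports.
    required = {"openssl": (3, 0, 13), "kernel": (5, 15, 0)}
    installed = {"openssl": (3, 0, 8), "kernel": (5, 15, 2)}

    for package, minimum in required.items():
        version = installed.get(package, (0, 0, 0))
        if version < minimum:  # tuple comparison is element-wise
            print(f"UNPATCHED: {package} {version} is below required {minimum}")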
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as detecting malware or defects before deployment, containing any infection within the isolated environment, and validating the software’s functionality and security against the organization’s requirements.
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
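Although hash checking alone is not sufficient, it remains a useful layer alongside segregated testing; the Python sketch below verifies a downloaded update against a published SHA-256 digest before installation, with the file name and expected digest as hypothetical placeholders.

    import hashlib

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    EXPECTED = "vendor-published-sha256-digest-goes-here"  # placeholder

    if sha256_of("update.bin") != EXPECTED:
        raise SystemExit("Digest mismatch: refusing to promote the update")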