Q1 · CLAUSE 4 · CONTEXT
According to ISO/IEC 42001:2023, when establishing the scope of the AI Management System, the organization MUST consider:
- A. Only the AI systems developed in-house, excluding third-party AI tools
- B. Internal and external issues, the needs of interested parties, and the boundaries and applicability of the AIMS
- C. The financial impact of AI on annual revenue
- D. Only AI systems classified as high-risk under the EU AI Act
Why B is correct: Clause 4.3 explicitly requires consideration of internal/external issues (4.1), the needs of interested parties (4.2), and the boundaries and applicability of the AIMS when defining its scope. It neither excludes third-party AI nor restricts the scope to specific risk classifications.
Q2 · CLAUSE 6 · PLANNING
An AI Impact Assessment under ISO 42001 should primarily evaluate impacts on:
- A. The organization’s IT infrastructure only
- B. Shareholder financial returns
- C. Individuals, groups of individuals, and societies affected by the AI system
- D. Competitors and market positioning
Why C is correct: Clause 6.1.4 (AI system impact assessment) focuses on individuals, groups, and society. This distinguishes ISO 42001 from typical IT-only risk frameworks — it explicitly requires consideration of broader stakeholder impact.
Q3 · ANNEX A · CONTROLS
Annex A control A.6 (AI system lifecycle) requires the organization to define processes for:
- A. Objectives, design, development, validation, deployment, operation, monitoring, and retirement
- B. Only design and deployment phases
- C. Hardware procurement only
- D. Marketing and customer feedback
Why A is correct: Control A.6 covers the full AI system lifecycle, end-to-end. Auditors should look for evidence at every stage, not just initial development or deployment.
Q4 · SCENARIO · AUDIT
During a Stage 2 audit, you discover that the auditee’s AI fraud detection model has not been re-validated since deployment 18 months ago, despite documented evidence of data drift. The AI policy states models must be re-validated annually. This is BEST classified as:
- A. Observation
- B. Opportunity for improvement
- C. Major nonconformity
- D. Minor nonconformity
Why C is correct: A documented control (annual re-validation) is not being performed, AND there is evidence of degraded performance (data drift) that could affect AI system effectiveness. Per ISO 19011 and ISO/IEC 17021-1, a systemic failure with the potential to cause harm is typically classified as a major nonconformity.
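For context on what "documented evidence of data drift" can look like in practice, here is a minimal sketch of one common drift metric, the Population Stability Index (PSI). The bin counts and the 0.2 alert threshold are illustrative rules of thumb, not requirements of ISO 42001.

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned frequency lists.

    A common heuristic: PSI < 0.1 = stable, 0.1-0.2 = moderate shift,
    > 0.2 = significant shift warranting re-validation.
    """
    eps = 1e-6  # floor to avoid log(0) and division by zero
    e_total, a_total = sum(expected), sum(actual)
    total = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)
        a_pct = max(a / a_total, eps)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

baseline = [100, 200, 400, 200, 100]  # feature bin counts at deployment
current  = [300, 250, 250, 120, 80]   # bin counts 18 months later
print(round(psi(baseline, current), 3))  # → 0.347, well above 0.2
```

An auditee producing a monitoring log of such a metric, together with the missed re-validation, would give the auditor objective evidence for the finding.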
Q5 · SCENARIO · LIFECYCLE
An organization deploys an AI hiring tool. Six months later, audit reveals the tool was never assessed against control A.5 (AI system impact assessment) before deployment. The MOST appropriate auditor finding is:
- A. No finding — the tool is operational
- B. Observation — recommend future assessment
- C. Major nonconformity — A.5 is a fundamental control and was not implemented before deployment
- D. Minor nonconformity — documentation issue
Why C is correct: Skipping the AI impact assessment for a hiring tool (a high-impact use affecting individuals) represents systemic failure of a fundamental control. This is exactly the kind of gap ISO 42001 was designed to catch, and the auditor should raise it as a major nonconformity.
Q6 · CLAUSE 9 · EVALUATION
Internal audits under ISO 42001 must be conducted:
- A. Only when external regulators request
- B. At planned intervals to determine whether the AIMS conforms to requirements and is effectively implemented
- C. Only after a major incident
- D. Once before initial certification, then never again
Why B is correct: Clause 9.2 requires planned internal audits. The standard is explicit: “at planned intervals” — not reactive, not one-time. Auditors verify both conformity and effective implementation.
Q7 · SCENARIO · DATA
An auditee uses publicly scraped web data to train an AI model. The auditee cannot demonstrate documented data lineage or licensing review. Under control A.7 (Data for AI systems), this is:
- A. Acceptable — public data has no licensing requirements
- B. Nonconformity — A.7 requires documented data sources, quality, and lineage; absence of licensing review is a control gap
- C. Observation only
- D. Outside ISO 42001 scope
Why B is correct: Public availability does not equal license to use for AI training (see ongoing regulatory cases). A.7 requires documented data sources and quality. An auditee unable to demonstrate lineage or licensing review has a clear control gap.
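As a concrete picture of the documentation gap, here is a minimal sketch of the kind of lineage record an auditee might be expected to produce per dataset. The field names and gap checks are hypothetical illustrations, not wording from the standard.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Hypothetical minimal lineage record for one training dataset."""
    source: str                 # where the data came from
    license_status: str         # e.g. "reviewed: CC-BY-4.0" or "unreviewed"
    collected_on: str           # ISO 8601 date of collection
    quality_checks: list = field(default_factory=list)

    def audit_gaps(self):
        """Return the lineage gaps an auditor would likely flag."""
        gaps = []
        if self.license_status == "unreviewed":
            gaps.append("no licensing review")
        if not self.quality_checks:
            gaps.append("no documented quality checks")
        return gaps

record = DatasetRecord("public web scrape", "unreviewed", "2024-01-15")
print(record.audit_gaps())  # → ['no licensing review', 'no documented quality checks']
```

The point for auditors: the nonconformity is not that scraped data was used, but that no record like this exists to demonstrate the review happened.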
Q8 · CLAUSE 5 · LEADERSHIP
Top management’s responsibilities under ISO 42001 Clause 5 include all EXCEPT:
- A. Establishing the AI policy
- B. Ensuring resources are available for the AIMS
- C. Personally writing every AI risk assessment
- D. Promoting continual improvement
Why C is correct: Top management is responsible for ensuring AI risk assessments are performed and resourced — but they don’t personally author every assessment. This is a common distractor on management system exams.
Q9 · SCENARIO · SHADOW AI
An auditor finds that 60% of marketing staff use ChatGPT and Copilot daily — none documented in the AI system inventory. The AI policy states all AI systems must be inventoried. The MOST appropriate finding is:
- A. No finding — these are personal tools
- B. Major nonconformity — undocumented systemic shadow AI usage indicates control failure across A.4 (resources), A.6 (lifecycle), and A.10 (third-party AI)
- C. Observation — recommend a survey
- D. Outside scope of ISO 42001
Why B is correct: Shadow AI used at scale (60%) for business purposes falls within AIMS scope. Multiple controls are simultaneously affected — making this a systemic failure, not isolated. Auditors increasingly see this exact scenario.
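Mechanically, surfacing shadow AI reduces to reconciling observed tool usage against the documented inventory. A tiny sketch, with example tool names that are not drawn from the standard:

```python
# Documented AI system inventory vs. tools actually observed in use
# (e.g. from interviews or network logs). Names are illustrative.
inventory = {"fraud-model-v2", "chatbot-prod"}
observed  = {"fraud-model-v2", "ChatGPT", "Copilot"}

shadow_ai = observed - inventory  # in use but never inventoried
print(sorted(shadow_ai))  # → ['ChatGPT', 'Copilot']
```

Each item in the difference is a candidate piece of objective evidence that the "all AI systems must be inventoried" policy is not operating.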
Q10 · CLAUSE 10 · IMPROVEMENT
After identifying a nonconformity, ISO 42001 requires the organization to:
- A. Immediately terminate the AI system involved
- B. Issue a public disclosure
- C. React to control the nonconformity, evaluate the need for action to eliminate causes, and implement actions needed
- D. Hire external consultants
Why C is correct: Clause 10.2 specifies a three-step response: react to the nonconformity, evaluate the need for action to eliminate its causes, and implement the actions needed. Note that the requirement is to eliminate causes, not to apply cosmetic fixes; auditors verify that actual root-cause analysis was performed.