ISACA Releases New Cyber Audit & Assurance Resources


The Information Systems Audit and Control Association (ISACA) has rolled out two new audit and assurance resources covering cybersecurity controls and AI, the ISACA Cybersecurity Audit Program: Based on the NIST Cybersecurity Framework 2.0 and the Artificial Intelligence Audit Toolkit, to better equip auditors to keep pace with the changing cyber and emerging technology landscape.

The ISACA Cybersecurity Audit Program: Based on NIST Cybersecurity Framework 2.0 updates ISACA’s 2016 IS Audit/Assurance Cybersecurity Program to include new content that reflects the changes in the NIST CSF 2.0.

It covers the six NIST CSF 2.0 functions (govern, identify, protect, detect, respond and recover) and delves into categories including cybersecurity supply chain risk management, platform security, adverse event analysis, and incident recovery plan execution.

The audit program enables auditors to verify compliance with NIST CSF 2.0, assess the effectiveness of security controls, policies, procedures and programs, communicate control status and cybersecurity preparedness to management and other key stakeholders, and identify areas of current or emerging risk for organisations.

The NIST Cybersecurity Framework 2.0 audit program features improved functionality in the standard Excel spreadsheet, with additional columns to track the auditor’s opinion and testing observations, as well as a worksheet with summary charts. It is also available in a new Word document format. New recommended request list items have been added to each subcategory, with a newly created appendix summarising the request list. Additionally, the audit program now includes an evaluation worksheet that auditors can use to document the evaluation of NIST CSF 2.0 subcategory implementation status.

While there is currently no single standardised framework or methodology for auditing AI, auditors seeking a deeper understanding of AI controls can leverage the Artificial Intelligence Audit Toolkit, a library of AI controls sourced from select control frameworks and laws. The library is meant to help auditors better understand how these controls relate to different aspects of the AI lifecycle.

The assessment guide within the Artificial Intelligence Audit Toolkit provides a methodology to evaluate the control design and operating effectiveness of AI-enabled systems, tools and processes. It covers controls organised into control families and categories across various areas, including AI bias mitigation and fairness, AI data privacy and rights, human-AI interaction and experience, and secure systems design and development.

Additionally, it walks through the six dimensions of AI explainability (rationale, responsibility, data, fairness, safety and performance, and impact), plus the key elements in the assessment development approach, control synthesis and mapping, and explainability integration. The Excel-based toolkit provides a comprehensive resource to support AI assessment efforts, with spreadsheets offering guidance on the AI control assessment pertaining to each explainability dimension.

“The digital trust professionals in ISACA’s global community are working in constantly evolving fields, and ISACA is committed to walking alongside them with the tools, resources and best practices they need to do their jobs effectively,” said ISACA GRC Professional Practices Principal Lisa Cook. “During periods of uncertainty with technology or regulations that are in their nascent stage, such as with AI, it is especially important to ensure the professional community is equipped and supported.”
