Audit Technology

Prompt Engineering for Auditors: How to Use AI Safely in Audit Practice [2026]

A practical guide for Chartered Accountants using LLMs like ChatGPT, Claude, and Gemini in audit — safe use cases, dangerous pitfalls, 10 prompt templates, and confidentiality best practices.

CORAA Team
24 March 2026 · 13 min read


Large language models — ChatGPT, Claude, Gemini, and others — have become part of the working toolkit for professionals across industries. Chartered Accountants are no exception. Whether it is drafting an engagement letter, researching an accounting standard, or understanding a complex regulatory requirement, LLMs can accelerate work that would otherwise consume hours.

But the audit profession operates under constraints that most other professions do not face. Auditors are bound by standards of evidence, documentation, professional scepticism, and confidentiality that make the naive use of LLMs genuinely risky. An engagement partner who drafts an audit opinion using ChatGPT and does not verify the output is not just being careless — they are potentially violating SA 700 and undermining the entire basis of the audit report.

This guide provides a practical framework for using LLMs in audit practice. It distinguishes between safe and dangerous use cases, provides ready-to-use prompt templates, and establishes ground rules that protect both the firm and its clients.


The Fundamental Principle: LLMs Are Research Assistants, Not Audit Tools

This distinction must be understood before any LLM is used in audit work.

An LLM is a research assistant. It can help you find information, draft preliminary documents, organise your thinking, and explain complex topics. It cannot perform audit procedures. It cannot evaluate audit evidence. It cannot exercise professional judgement. It cannot form an audit opinion.

The reason is architectural. LLMs are probabilistic systems. They generate text that is statistically probable given their training data and your prompt. They do not reason from evidence to conclusion in the way that audit standards require. They cannot distinguish between what is supported by evidence and what merely sounds plausible. They may generate outputs that are entirely fabricated but presented with confidence — the hallucination problem.

This means every LLM output must be treated as a draft that requires human verification. No LLM output should enter the audit file without the auditor having independently confirmed its accuracy and relevance. The LLM accelerates the auditor's work; it does not replace the auditor's judgement.


Safe Use Cases: Where LLMs Add Value

The following use cases are appropriate for LLM assistance in audit practice, provided the outputs are reviewed and verified by the auditor before use.

1. Drafting Engagement Letters

Engagement letters follow well-established formats. An LLM can generate a first draft based on the engagement type, entity characteristics, and scope requirements. The auditor then reviews the draft against the actual engagement terms, verifies compliance with SA 210, and adjusts as needed.

The LLM saves drafting time. The auditor's review ensures accuracy.

2. Researching Accounting Standards

Understanding the requirements of a specific Ind AS, SA, or other standard is a common task. LLMs can summarise key requirements, explain the interaction between different paragraphs, and provide a plain-language interpretation of complex standard language.

This is particularly valuable for junior team members who may need help understanding standards they have not previously applied. The LLM provides a starting explanation; the team member then reads the actual standard to verify.

3. Summarising Complex Regulations

When a client operates in a regulated industry, the auditor may need to understand industry-specific regulations — RBI circulars for banking clients, SEBI regulations for listed entities, IRDAI guidelines for insurance companies. LLMs can provide summaries of these regulations, highlighting the provisions most relevant to financial reporting.

Again, the summary is a starting point. The auditor verifies against the actual regulatory text.

4. Generating Checklists

Audit checklists — for planning, risk assessment, substantive procedures, and completion — can be generated by LLMs based on the engagement type and entity characteristics. The LLM produces a comprehensive initial list; the auditor reviews it against the firm's methodology, the specific engagement requirements, and applicable standards.

5. Brainstorming Risk Factors

SA 315 requires the engagement team to discuss the susceptibility of the entity's financial statements to material misstatement. LLMs can generate initial lists of risk factors based on the entity's industry, size, structure, and operating environment. This does not replace the team discussion — it seeds it with a comprehensive starting list that the team can then evaluate, add to, and prioritise.

6. Explaining Complex Concepts to Junior Staff

Senior auditors often spend time explaining technical concepts to team members — the mechanics of expected credit loss calculations, the principles of hedge accounting, the requirements for related party disclosures. LLMs can provide clear, structured explanations that junior staff can study before asking targeted questions of their seniors.

7. Drafting Client Communications

Management letters, control deficiency reports, and other client communications require clear, professional writing. LLMs can generate first drafts based on the findings the auditor wants to communicate. The auditor reviews the tone, accuracy, and completeness before the communication is finalised.

8. Comparative Analysis

When the auditor needs to understand the differences between two standards (e.g., Ind AS 115 vs AS 9, or SA 240 vs the superseded AAS), LLMs can generate structured comparisons that highlight the key differences. The auditor verifies the comparison against the actual standard texts.


Dangerous Use Cases: Where LLMs Must Not Be Used

The following use cases involve professional judgement, evidence evaluation, or opinion formation. LLMs are categorically inappropriate for these tasks.

Generating Audit Opinions

The audit opinion is the product of the auditor's professional judgement, informed by all the evidence gathered during the engagement. An LLM cannot evaluate that evidence. It cannot weigh conflicting indicators. It cannot assess whether a misstatement is material in the context of the financial statements as a whole. Generating an audit opinion with an LLM — even as a "starting draft" — is dangerous because it creates a document that has the form of an opinion without the substance of one.

Creating Financial Statements

Financial statements must be prepared in accordance with the applicable financial reporting framework, based on the entity's actual transactions and balances. An LLM that generates financial statement text is working from probabilities, not from the entity's actual data. It may produce figures that look reasonable but bear no relationship to reality.

Making Materiality Judgements

Materiality is a matter of professional judgement that considers the entity's specific circumstances, the needs of the users of the financial statements, and the nature of the items in question. LLMs cannot assess these factors. They can tell you what a textbook says about materiality — they cannot determine the appropriate materiality level for a specific engagement.

Assessing Going Concern

Going concern assessment requires evaluating management's plans, the entity's financial condition, industry conditions, and other factors specific to the entity. An LLM has no access to this entity-specific information (unless you provide it, which raises confidentiality issues discussed below). Even with the information, the LLM cannot exercise the professional scepticism required by SA 570.

Evaluating Audit Evidence

SA 500 requires the auditor to evaluate the sufficiency and appropriateness of audit evidence. This evaluation requires professional judgement about the relevance and reliability of each piece of evidence, considered in the context of the overall engagement. LLMs cannot perform this evaluation.

Performing Fraud Risk Assessment

SA 240 requires the auditor to maintain professional scepticism and consider the risk of fraud. Fraud risk assessment involves understanding the entity's specific circumstances, evaluating management's integrity, and considering incentives and opportunities for fraud. This is inherently judgemental and entity-specific — not amenable to probabilistic text generation.


10 Prompt Templates for Auditors

The following templates are designed for use with any major LLM. They are structured to produce useful outputs while keeping the auditor in control.

Template 1: Standard Summary

Summarise the key requirements of [SA number / Ind AS number / standard name].
Focus on:
- The standard's objective
- Key definitions
- Principal requirements (not application guidance)
- Documentation requirements
- Effective date and transitional provisions

Format the output as a structured summary with headings.
Do not include application examples unless I ask for them separately.

Template 2: Audit Procedure Generation

List the substantive audit procedures for testing the [assertion: existence / completeness / accuracy / valuation / rights and obligations / presentation] assertion for the [account name] account in a [industry type] entity.

For each procedure:
- State the procedure clearly
- Identify the evidence to be obtained
- Note the relevant auditing standard reference

Present in order from most to least effective.

Template 3: Management Representation Letter Draft

Draft a management representation letter for a [type: statutory audit / tax audit / limited review] of a [entity type: private limited company / LLP / partnership firm] for the financial year ended [date].

Include representations required by:
- SA 580 (Written Representations)
- [Any additional standards relevant to the engagement]
- [Any entity-specific matters, e.g., related party transactions, litigation]

Use formal language appropriate for a management representation letter under Indian auditing standards.

Template 4: Fraud Red Flags

What are the red flags and warning indicators for [type of fraud: revenue manipulation / procurement fraud / payroll fraud / journal entry fraud / inventory manipulation] in a [industry] company operating in India?

Organise the indicators by:
- Pressure/incentive indicators
- Opportunity indicators
- Rationalisation indicators

For each indicator, explain what the auditor should look for in the accounting records and supporting documentation.

Template 5: Standards Comparison

Compare [Standard 1, e.g., Ind AS 116] with [Standard 2, e.g., AS 19] regarding [specific topic, e.g., lease classification and measurement].

Present the comparison as a table with the following columns:
- Topic/Requirement
- [Standard 1] treatment
- [Standard 2] treatment
- Key differences and their impact on financial statements

Focus on differences that are material to financial reporting rather than minor presentational differences.

Template 6: Industry Risk Identification

What are the key audit risks and financial reporting considerations for a [industry] company in India?

Cover:
- Industry-specific revenue recognition issues
- Common estimation uncertainties
- Regulatory compliance requirements affecting financial reporting
- Typical related party structures and risks
- Going concern factors specific to this industry
- Common control weaknesses in this industry

Base your response on publicly known industry characteristics. Do not make assumptions about any specific entity.

Template 7: Internal Control Evaluation

For the [business process: procure-to-pay / order-to-cash / payroll / treasury / inventory management] cycle in a [entity type and size]:

1. List the key controls that should exist at each stage of the process
2. For each control, identify:
   - The financial statement assertion it addresses
   - How the control should operate (manual vs automated, frequency, who performs it)
   - What could go wrong if the control is absent or ineffective
3. Suggest test of controls procedures for each key control

Present in a table format.

Template 8: Accounting Treatment Research

What is the correct accounting treatment under [Ind AS / Indian GAAP] for [describe the transaction or arrangement in general terms without using client-specific information]?

Address:
- The applicable standard(s)
- Recognition criteria
- Measurement (initial and subsequent)
- Presentation and disclosure requirements
- Common practical issues and judgements involved

If the treatment differs between Ind AS and Indian GAAP, highlight the differences.

Template 9: Analytical Procedures Expectation

I am performing analytical procedures on the [financial statement area] of a [industry] company. The following general trends are observable:
[List general, non-confidential trends, e.g., "revenue increased by 15%," "gross margin decreased by 3 percentage points"]

What are the plausible explanations for these trends, considering:
- Industry conditions
- Economic factors
- Common business drivers

For each explanation, suggest what corroborative evidence the auditor should seek.

Note: I have not provided any client-identifying information. Respond based on general industry knowledge.

Template 10: Disclosure Checklist

Generate a disclosure checklist for [Ind AS / AS / Companies Act 2013 Schedule III] applicable to a [entity type] for the financial year ended [date].

Focus on disclosures related to:
- [Specific area, e.g., financial instruments, related parties, segment reporting]

For each disclosure requirement:
- State the requirement and its source (standard paragraph reference)
- Indicate whether it is mandatory or conditional
- Note any measurement or quantitative requirements

The Confidentiality Rule: Non-Negotiable

This section is the most important in this guide.

Never put client-specific data into a public LLM.

This includes:

  • Client names, entity names, or any identifying information
  • Financial figures from the client's records
  • Details of transactions, balances, or arrangements
  • Names of client personnel
  • Details of audit findings
  • Working paper content
  • Draft financial statement text
  • Details of legal matters, litigation, or regulatory proceedings

The reason is straightforward: data entered into public LLMs may be used for model training, may be stored on external servers, and may be accessible to the LLM provider's personnel. This violates the auditor's duty of confidentiality under the Code of Ethics and may violate data protection regulations.

If your firm uses an LLM, it should be through an enterprise deployment that provides:

  • Data isolation: Your inputs are not used for model training
  • Encryption: Data in transit and at rest is encrypted
  • Access controls: Only authorised personnel can access the system
  • Audit trail: Usage is logged and reviewable
  • Data residency: Data is stored in a jurisdiction consistent with your obligations

If you do not have an enterprise deployment, use LLMs only for generic, non-entity-specific queries. The prompt templates above are designed to be usable without any client-specific information.
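A firm can back this rule with a simple pre-submission check. The sketch below is a minimal, illustrative example — the pattern list and function name are hypothetical, and a real policy would use the firm's own client registers and data loss prevention tooling rather than a handful of regexes:

```python
import re

# Illustrative patterns a firm might block before a prompt leaves the machine.
# These are rough shapes, not validated formats.
BLOCKED_PATTERNS = [
    r"\b[A-Z]{5}\d{4}[A-Z]\b",   # PAN-style identifier
    r"₹\s?[\d,]+",               # rupee amounts
]

def prompt_is_safe(prompt: str, client_names: list[str]) -> tuple[bool, list[str]]:
    """Return (safe?, reasons) for a draft prompt before it is sent to a public LLM."""
    reasons = []
    for name in client_names:
        if name.lower() in prompt.lower():
            reasons.append(f"contains client name: {name}")
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt):
            reasons.append(f"matches blocked pattern: {pattern}")
    return (not reasons, reasons)
```

A generic query such as "Summarise the key requirements of SA 570" passes; a prompt containing a client name or a rupee figure is flagged for rewording before submission.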


Best Practices for LLM Use in Audit

1. Always Verify

Every LLM output must be verified against authoritative sources before it is used. If the LLM summarises an accounting standard, read the standard. If it generates a checklist, compare it to your firm's methodology. If it drafts a letter, review every statement for accuracy.

LLMs are confident and articulate — they can be wrong with the same confidence and articulation with which they are right. Professional scepticism applies to AI outputs just as it applies to management representations.

2. Document AI Assistance

If you used an LLM to assist with any aspect of the engagement, document that fact. Note what the LLM was used for, what output it produced, and how the output was verified. This is consistent with the documentation principles of SA 230 and demonstrates that the auditor maintained control of the process.

Your quality management framework should include policies on documenting AI usage in engagements.
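One way to make such a policy concrete is a structured usage record. The field names below are illustrative — SA 230 does not prescribe a format — but they capture the elements described above: purpose, output, and verification:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class LLMUsageRecord:
    """One entry in a firm's AI-usage log. Field names are illustrative only."""
    engagement_id: str
    purpose: str           # e.g. "draft management representation letter"
    model: str             # which approved platform and deployment was used
    prompt_summary: str    # what was asked, stated without client data
    verification: str      # how the output was checked before use
    reviewed_by: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = LLMUsageRecord(
    engagement_id="ENG-001",
    purpose="summarise SA 580 requirements",
    model="firm-approved enterprise LLM",
    prompt_summary="asked for key written-representation requirements",
    verification="checked against the text of SA 580",
    reviewed_by="engagement manager",
)
```

Serialising each record (for example with `asdict`) gives a reviewable log that slots into the firm's existing quality management documentation.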

3. Maintain Professional Scepticism

An LLM that tells you "the correct accounting treatment is X" is not an authoritative source. It is a statistical prediction of what text is likely to follow your prompt. Treat it as you would a suggestion from a junior team member — potentially useful, but requiring verification.

4. Use Specific, Structured Prompts

Vague prompts produce vague outputs. The more specific your prompt, the more useful the output.

Weak prompt: "What should I audit in a bank?"
Strong prompt: "List the substantive audit procedures for testing the valuation assertion for the loan loss provision in a scheduled commercial bank regulated by RBI, applying Ind AS 109 expected credit loss methodology."
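Structured prompts like the strong example above are also easy to standardise. As a minimal sketch, a firm could keep its approved templates as parameterised strings so that staff fill in placeholders rather than writing prompts from scratch (the template text here is an abbreviated, hypothetical example):

```python
from string import Template

# An abbreviated, illustrative prompt skeleton with named placeholders.
PROCEDURE_PROMPT = Template(
    "List the substantive audit procedures for testing the $assertion assertion "
    "for the $account account in a $industry entity. For each procedure, state "
    "the procedure, the evidence to be obtained, and the relevant SA reference."
)

prompt = PROCEDURE_PROMPT.substitute(
    assertion="valuation",
    account="loan loss provision",
    industry="scheduled commercial bank",
)
```

Keeping templates centralised means the specificity is built in once, and `substitute` raises an error if a placeholder is left unfilled — a useful guard against vague, half-completed prompts.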

5. Iterate and Refine

If the first output is not quite right, refine the prompt rather than accepting a suboptimal result. Add constraints, specify the format, ask for more detail in specific areas, or ask the LLM to reconsider a particular point.

6. Do Not Rely on LLM Legal or Regulatory Interpretations

LLMs may provide plausible interpretations of legal or regulatory requirements that are incorrect or incomplete. For legal matters, consult legal counsel. For regulatory matters, consult the regulatory text and, if necessary, the relevant regulatory authority.

7. Keep Up With Model Updates

LLMs are updated regularly. The behaviour of ChatGPT in March 2026 may differ from its behaviour in December 2025. Be aware that model updates can affect output quality and consistency. A prompt that worked well with one model version may produce different results with the next.

8. Establish Firm-Level Policies

Individual auditors should not be making ad hoc decisions about LLM use. The firm should establish clear policies covering:

  • Approved LLM platforms (enterprise vs public)
  • Approved use cases (safe vs prohibited)
  • Confidentiality requirements
  • Documentation requirements
  • Quality control procedures for LLM-assisted work
  • Training requirements for team members

The Relationship Between LLMs and Purpose-Built Audit AI

It is important to distinguish between general-purpose LLMs and purpose-built, deterministic audit AI. They serve fundamentally different functions.

General-purpose LLMs (ChatGPT, Claude, Gemini) are research and drafting tools. They help with knowledge work — understanding standards, drafting documents, generating ideas. They do not process audit data or perform audit procedures.

Purpose-built audit AI (such as CORAA's AI agents for ledger scrutiny, reconciliation, and vouching) is designed to perform specific audit procedures on actual engagement data. These systems are deterministic, reproducible, and produce complete audit trails. They are audit tools, not research assistants.

The two are complementary, not competitive. Use LLMs for research and drafting. Use purpose-built audit AI for substantive procedures and data analysis. Maintain professional judgement as the governing layer over both.


Moving Forward Responsibly

LLMs are powerful tools that can make audit practice more efficient and more informed. They can reduce the time spent on routine drafting and research, freeing auditors to focus on judgement, investigation, and client interaction.

But they must be used within boundaries. Those boundaries are defined by auditing standards, professional ethics, confidentiality obligations, and the fundamental requirement that the auditor — not the AI — is responsible for the audit.

Use LLMs as research assistants. Verify everything they produce. Never put client data into public systems. Document your usage. Maintain scepticism. And remember that the value of a Chartered Accountant is not in generating text — it is in exercising professional judgement that no AI, however sophisticated, can replicate.
