AI at work: quick rules for privacy, accuracy, and “human in the loop” checks

Introduction
AI tools are showing up everywhere at work. People use them to draft emails, summarise meetings, create first drafts of documents, write formulas, generate ideas, and tidy up messy notes. Used well, they can save time and help you think more clearly.
Used badly, they can cause real problems: private data gets shared in the wrong place, confident-sounding answers turn out to be wrong, and decisions get made with nobody taking responsibility.
This article is a practical guide to using AI safely at work. It is written for everyday users, not developers. You will get simple, quick rules you can apply straight away, plus a few deeper checks you can use when the stakes are higher.
The focus is on three things:
- Privacy: protecting personal and confidential information
- Accuracy: avoiding mistakes, made-up facts, and sloppy outputs
- Human in the loop: making sure a real person checks, approves, and stays accountable
Many organisations already have guidance you must follow, especially around data protection and security. In the UK, the ICO’s guidance on AI and data protection is a key reference point, including the need for proper governance, fairness, and meaningful human oversight in certain situations.
Security teams also warn that AI can introduce new risks if it is used without care, so it is worth treating AI as a workplace tool that needs sensible controls.
A quick reality check: what AI is (and is not)
Most everyday “AI assistant” tools are built on large language models (LLMs). They predict the next word based on patterns from training data. That means:
- They can sound fluent even when they are wrong.
- They may invent details that look convincing.
- They do not “know” your organisation, policies, or facts unless you provide them (or unless they are connected to your systems in a controlled way).
- They can reflect bias or gaps in data.
- They can misunderstand your request if it is vague.
So the safest mindset is:
Treat AI output as a draft from a fast assistant, not as a final answer from an expert.
The “traffic light” rule for AI at work
Before you paste anything into an AI tool, do a fast traffic light check:
Green: usually OK
- Public information (already on your website)
- Generic templates (meeting agenda, email structure)
- Rewriting your own text to be clearer
- Brainstorming non-sensitive ideas
Amber: proceed carefully
- Internal process notes
- Draft policies or training material that might include internal details
- Summaries of internal meetings (without names or sensitive topics)
- Anything that could cause embarrassment if shared
Red: do not enter these into an AI tool unless your organisation has explicitly approved the tool for this use
- Personal data (names + contact details, HR info, health data, performance info)
- Client confidential information
- Financial details (bank info, invoices with personal data)
- Passwords, API keys, security details
- Legal advice drafts tied to real cases
- Anything “high impact” (hiring, firing, disciplinary action, credit decisions)
This aligns with common UK guidance that data protection and governance still apply even if a tool feels informal.
Part 1: Quick rules for privacy
Privacy mistakes are often simple: copying and pasting the wrong thing into the wrong place. The fix is mostly good habits and clear boundaries.
Rule 1: Use approved tools and accounts only
If your organisation provides an approved AI tool (or a specific account for AI), use that. Do not use personal accounts for work data.
A useful baseline is government guidance that encourages civil servants to be careful with generative AI, to use it in line with organisational rules, and to treat it as a tool that needs judgement.
If your organisation has no guidance, your safest option is to keep AI use to Green tasks until your leadership sets rules.
Rule 2: Share less, not more
AI prompts often contain far more information than needed. You can usually get a great answer with less detail.
Instead of:
“Write a response to John Smith at ABC Charity, who complained about invoice number 48372 and said the delivery was late by 8 days…”
Try:
“Write a polite response to a customer complaint about a late delivery and an invoice query. Keep it professional and propose next steps.”
If you must include context, remove identifiers first (see the sketch after this list):
- Replace names with roles (“the customer”, “the supplier”)
- Remove phone numbers, addresses, invoice numbers
- Remove dates if they are not needed
- Summarise sensitive details rather than pasting raw text
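If you do this often, a small helper script can take the first pass at the obvious patterns before you paste anything. Below is a minimal sketch in Python; the patterns (email addresses, rough UK phone numbers, invoice references) are illustrative assumptions, and a manual read-through is still needed because names, for example, are not caught.

    import re

    # Illustrative patterns only; adjust them to the identifiers that appear in your own text.
    PATTERNS = {
        r"[\w.+-]+@[\w-]+\.[\w.]+": "[email]",                          # email addresses
        r"\b0\d{4}[\s-]?\d{6}\b": "[phone]",                            # rough UK phone numbers
        r"\b(?:invoice|inv)\s*(?:no\.?|number)?\s*\d+\b": "[invoice]",  # invoice references
    }

    def redact(text: str) -> str:
        """Replace obvious identifiers with neutral placeholders before pasting into an AI tool."""
        for pattern, placeholder in PATTERNS.items():
            text = re.sub(pattern, placeholder, text, flags=re.IGNORECASE)
        return text

    print(redact("Contact jane@example.org about invoice number 48372 on 07700 900123."))
    # -> Contact [email] about [invoice] on [phone].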
Rule 3: Do not paste personal data unless you have a clear lawful basis and an approved process
In the UK, personal data processing must follow UK GDPR principles. The ICO’s guidance makes clear that AI systems still need proper accountability and governance.
You do not need to become a lawyer to follow a sensible rule:
- If it relates to a real person and it is not public, treat it as Red unless you are sure it is permitted.
Rule 4: Watch out for “hidden” sensitive data
Even if you do not paste a name, you can still expose someone by including a unique combination:
- job title + team + unusual situation
- location + date + incident details
- “the only person in our office who…”
If a colleague could be recognised from your prompt, treat it as personal data.
Rule 5: Avoid uploading documents unless you know exactly where they go and how they are used
Many AI tools allow file upload (PDFs, spreadsheets, emails). That can be useful, but it increases privacy risk. Only upload if:
- the tool is approved for that use
- the document is not sensitive, or it is properly anonymised
- you understand retention (does it store your file, and for how long?)
If you cannot answer those questions, do not upload the file.
Rule 6: Keep a basic record for higher-risk use
For routine Green tasks, you do not need a paper trail. For Amber tasks, it helps to keep a simple note:
- what you used AI for
- what data you included (high level)
- who checked it
This supports accountability and governance, which is a recurring theme in AI guidance.
Part 2: Quick rules for accuracy
Accuracy is the biggest day-to-day risk for most teams. The tricky part is that AI mistakes are not always obvious. The output can look neat, logical, and completely wrong.
Rule 7: Assume the first answer is a draft
A strong habit is to treat the first output as “Version 0.7”. You then improve it with checks.
Ask yourself:
- What could be wrong here?
- What would I need to prove this is correct?
- What is missing?
Rule 8: Make the AI show its working (in plain English)
Instead of asking:
“What should we do?”
Ask:
“Give me 3 options. For each option, list assumptions, risks, and what I should verify.”
This forces structure and makes it easier to spot nonsense.
Rule 9: Always verify facts, figures, and claims
AI can hallucinate. That includes:
- invented policy references
- made-up dates
- wrong product features
- incorrect legal wording
- inaccurate statistics
A simple rule:
- If it matters, verify it using a trusted source.
For workplace AI risk management, the idea of testing, evaluation, and ongoing monitoring shows up in major frameworks like NIST’s AI RMF.
Rule 10: Use a “two-source” check for anything important
For medium or high stakes work:
- Ask AI for an answer.
- Check it against at least one trusted source (policy, official documentation, data, a subject expert).
- If you cannot verify it, do not use it as a final output.
This is especially important for anything customer-facing, legal, financial, or HR-related.
Rule 11: Be careful with summaries
Summaries are one of the most popular uses of AI, and one of the easiest ways to introduce subtle errors.
Common summary failures:
- missing key decisions
- mixing up who agreed what
- changing the tone (making someone sound harsher or more confident)
- removing important nuance
A safe approach:
- Use AI to produce a summary.
- Then do a quick “compare to source” scan.
- Add a line: “Please confirm I captured this correctly.”
Rule 12: Watch out for spreadsheet and maths errors
Some models are better than others at maths, but errors still happen. Use AI to:
- suggest an approach
- explain a formula
- outline steps
But for final numbers:
- check with the spreadsheet itself
- use a calculator or a second method (see the sketch below)
- sanity check (does this number make sense?)
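If you work with figures outside a spreadsheet, the "second method" idea is easy to script. Below is a minimal sketch in Python; the monthly figures and the quoted total are made up purely for illustration.

    # Made-up figures, purely to illustrate recomputing a quoted number a second way.
    monthly_sales = [1250.00, 980.50, 1410.25, 1105.75]
    quoted_total = 4746.50  # the total an AI draft quoted (hypothetical)

    recalculated = sum(monthly_sales)  # second method: work it out yourself
    if abs(quoted_total - recalculated) > 0.01:
        print(f"Mismatch: quoted {quoted_total:,.2f}, recalculated {recalculated:,.2f}")
    else:
        print(f"Quoted total matches the recalculation: {recalculated:,.2f}")

    # Sanity check: does the average per month look plausible?
    print(f"Average per month: {recalculated / len(monthly_sales):,.2f}")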
Rule 13: Treat generated code, formulas, and scripts as untrusted until tested
If AI gives you:
- Excel formulas
- Power Automate expressions
- PowerShell scripts
- SQL queries
Test them in a safe environment first. Start with a small dataset. Confirm the result.
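For example, if an AI tool suggests a small data-cleaning function, run it first on a handful of values where you already know the right answer. Here is a minimal sketch in Python; the function and the sample values are hypothetical, standing in for whatever the tool generated.

    from datetime import date

    def parse_uk_date(text: str) -> date:
        """Hypothetical AI-suggested helper: parse dates written as DD/MM/YYYY."""
        day, month, year = text.strip().split("/")
        return date(int(year), int(month), int(day))

    # A tiny test set with answers you already know.
    known_good = {
        "01/02/2024": date(2024, 2, 1),    # easy to confuse with 2 January in US ordering
        "31/12/2023": date(2023, 12, 31),
    }

    for raw, expected in known_good.items():
        result = parse_uk_date(raw)
        status = "OK" if result == expected else f"WRONG (got {result})"
        print(f"{raw} -> {status}")

    # Only once this small test passes should you point it at the real data.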
Part 3: “Human in the loop” checks (the part that keeps you safe)
“Human in the loop” means a person is actively involved in reviewing and approving AI output before it is used. It is not just “someone could check if they wanted to”. It is a real step in the process.
This matters because:
- AI tools can be wrong in ways you do not notice
- AI outputs can affect people (fairness, reputation, opportunity)
- You need accountability: someone must own the outcome
The ICO has specific guidance on meaningful human oversight in the context of individual rights and automated decision-making.
The three levels of human checking
Level 1: Light review (low risk)
Use for Green tasks:
- you read it
- you correct obvious issues
- you confirm it matches your intent
Examples:
- drafting a generic email
- rewriting text for clarity
- brainstorming agenda items
Level 2: Structured review (medium risk)
Use for Amber tasks:
- you check against source materials
- you verify key facts
- you adjust tone and wording
- you ensure no sensitive data leaks
Examples:
- summarising an internal meeting
- drafting an internal policy update
- creating a customer FAQ draft
Level 3: Formal approval (high risk)
Use for Red tasks or high impact outputs:
- a named person reviews and signs off
- you keep a record of what was reviewed
- you test or validate results
- you have an escalation route if something looks wrong
Examples:
- HR decisions and communications about individuals
- customer complaint responses involving compensation
- legal or regulatory wording
- financial reporting narratives
- anything that could materially affect someone’s job, pay, or rights
A simple “human in the loop” checklist
Before you send, publish, or rely on AI output, ask:
- Data: Did I include anything private or confidential that should not be here?
- Truth: What are the key facts and have I verified them?
- Impact: Who could be affected if this is wrong?
- Bias: Does it treat people fairly, or does it make assumptions?
- Tone: Would I be happy if this was forwarded to senior leadership or a client?
- Ownership: Am I comfortable putting my name to this?
If you cannot confidently answer these, pause and escalate.
Practical examples: using AI safely in everyday work
Here are a few common scenarios and what “safe use” looks like.
Scenario A: Drafting an email to a client
Good use: Ask AI for structure, tone, and a clear call to action.
Safer prompt:
“Draft a polite email chasing an overdue response. Keep it friendly, professional, and short. Include two options for next steps.”
Human check:
- confirm it matches the situation
- remove anything that could sound passive-aggressive
- ensure you are not committing to something you cannot deliver
Scenario B: Summarising meeting notes for the team
Good use: Turn rough notes into a clear summary with actions.
Safer approach: Remove names and sensitive topics, or summarise them yourself.
Human check:
- confirm decisions and owners are correct
- confirm dates and deadlines
- ensure you did not misrepresent anyone
Scenario C: Creating a draft process or SOP
Good use: Generate a first draft structure and headings.
Human check:
- adapt it to your real tools and steps
- ensure it matches your policies
- remove generic fluff and add specifics
Scenario D: Analysing a dataset for trends
Good use: Ask AI for ideas about what to look for, or how to present findings.
Human check:
- validate calculations in Excel/Power BI
- confirm chart choices are sensible
- avoid misleading claims
The security angle: don’t forget attackers exist
AI tools can also be used as a route into your organisation’s systems and people.
Two simple, practical points from cyber security guidance:
- Treat AI outputs with caution, especially if they include links, files, or instructions.
- Be careful about prompts and content that come from outside the organisation (for example, text pasted from an email), because they might be designed to manipulate the tool or the user.
What this means in plain terms:
- Do not blindly follow AI instructions.
- Do not paste unknown content into sensitive workflows.
- Do not run scripts or macros suggested by AI unless you understand and test them.
A one-page workplace AI policy (simple starter)
If your organisation is still figuring this out, this is a sensible starting point you can adapt:
Allowed uses (examples)
- Drafting and editing non-sensitive text
- Creating templates, checklists, and training outlines
- Summarising non-sensitive information with a human review
- Brainstorming ideas and options
Not allowed (examples)
- Entering personal data or client confidential information into unapproved tools
- Fully automated decisions about people (hiring, firing, discipline)
- Publishing AI content without human review
- Using AI outputs as “facts” without verification
Required checks
- Verify facts for anything external-facing
- Use a second source for important claims
- Keep a record for high-risk outputs
- Named approver for high-impact decisions
This kind of approach fits with UK ethical and governance thinking: practical actions, not just principles.
The most useful habit: ask better questions
A lot of AI “risk” comes from vague prompts. Better prompts make it easier to check the result.
Try these add-ons:
- “List your assumptions.”
- “What could be wrong with this?”
- “Give me a checklist to verify this.”
- “Provide options with pros and cons.”
- “Rewrite this for a UK audience in plain English.”
- “Highlight any parts that require a subject expert.”
These prompts naturally support accuracy and human oversight.
When you should not use AI
Even with good checks, there are times to avoid it:
- when you do not have permission to share the data
- when the output could have a major impact on someone’s rights or job
- when you cannot verify the result
- when the tool is not approved for your organisation
- when you are under pressure and likely to skip review steps
If you are tempted to use AI because you are rushed, that is exactly when mistakes happen.
Learn AI in a practical, workplace-friendly way
If you want your team to use AI confidently and safely, structured training helps. It gives people clear boundaries, real examples, and the habits that prevent expensive mistakes.
You can view ExperTrain’s Artificial Intelligence (AI) courses here:
Artificial Intelligence Courses




