AI, chatbots and your organisation: a practical introduction

Over the past year or so, “AI” has gone from being a buzzword to something many organisations feel they ought to be using. We’re increasingly being asked about adding AI chatbots to websites, particularly as part of website design and development projects for not‑for‑profit organisations, and often with very little clarity about what these tools actually do, what they cost, or what the risks might be.

This post is meant as a helpful plain‑English guide, not a sales pitch. It’s written especially for small not‑for‑profit organisations, many of which work with people in challenging or vulnerable situations and rely heavily on their websites as a primary point of contact.

AI tools can be genuinely useful in the right circumstances. But they can also introduce new risks if they’re adopted too quickly or without clear boundaries. Our goal here is to help you understand the landscape well enough to make calm, informed choices – including deciding not to use AI where that’s the better option.

What people usually mean by “AI” on a website

When people talk about adding “AI” to a website, they are almost always talking about an AI chatbot powered by a Large Language Model.

This is very different from traditional website functionality, which follows fixed rules written by a developer. AI chat systems don’t “know” things in the human sense – they generate text based on patterns learned from very large amounts of data.

That difference has big implications for cost, reliability, and risk, particularly on public‑facing not‑for‑profit websites.

What is a Large Language Model (LLM)?

An LLM (Large Language Model) is the technology behind tools like ChatGPT, Copilot, Claude, and others.

In simple terms:

  • An LLM has been trained on a huge amount of text
  • It predicts the next most likely word based on what you ask it
  • It is very good at producing fluent, human‑like text
  • It can sound confident even when it is wrong

LLMs don’t understand context, intent, or consequences in the way a human does. They also don’t automatically follow your organisation’s values or safeguarding policies unless carefully constrained.
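
If it helps to see the “next word” idea concretely, here is a deliberately tiny toy illustration in Python. It is not a real language model – just a hand‑written table of made‑up probabilities – but it shows the kind of guessing that goes on under the hood.

  # A toy illustration of "predict the next word" -- not a real language model,
  # just a hand-written table of made-up probabilities to show the idea.
  next_word_probabilities = {
      "our opening hours are": {"9am": 0.6, "listed": 0.3, "flexible": 0.1},
      "you can contact us by": {"email": 0.5, "phone": 0.4, "post": 0.1},
  }

  def predict_next_word(text):
      candidates = next_word_probabilities.get(text, {})
      # Pick whichever word the table says is most likely to come next.
      return max(candidates, key=candidates.get) if candidates else None

  print(predict_next_word("our opening hours are"))  # prints "9am"

A real LLM does something similar, but with billions of learned patterns rather than a small table – which is why it can sound fluent and plausible without ever checking whether an answer is true.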

What is an AI chatbot?

An AI chatbot is usually an LLM that has been:

  • Given instructions about how to behave
  • Given information about your organisation — for example from your website content or documents you provide
  • Set up to answer questions in a conversational way

A simple chatbot might answer general questions like opening hours or how to make a referral. More complex chatbots might attempt to give guidance, complete forms, or triage support requests.

As a general rule, the more responsibility you give a chatbot on your website, the more careful you need to be.
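
For anyone curious what that setup looks like in practice, here is a minimal sketch using the OpenAI Python library as an example (other providers work in a similar way). The organisation name, the content, and the model name are placeholders, not recommendations.

  from openai import OpenAI

  client = OpenAI()  # assumes an API key is already configured in the environment

  # 1. Instructions about how to behave
  instructions = (
      "You are the website chatbot for Example Charity. Only answer using the "
      "approved information below. If you are unsure, say so and point people "
      "to our contact page. Do not give advice."
  )

  # 2. Information about the organisation, e.g. taken from the website
  approved_content = "Opening hours: Mon-Fri, 9am-5pm. Referrals: see our referrals page."

  # 3. Answer questions in a conversational way
  response = client.chat.completions.create(
      model="gpt-4o-mini",  # example model name only
      messages=[
          {"role": "system", "content": instructions + "\n\n" + approved_content},
          {"role": "user", "content": "When are you open?"},
      ],
  )
  print(response.choices[0].message.content)

Everything the chatbot “knows” about your organisation comes from the content you feed into it, which is why keeping that content accurate and up to date matters so much.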

What is an “agent”?

You may hear people talk about AI “agents”. This can sound impressive, and it can be, but it also raises the stakes.

An agent is an AI system that can take actions, not just produce text. For example, an agent might:

  • Look things up in your document library
  • Create or update records
  • Send emails
  • Trigger workflows

Agents can save time in controlled environments, but they also increase the impact of mistakes. For organisations supporting vulnerable people, this type of automation needs very strong safeguards and, in many cases, may not be appropriate at all.

What is MCP?

MCP stands for Model Context Protocol. It’s similar to an API: a way for systems to talk to each other. The difference is that MCP is designed specifically for AI tools, helping define what information they can access and what actions they’re allowed to take.

In practical terms, MCP makes it easier for AI systems to connect to your internal data or services in a more structured way.

That can be helpful, but it also means decisions about access, permissions, and oversight become even more important.
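
As a very rough sketch of what this looks like, here is a minimal MCP server written with the official Python SDK’s FastMCP helper (details vary between versions, so treat this as illustrative rather than a recipe). It exposes one narrowly scoped tool, rather than open access to your systems.

  # A minimal MCP server sketch -- it exposes a single, tightly scoped tool.
  from mcp.server.fastmcp import FastMCP

  mcp = FastMCP("charity-info")

  @mcp.tool()
  def get_opening_hours() -> str:
      """Return the organisation's published opening hours."""
      return "Monday to Friday, 9am to 5pm"

  if __name__ == "__main__":
      mcp.run()

The point to notice is that you decide exactly which tools exist and what they can do – which is also where the governance questions (who approved this access, and who reviews it?) come in.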

A simple example: low risk vs higher risk

Lower‑risk use

A chatbot that:

  • Clearly states it is not a human
  • Only answers basic, factual questions
  • Uses pre‑approved content
  • Does not give advice
  • Does not collect personal information

Example:

“Here is a summary of our services and how to contact us.”

Higher‑risk use

A chatbot that:

  • Interprets personal situations
  • Gives advice or reassurance
  • Handles sensitive information
  • Appears caring or authoritative
  • Is used by people in distress

Example:

“Based on what you’ve said, you should try this next…”

For organisations working with vulnerable people, this second category introduces real safeguarding, ethical, and legal concerns.

Common risks to be aware of

Incorrect or misleading answers

AI systems can “hallucinate” – confidently giving information that is wrong, out of date, or inappropriate.

False reassurance

A chatbot can unintentionally downplay a serious issue or give the impression that meaningful support has been provided when it hasn’t.

Data protection concerns

People often share far more information with a chatbot than you might expect, including highly sensitive personal data.

Safeguarding and duty of care

Many AI tools include safety features and guardrails designed to reduce harmful or inappropriate responses. However, they still don’t understand safeguarding in the human sense and cannot replace trained staff, professional judgement, or established safeguarding processes.

Cost creep

AI chatbots are designed to feel conversational. Once someone feels they’re “in a conversation”, they’re often inclined to keep responding. Each exchange has a cost, and over time this can quietly use up your usage allowance or increase monthly costs, especially if many people are using the chatbot regularly.
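
As a rough back‑of‑the‑envelope illustration (every figure below is an assumption for illustration only – check your provider’s actual pricing), the sums look like this:

  # Rough monthly cost estimate for a public chatbot.
  # All numbers are illustrative assumptions, not real prices.
  conversations_per_month = 2000
  messages_per_conversation = 6      # people tend to keep the conversation going
  cost_per_message = 0.002           # e.g. a fraction of a penny per exchange

  monthly_cost = conversations_per_month * messages_per_conversation * cost_per_message
  print(f"Estimated monthly cost: £{monthly_cost:.2f}")  # about £24 at these assumptions

Small per‑message costs feel negligible, but they scale with how chatty the bot is and how many people use it – which is exactly the kind of growth that is hard to predict in advance.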

When AI can make sense

AI tends to work best when it is:

  • Used internally to support staff rather than the public
    For example, tools like Microsoft’s Copilot can help staff draft emails, summarise documents, or pull key points from meetings in Word, Outlook, and Teams.
  • Helping with drafting, summarising, or organising information
    These kinds of tasks are usually lower risk, particularly when outputs are reviewed by a human before being shared or acted on.
  • Kept firmly in low‑risk, informational roles
    AI works best when it supports clarity and efficiency, rather than interpreting situations or giving advice.
  • Optional, with clear non‑AI alternatives available
    People should always be able to access information or support without needing to use an AI tool.

In many situations, improving the clarity of your website, simplifying contact pathways, or making information easier to find will deliver more value, and less risk, than adding a chatbot.

Questions worth asking before you adopt AI

Before introducing any AI system, it’s worth pausing to ask:

  • What problem are we actually trying to solve?
  • What happens if the AI gets this wrong?
  • Who is accountable for the output?
  • What data might people share without realising the risks?
  • How does this fit with our safeguarding policies?
  • If we decide to stop using it, how easy is that?

If these questions feel uncomfortable or hard to answer, that’s often a sign it’s worth slowing down.

AI isn’t something organisations have to rush into

AI is just another set of tools. Like any tool, it’s useful in some situations and a poor fit in others.

For not‑for‑profit organisations – especially those supporting vulnerable communities – being cautious doesn’t mean being behind the times. It means being thoughtful about responsibilities, values, and the people who rely on your services and your website.

Sometimes the most responsible decision is to keep things simple. And that’s perfectly okay.
