Collective

We’re a worker-owned agency that designs, builds and supports websites for organisations we believe in.

AI ethics and responsibility

We’ve seen a sharp increase in the use of AI, as well as in the commentary surrounding it. Articles tend to draw us towards opinions that are either excitedly optimistic about the near-endless possibilities of AI or fearful of the dystopian futures that have been a staple of science fiction for years.  

We’ve put together a small working group to investigate how AI is likely to impact our members and our business. It is also reporting on due diligence and will help us stay informed and respond more effectively to client requests. 

This blog article shares some of our learnings from this work. It focuses on “responsible AI,” emphasising that we should not blindly use AI without first considering the implications of our actions, particularly on generative AI (which encompasses text, image, and video generation).

The more we discuss these topics in our daily lives, the more we will all make informed, responsible decisions. Changes may be small at first, but if we all make them together, we can guide things to a better place.

This article isn’t meant to dissuade people from using AI. It is intended to help readers use AI responsibly by providing the information they need to ask the right questions before submitting queries.

Bias and Fairness

Consider how an AI's responses may, intentionally or unintentionally, push us down a particular path of thinking. The direction we are pushed is primarily determined by how the LLM we are using is trained.

For example, many of the AIs we regularly use are trained on source material for a specific region, whether that be Claude and a Western-centric source, or Deepseek and its Eastern-centric source.

Imagine if you were put into a large city library, read everything there, and very little else.  Those books would likely have been written mostly by white, well-educated, middle- to upper-class men. These books inevitably reflect the biases and opinions of those writers.

We therefore get  bias based on this learning model, which is starting to have real-world repercussions:


Privacy

The privacy of our data is important, and a loss of privacy is one of the biggest fears regarding AI.

What does it know? How much can people find out? Is my secure information still secure?

  • The right to be forgotten - (part of GDPR compliance). This is not possible with AI (at least, no one seems to have found a way to do so).  This is because AI models need to maintain training data to function. Removing anything would be akin to removing childhood memories from humans. Without them, we would lose a sense of self, and any decisions would be more random and dysfunctional.
  • Prompt injection attacks - Some hackers spend time crafting AI prompts to trick an AI into revealing confidential information.
  • You train future models - Depending on the tool you use (and if it is free/paid), your requests may be training the models’ future versions unless you explicitly decline. This could mean that ideas shared could be reworded and shared with others.

Content & Copyright

Things become complicated when we start discussing IP and ownership.

  • IP infringement - When each major version of an AI model is generated, the training may have included copyrighted material, which could lead to it outputting content you do not have the right to reproduce. Whether this is via an artist’s style being used, or a specific IP, such as James Bond.
  • Content ownership - Similar to the above, but if you submit AI content as your own, you cannot be sure of the ownership of that content. Therefore, it would be hard to prevent others from copying/pasting it into their own published works.

Reliability

Generative AI is a predictive model; it doesn’t “know” things as such. Instead, it uses all its training data and the information available to predict the answer, presenting the answer in text, images, or even video. This presents several issues:

  • Data Poisoning - This is when an attacker injects harmful or misleading content into an AI's training data.  This could be through website/API content hacking, or with Nightshade. For example, an attacker could add invisible pixels which caused AI to miscategorise content (eg. dogs as cats).
  • AI Hallucination - This is where the AI can return false information as fact. This happens because LLMs are predictive AIs; they try to predict the correct output rather than being a database of information.
  • Gullible AI - If you use leading questions, it will often believe you and tailor its answers to show that you were correct, rather than to correct you if you are wrong.

Economic, Environmental & Social Impact

Outside of our business, AI has far-reaching effects, whether in language, arts, or the environment.

  • Acceleration in the loss of minority languages - as mentioned above, these are all built on Sinophone (Chinese) or Anglophone (English) models, and, as such, will respond with similar language and mannerisms.  This could result in the world having just a few standardised “AI Context Languages”, reducing the use of others. However, there is a case for AI translation tools increasing the use of minority languages.
  • Reduction in the arts - there have already been several cases in which businesses have used AI to replicate past material created by artists without any recognition until they are caught. At the moment, there is no law preventing this to a reasonable degree, so only market pressure can have an effect.  One example of this is with the games production company Hasbro.
  • Carbon damage - There is a lot of discussion at the moment about the environmental impact of the vast data centres that house AI systems. Examples are easy to find, such as Elon Musk's xAI data centre or Google's inability to keep its green efforts. Every request has a footprint, which, sadly, cannot yet be accurately measured due to the lack of information shared by providers. For now, just consider each request before hitting send. Also note that for every major version of an AI, there is a huge carbon cost to retrain it.
  • Water usage - Along with the carbon, we have a huge amount of water required, largely for cooling these huge data centres. For example, the GDSA reports that “AI is predicted to lead to an increase in global water usage from 1.1bn to 6.6bn”.

Accountability & Transparency

If you use the AI and it makes a mistake, who is responsible?

  • Lack of review - Without a proper review process in place, anyone in an organisation could use AI in their daily work unchecked. Also, without guidance, mistakes could be made.
  • Legal restrictions - At present, these mainly focus on developing AI systems. However, you should still consider whether this would affect EULAs with clients and the confidentiality of what you input into the AI. These are two useful resources which cover it, the EU AI Act and Advice for agencies.

External risks

AI risks can come from outside your organisation, so being aware of this is also useful.

  • Malicious AI attacks -  Where someone has built an AI to hack into a data source, cause a DDoS attack, or trigger other issues to damage a site (or an individual's) public image.
  • Lazy AI “attacks” - AI bots are being developed carelessly. This can result in websites being taken down due to aggressive page scraping, leading to them being blocked by firewalls.

 

7 ways you can work with AI better

Awareness 

Reading articles like this one should help you identify potential pitfalls and risks.
Employ the right AI: There are many options out there, with varying levels of data protection and ethical considerations. Claude appears to be well-positioned between power and ethics, as shown in its constitution, which uses materials such as the “UN declaration of human rights” as part of the model's training.

AI as an assistant

AI can be great for generating ideas or for initial research directions. Requests to the AI can give links to sources (which can help it find hallucinations), then do your own reading, having been directed towards useful content. Have a policy: This will help guide what your team can (and cannot) use AI for. There is a nice AI policy on the BBC website.

Context is key

As part of a policy, you may include a snippet of text at the start to give the AI context. This could state your ethical standpoint, aligning the output to your expectations.

Keep learning

Read articles, watch videos, and if you know others who are using AI, talk with them, sharing learnings and ideas.

Think

Don’t just post things to AI that you could do yourself quickly; the carbon footprint of AI is more than your own, especially if you have to send several adjustments to get the right answer.

We hope this article has helped you get started thinking about how you might responsibly use AI in both your professional and personal lives.  But as always, if you have any questions, we are happy to talk with all our clients, whether past, present, or future.

 

References and further reading

AI and Cyber Security
Hasbro AI art article
BBC AI Policy
The EU AI Act
Advice for agencies
The 4 types of generative AI


AI Disclaimer: This article was written by Dan and edited by Tim. However, AI has been utilised to check grammar, assist with fact-checking, and highlight any potential areas I may have overlooked and wish to research.

 

Meet the Authors
Back to blog