We’ve been aware of, and experimenting with, so-called ‘generative AI’ tools such as ChatGPT since early 2023. Although there are many valid uses for such tools, we have decided to limit how we adopt them in our own business and for our clients.
The most important thing to understand about our approach to AI is that there is always a human being in the loop between AI tools and a finished product or output. These tools are often very confidently incorrect, and therefore shouldn’t be trusted to unilaterally make decisions or take actions that would impact our clients, or their customers.
We also believe that technology should make life easier by doing what computers do best, not by working to beat human beings at what we derive joy and meaning from doing.
You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
— Joanna Maciejewska (@AuthorJMac), March 29, 2024
The kinds of things we don’t use ‘AI’ for
- Coding – code written by AI is often clunky and doesn’t consider the wider context of the application. It can be difficult to maintain. All our coding is done by developers.
- Creating complete designs or artwork – designs created by AI often have a telltale style that gives them away and can make them appear low-effort. They can also contain small mistakes and details that don’t make sense. All our artwork is created by a designer, taking into account the full use case and audience. Any pieces featuring inspiration from AI sources will have been checked, modified or augmented by humans.
- Writing content – the accuracy of content written by AI is spotty, which is why OpenAI prominently states “ChatGPT can make mistakes”. The writing style has also become identifiable, making the content less impactful to some audiences.
Some of the things we might use ‘AI’ for
- Computational tasks – things that computers are already great at and used extensively for. AI tools can provide a novel or more rapid interface. Tasks like translating data, organising information or identifying patterns might be performed by AI under human supervision and checks.
- Troubleshooting – aka rubber ducking. If our team is having trouble solving a problem, we might throw it into an AI tool and see what the recommendations are. Occasionally, it finds a potential avenue we overlooked. Humans then finish the process.
- Challenging our internal perceptions – we love to provide an outside perspective for our clients. When we can’t see the wood for the trees, AI tools can play devil’s advocate for us.
- Client-originated artwork – if clients have used AI tools to create artwork or imagery themselves that they wish to use, we don’t refuse. It’s your website, after all.
The Ethics of AI
It is part of our mission to promote ethical uses of technology, as documented in our Ethical Statement. The ethics of using AI tools are murky at best.
On the one hand, they have the potential to provide great utilitarian gains in fields like science, medicine and law enforcement. On the other, there is significant potential for abuse in criminal enterprise and in spreading disinformation. In this regard they are comparable to any new technology; the main risk lies in their ability to evolve more quickly than we can regulate or legislate.
Then there’s the training data. Much of the material used to train AI models has been scraped from the internet and used in commercial products without permission or understanding from the original authors.
Finally, some companies have fully embraced the hype around ‘AI’ and have already begun replacing human employees with AI tools.
For these reasons, we remain cautiously excited about the potential of such tools for certain uses, circumspect about the hype they create, and committed to continually re-evaluating our policy on their use.
Understanding “Artificial Intelligence”
The final thing we always remind our clients is that “Artificial Intelligence” is (at least at the time of writing) neither of those things. The words, artwork and video used to train these models had to be created by humans originally, so the result isn’t really artificial. And while the outputs may be quickly generated, sound convincing and seem impressive, they’re really just pattern matching and machine learning – not intelligence.
The term has become so misused and confusing that the industry had to come up with an alternative term (Artificial General Intelligence, or AGI) to refer to the search for a genuine machine intelligence.
If you have any questions about our use of generative AI tools, feel free to contact us.