Generative AI won’t take your UCD job, but it might change it

Kate Stulberg
7 min read · Jul 30, 2024


I know, I know. Another blog about generative AI? Haven’t we heard enough?! Maybe. But I’ve recently been working on a few gen AI projects, and they have got me thinking about what this could mean for UCD professionals and how we design and deliver services.

Setting the scene: where we currently are with generative AI

In my current role at Ministry of Justice, I’ve been part of a working group that’s experimenting with generative AI to understand how it could support research practice. I’ve also been involved in a series of experiments that are exploring how generative AI could be used in public services. Here, generative AI is being considered as a potential way to streamline user journeys, reduce duplication, and make information easier to access and understand — all key features of good service design. So, it makes sense to leverage the current excitement (and investment!) around generative AI to try to improve the quality of our services.

Right?

Right! But we should also keep in mind that we don’t actually know yet whether generative AI can solve these problems effectively. At least in a government context, most gen AI projects are still at the proof-of-concept stage, so user feedback primarily reflects people’s attitudes rather than their real-life behaviour. Generally, trust in generative AI remains low, as does AI literacy. We don’t know how people understand or want to engage with AI. And within the public sector, we’re still figuring out our risk appetite and tolerance for error when using generative AI, as well as when, where and how a ‘human in the loop’ should be integrated with this technology.
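To make that last point concrete, here’s a minimal sketch of one possible ‘human in the loop’ pattern. Everything in it (the Draft type, the confidence score, the threshold) is invented for illustration, not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated draft awaiting human review (hypothetical)."""
    text: str
    confidence: float  # invented self-reported score between 0 and 1

def route_draft(draft: Draft, risk_appetite: float = 0.8) -> str:
    # Nothing is published automatically: the open question is *how much*
    # human scrutiny each draft gets, not *whether* a human is involved.
    if draft.confidence < risk_appetite:
        return "escalate to a specialist for full review and rewrite"
    return "send to a caseworker for a quick check and sign-off"

# A low-confidence draft gets escalated rather than lightly checked
print(route_draft(Draft("You may be eligible to appeal...", confidence=0.6)))
```

Where exactly that threshold sits, and who the reviewing human is, are precisely the questions we haven’t answered yet.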

What all of this suggests to me is that we need more time to understand generative AI’s place in our world before we can confidently declare it the key to solving user problems. Indeed, we also need to understand it better before we can say for sure that it will or won’t take anyone’s jobs! But, as I’ll outline below, I feel our focus should be on adapting UCD to accommodate generative AI in our work, rather than on worrying that the robots are about to take over!

The risk of solutionising, and why this matters for UCD

The buzz around generative AI poses a risk that is all too familiar to UCD folk — that of ‘solutionising’. All of the projects I’ve worked on recently, as well as many others I’m aware of across the public sector, were explicitly set up to develop generative AI solutions. Although I understand that we have to experiment with gen AI in order to use it well in the future, it’s worth highlighting the risk that some stakeholders may see this experimentation as evidence that generative AI is the de facto solution — even though we don’t yet have evidence that it will meet user needs.

This point alone affirms for me that UCD is a skillset that will still be required in the so-called ‘age of AI’. If all our hard-fought evangelism for user needs is not to be lost to misplaced excitement over shiny new tech (again, not a new problem — think of every time we’ve been asked to build a ‘portal’ or a ‘one stop shop’ without knowing if these solutions actually solve user problems), we must continue to build stakeholder literacy around not only how generative AI works and what’s possible, but also why designing user-centred services leads to better outcomes.

This will mean continuing to prioritise user research, to ensure we have a solid understanding of our users and their needs. We’ll also need to keep making space for solution-agnostic ideation, to avoid falling into the ‘innovation trap’ — a challenge related to ‘solutionising’, where innovation is assumed to always produce the best outcome for users. One practical way to do this is to openly and continuously acknowledge our assumptions and biases, and to find novel ways of testing the things we think we know. This should help keep decision-making transparent, objective, grounded in evidence, and ultimately agile enough to pivot quickly, cheaply, and without ego if we find we’ve got an assumption wrong.

So, what changes?

While many aspects of UCD will stay the same, I think other parts will change. For instance, we’ll need to rethink how we prototype and test our services with users. Our typical usability testing process gives users clearly defined scenarios and tasks to work through on a prototype — but this won’t work with generative AI. When we add gen AI into our services, we’re also adding a new variable into our testing methodology: the AI itself. It’s not possible to predict exactly what a user will see when they engage with generative AI. Even if two users asked the same model exactly the same question, it’s unlikely to generate exactly the same response, because most models sample their output from a probability distribution rather than following a fixed script.
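Here’s a toy sketch of why that happens. The token probabilities below are invented purely for illustration (real models choose between tens of thousands of tokens at every step), but the mechanism is the same: weighted random sampling, flattened or sharpened by a ‘temperature’ setting.

```python
import random

# Invented next-token probabilities for a single prompt, e.g.
# "How do I appeal a decision?" (purely illustrative numbers)
next_token_probs = {"You": 0.4, "To": 0.3, "First,": 0.2, "Appeals": 0.1}

def sample_token(probs: dict, temperature: float = 0.8) -> str:
    """Sample one token; any temperature > 0 means repeated runs can differ."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Two 'users' ask the identical question; their journeys can diverge
# from the very first word of the response.
for user in ("User A", "User B"):
    print(user, "sees a response starting with:", sample_token(next_token_probs))
```

Run this a few times and the two ‘users’ will regularly see different openings, which is exactly the variability our usability testing methods will need to handle.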

As a result, we’ll have to adapt some UCD tasks in practical ways. User researchers may need to prepare multiple discussion guides within their plans, so they are better equipped to pivot their line of questioning after observing the real-time interactions of user and machine — interactions that will be more difficult to predict in advance. Designers may need to shift away from user journeys that offer fixed paths and end-points, instead thinking about the multiple, open-ended routes users might take to achieve their goals using AI. This could also lead designers to rethink design patterns — particularly if we learn over time that users complete tasks in very different ways, with different mental models and expectations. Might there be new design patterns we need to create, or might existing patterns need to become more flexible to accommodate this shift?

Related to this, I think we’ll spend just as much time during synthesis trying to understand AI behaviour as we will user behaviour. Perhaps we’ll even go beyond thinking of generative AI as a new variable, or as an opportunity or constraint on our designs. We might begin to think of it as its own user group, one that we must understand in order to deliver good services. Alex Klein recently described this phenomenon after conducting a usability test on a generative AI feature and feeling like “there were two participants […] both deserving my careful attention as they tried to complete a task together”.

Building our understanding of AI behaviour relates, of course, to our AI literacy. Several UCD teams working on generative AI have told me that their most effective experiments have been those where they were embedded with the data scientists and developers building the AI models. This likely not only increases their pace of learning about generative AI, but also ensures they better understand its two-way relationship with users, so that it is harnessed appropriately to meet user needs.

Which brings me on to my final, and perhaps slightly controversial, point: I wonder if generative AI will push us to rethink how we apply design thinking to our work. Take the traditional double diamond. We start by focusing on our users and their problems, diverging to build this understanding before converging on specific user pain points to ideate from. Then we diverge again to come up with different possible solutions, before converging on those that are desirable, feasible and viable.

[Image: a pink diamond titled ‘discovery’, followed by a blue diamond titled ‘alpha’, each diverging then converging]
The classic double diamond — moving from discovery to alpha

But does this work when you throw generative AI into the mix, at least right now, while we’re all still trying to understand its potential? Perhaps it would make more sense to learn about user problems whilst simultaneously experimenting with generative AI capability. Here, I don’t mean falling prey to ‘solutionising’ or the ‘innovation trap’, but rather bringing technical experts into discoveries much earlier, and stretching our thinking around technical feasibility during ideation. Maybe we could blend the double diamond with elements of a more hypothesis-driven, innovation-style approach to design, combining deep discovery with rapid experimentation rather than treating them as incompatible binaries (nb: I recognise some teams are already doing this).

[Image: a pink diamond titled ‘discovery’, with four small blue circles titled ‘alpha experimentation’ inside it]
A blended discovery and alpha?

There are likely all sorts of risks and considerations we’d need to think through if we did shift our thinking in this way — not least, what guardrails would be needed to ensure we stay focused on solving user problems even during early experimentation. But it’s certainly interesting food for thought.

To summarise — the more I learn about generative AI, the less concerned I am that it’s developed enough to take our jobs. Indeed, I only become more convinced that UCD is a critical part of holding innovation to account, ensuring that new solutions actually solve real problems for users. But whilst I don’t think our professions are at risk, I do think we’ll need to adapt some of our activities and thinking to accommodate generative AI, and to prioritise increasing our AI literacy.

What do you think? Is this how you see UCD being affected by generative AI? What else should we be prepared for? If anyone wants to chat about this further, do drop me a line!

With thanks to James Halliday and Craig Beaton for proofreading.


Written by Kate Stulberg

Senior User Researcher at Ministry of Justice. Previously Citizens Advice, NHS Digital & Action for Children.
