Are you wondering what to do about generative AI?

Today we launch a new training course designed for policy communicators who want to use artificial intelligence to enhance their work – and prepare for a future where communications will increasingly be infused with AI, and audiences will use AI to consume content.

The course is authored and delivered by Nick Scott, the former head of digital at Médecins Sans Frontières and UNISON, and co-founder of WonkComms. The interview with Nick below sets out his view of the AI challenges policy organisations face and what they should be doing about them. You can find out more about the course, and sign up for the first cohort, here.


I remember sending out an email to staff saying ‘we’ve just opened a Twitter account, but don’t worry, I’m not going to waste any of my time on this thing’. Obviously that has changed: people see the impact (for good and bad) of all the different digital channels we have. The thing that hasn’t changed is that we don’t have a sense of where the results come from. Which of the different things we’ve done has actually led to the impact we want? Was it the right person reading something or commenting on something, or the content being placed in the right place? Quite often there is a big disconnect in the understanding of how influence happens. AI is going to make that even more of a challenge.

Artificial intelligence is a challenge to policy communications models because it acts as an intermediary. Today, I know that if the right person gets into the right room with the right policy maker, a conversation will happen, hopefully an influential one. But beyond that, how influence happens is already hard to gauge. AI threatens to make it much worse, because a lot of the direct contact could well be intermediated by AI. An AI agent will synthesise and personalise communications before they are passed to the policymaker or their team. So what exactly will they see? What does it mean for policy communicators if our audiences are intermediated by personalised agents? How do we influence robots? Policymakers are overloaded, so they are obvious targets for one of the key promises of AI – its ability to cut through the noise. If we’re going to influence policymakers successfully, then we need to understand each individual person’s preferences, because that’s how you’re going to influence their individual agent. It adds a whole level more complexity to policy communications.

Very few organisations are thinking about the day-to-day impact AI will have. We can use a technology a lot, but we find it really hard to understand how it will affect us. For example, lots of people engaged with social media early on, but few thought about what it means when attention becomes the currency of every single social interaction. With AI, we can engage with the productivity conversation around it, but we find it much harder to talk – and don’t talk enough – about where that goes and how it alters societal behaviours. It’s not that people should be able to see the future, but that people aren’t really thinking about how this technology will create a substantially different world.

The quality of results an organisation gets when it starts experimenting with AI will depend on the extent to which its digital infrastructure is in order. Just as you don’t get full value from digital without looking at your processes, your culture, your ways of working, your infrastructure, and how your data is organised, the same is true for AI. Yes, you can use Copilot or ChatGPT on your desktop and get a certain amount of value, but all those underlying pieces of work are key to delivering value across an organisation and at scale. If you want to create thousands of personalised emails to promote a report, that process will rely on how much work you’ve done on your CRM, and how developed your understanding of your audiences’ interests is. Creating personalised content also relies on a culture of experimentation. You can’t experiment in research communication without researchers being part of the conversation too, in case you misrepresent their work – and that’s a big cultural shift which many organisations may not be well set up to handle.

My advice to those in policy organisations is to try a sustained period of using generative AI. Try using it in lots of ways, not because I think it will help you with everything you do, but because unless you invest the time in it, you won’t understand its potential. The most common refrain I come across is something along the lines of ‘I put a few words into ChatGPT once and what it gave me back was really generic’. People do this because we’re very attuned to a world of Google, where we put in a few words, it gives us some responses, and then we do all the work of finding the right answer. Generative AI tools require a lot more upfront work to get good results. This is partly what this course on artificial intelligence focuses on: working through how you get good results from these tools.

Leaders especially need to go through that learning process. There are things here that will potentially change how a lot of people in your organisation work. Leaders have a duty of care over their employees, especially where there is a risk of roles and skills changing dramatically. And for them personally, leaders are the ones who can get a lot out of generative AI. There’s a lot written for leaders, a lot of data on leadership, and leaders are often quite lonely, so here’s something that isn’t a person, but is based on other people’s thoughts, that can be a useful thinking companion.

The course is designed around how you can get results from generative AI, and what to do when you’re ready to take the next step. We bring comms use cases to inspire you, and go through the specifics of what generative AI means for policy organisations. It is not a technical course – you don’t need to know, and we don’t talk about, how artificial intelligence works. But there are certain things that you need to know in order to get better results, and we go through those. And when you’re ready to move further on, and use AI for more specialist tasks, the course helps you think about workflows and about specialisation in generative AI. So for example, we look at workflow mapping and how to identify where you could run an AI agent to improve a process. We also address the ethical questions in this area. What do you need to be aware of from an ethical perspective in using generative AI? How can you mitigate those risks? What can you do to be a more responsible user of generative AI? And we finish with what it all means for the future.

Artificial intelligence presents an existential challenge for policy communicators. Part of our role has been to translate material from one format into other formats that are more strategic, better targeted, and more likely to lead to policy influence. This technology threatens to automate that process. I don’t necessarily believe that has to happen, but to prove it, we’re going to have to become quite adept at using the tools – better than the tools are by default. That human-AI partnership is central to what the course is about. It’s about augmenting your capabilities.

Generative AI for policy communicators

Find out more