🧠 How to get government to think long-term

James Ancell, the Deputy Director for AI Integration at the Department for Science, Innovation and Technology, and the former Head of Futures and Foresight at the Cabinet Office, speaks to Tom Hashemi.

One downside of democracy is its structural incentive for short-term thinking—politicians want to win the next election, which is never that far away.

For those of us who would like to see government deliver a more long-term agenda, that presents a challenge. Why would the government of the day do it? The system in which they exist incentivises them not to.

With that in mind, it was a great pleasure to speak with James about how you get organisations to think more long-term. When we spoke in late December, he was the Head of Futures and Foresight at the UK Cabinet Office, so getting people to take a long-term view was, quite literally, his job.

We also talked at length about AI, given the role he was about to step into. Since so many of you have taken our Generative AI course, I hope you'll find his thoughts relevant and useful.

Tom
P.S. My colleague Aneesha is hiring a consultant (2-3 years' experience, £30-35k salary) onto her team. More info here.


How do you get organisations to think long-term?

The first thing I would say is co-create. My team was set up to do analysis on the long-term future of the UK. As part of that, we interview decision-makers and experts. If you involve some of those stakeholders in the process, then they're more likely to become champions for the work in the long run.

The second thing is linking things to the urgent. There's been a lot going on in the UK in the last 10 years: Brexit, Russia-Ukraine, elections, and so on. You can sometimes point to things where there is media attention, or where the civil service is focused, and use that to have conversations about longer-term trends.

The third thing I'd say is making time for it. We'll often go to board meetings or meetings of senior people and say, 'Look, let's set some time aside to talk about this.' Often the reason people find this hard is that they don't have time to do it. But if you can make time and ring-fence it, then it usually goes down well.

Baby steps into the world of foresight…

If you want to think about second- and third-order consequences, there's a technique we use called the Futures Wheel. You take an issue, you put it in the middle of a big bit of paper, and then you map all the cascading things that could occur.

So, a pandemic leads to work from home, which then leads to maybe improved outcomes for kids, but maybe it leads to worse outcomes for businesses. It’s a useful tool to look at all the different cascading impacts.

The kind of case study Keir Starmer would love

The example I usually give of how useful AI is… imagine some kind of disruption to the Brazilian rubber supply that would be harmful for the UK car industry because it means we can no longer make tyres. 

If you think about how we would have to manually find out about that and start addressing it: we’d need someone who speaks Portuguese, someone monitoring the Brazilian newspapers who also has a good understanding of the UK economy and the different departmental equities. And then imagine doing that for all the other issue areas, all around the world. It would be impossible. 

But AI can do that really quickly because you can just give AI the context and say, ‘Look, I care about the government's priorities here and I care about these departments. Now go look around the world for stuff that's going to hurt that.’ And AI is very good at that. It would be so labour-intensive to do it manually, you would never try.

But… keep humans in the loop

Remember, though, that AI will flag up a lot of things that might be false positives. What you don't want to do is scare people with things that sound scary but really aren't, so you need to keep a human in the loop: an analyst checking the outputs.

An outbreak of a disease in a country might not be that big a deal, for example, but at first glance, to someone like me who doesn't know much about diseases, it might look terrifying.

The young ‘uns showing us how

We are seeing people at the lowest levels of organisations finding the biggest efficiencies because they're often doing the repetitive tasks and they're AI native.

They might not have the judgement to know exactly what's going to work, but they have the technical knowledge, and they're embedded with people who do have that expertise.

People always say, 'Oh, I'm worried about graduates not getting jobs in the future.' I don’t know about that… they're doing it already.

It’s not the teacher’s job to grade, it’s yours 

We grade everything we produce. Grade D means nobody's read it. Grade C means we get lots of feedback, people like it, or lots of people read it. Grade B, it influences people, and we hear it discussed. And then Grade A is where we see some kind of action on the back of it, like calling a meeting or agreeing to investigate something further.

What we're trying to answer is whether what you're doing is actually having an outcome, or whether it's just interesting.

For Grade A, you're looking for a story of how your work might have been a contributing factor to some decision. Invariably, there are a million other contributing factors that go into that decision, so you're trying to produce a convincing narrative that our work mattered.

For the annual horizon scan, we might present to 900 people in one go, so we’ll say, ‘Put in the chat what you're going to do now.’ And then the audience will give you Grade A immediately because they're giving us all of these lists of things they're going to do.

If we don't get Grade A, we stop and look at why. That process lets you get better every year because you move into continuous improvement, rather than continually starting from inception.

Thank you to the 87 Policy Unstuck readers who have referred a colleague.