👯 Have you got a digital twin?

With Erica Schoder, Executive Director at R Street.

In today’s interview, Erica discusses the use of LLMs in the policy world, specifically digital twins. A digital twin “is a virtual representation of a physical object or system” (IBM), and in the context of Erica’s comments, we’re talking about virtual representations of people.

I’ve often wondered whether anyone has ever asked their LLM how Cast from Clay would think about a specific communications challenge, not least because I do it with reference to other people all the time.

Over the past few months, I’ve been writing Cast from Clay’s commercial strategy, positioning, and everything that goes with it, including the role and function of values in the company, and whether we should talk about values or behaviours (or virtues). It was hugely useful to explore with an LLM how Rawls vs Nietzsche vs Aristotle would reflect on that.

It’s not just dead people. There is a wealth of current thinkers on strategy that you can ask an LLM to emulate. But plain sailing it is not. Erica points to the lack of consequences for LLMs as a fundamental challenge in using them: we all know the difference when working with an entity that has skin in the game, compared to one that does not.

My problem was that the LLM’s critiques ended up being a caricature of the thinker in question. For the people whose work I am familiar with, it was clear that the LLM had a solid grasp of their arguments: it was all technically correct, but it was contextually wrong.

There is still room for humans yet…

Tom

Policy Unstuck with Erica Schoder, Executive Director at R Street.

Values are revealed in behaviour, not in what you say

What are the signs that an organisation actually has values? It’s how its people behave and the decisions that it makes. Full stop. It is all in the behaviour. Not what you say you believe in, or what you say about how you do things. When a hard moment arrives, what is that decision that you make?

A good example of that for us is our decision on January 6th to make a statement condemning the attack on the Capitol. We were one of the only right-of-centre organisations to do so. In that moment, we weighed our mission and our values, our commitment to the Constitution, to the rule of law, and to the democratic process, not what our funders or stakeholders thought or would say about it.

We knew that it would cost us something, but we also knew that we had to stand up for our principles. It was a time when people were making lists of who they would partner with and be in coalition with. There were blacklists going on, and absolutely we were blacklisted. But it was a price we were willing to pay.

Long term, the relationships that mattered became stronger. It was a signal that we are who we say we are, and that we’re going to stand up for our principles. That gave us more credibility in the end.

The role of organisational structure in organisational values

One of our values is ‘one team’. You can say that all day long, but if you don’t create the structures to enable coordination between teams, it’s just a platitude.

We developed a model called ‘programme teams’ to address this. Take our Energy and Environment policy area. All the different functions (research, communications, and so on) are represented on that programme team. That team is the one making the decisions about what we do and what our impact strategy is.

That means that communications knows the resources they’re coming in with and can make decisions about their realm, but they’ve listened to everyone else about the trade-offs and what others can live with. So when they make decisions, they know the limits. It’s a mechanism for coordination.

LLMs are a blurry JPEG of human encounter

Humans have tacit, inarticulate knowledge: our practitioner, contextual experience of being in the world. It is unique to every individual. LLMs are amazing, but they strip all the context out of knowledge, out of ideas, out of information. There’s no context in them.

There’s that idea from Ted Chiang comparing LLMs to a blurry JPEG of the web: a lossy copy of ideas and of things that humans did in the world. It’s decontextualised.

AI is super powerful, and helps us see patterns and do all kinds of crazy things with that decontextualised information. But what it cannot do is reproduce the actual encounter, the actual moment of decision or imagination in a human brain.

The challenges of AI adoption

Anybody who is implementing AI in an organisation will understand the challenges. At R Street, we have so much diversity of disposition towards the tool, all the way from doomers to evangelists. Capability varies just as widely: some people use it and are just stellar at it, while others never want to start.

We’ve had a lot of pushback from junior staff who do not use AI, who do not want to, and who are fearful of it replacing them. And that is fair: I can totally see how the ladder is being pulled up. Some organisations are not hiring at the most junior level because they think AI can essentially be a research assistant. 

But you’re also going to see an AI-native generation who just naturally use it. The challenge for them is how they form an ability to exercise judgement. And that comes back to us as managers: how do we train them in this area? If junior staff don’t understand what good looks like, there is no way for them to evaluate the output of an LLM.

Some organisations have addressed that by saying, ‘Look, you can’t use AI as a junior staffer. You need to learn it by doing.’

The augmented intern: a paradox

I’m developing a framework called ‘the augmented intern’. Let’s say I want to create a digital twin of you, Tom. I want to quantify all of your practitioner knowledge and everything you’ve written, and create a digital twin based on your thinking that can help a junior staffer understand what good looks like.

But then what happens to your role? What about the Tom whose knowledge has now created a digital twin? What is your value?

→ If you want to explore digital twins, RAG databases, and how to use AI to increase the efficiency of your work, take our instructor-led online training. Starts 29th April.
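Worth pausing on how a twin like this tends to be assembled in practice: retrieval over a person’s writing plus a persona prompt, the pattern behind the RAG databases mentioned above. The Python below is a minimal, hypothetical sketch of that pattern only; the corpus, the retrieval scoring, and the prompt template are all invented for illustration, not anyone’s actual setup.

```python
import re

# Hypothetical stand-in corpus of "Tom's" writing; a real twin would index
# years of documents in a vector database rather than three strings.
CORPUS = [
    "Values are revealed in behaviour, not in mission statements.",
    "Positioning starts with the audience's problem, not with your offer.",
    "A communications strategy fails when it ignores organisational structure.",
]

def tokenise(text: str) -> set[str]:
    """Lowercase word tokens; a crude stand-in for real embeddings."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank excerpts by word overlap with the question and keep the top k."""
    q = tokenise(question)
    return sorted(corpus, key=lambda doc: -len(q & tokenise(doc)))[:k]

def build_twin_prompt(question: str) -> str:
    """Wrap retrieved excerpts in a persona prompt for whatever LLM you use."""
    excerpts = "\n".join(f"- {e}" for e in retrieve(question, CORPUS))
    return (
        "You are a digital twin of Tom, a communications strategist.\n"
        "Answer in his voice, grounded only in these excerpts:\n"
        f"{excerpts}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    print(build_twin_prompt("How should we talk about our values?"))
```

Everything the twin ‘knows’ is whatever was retrievable at build time, which is exactly the limitation Erica raises next.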

I think there are two elements here. Firstly, contexts will change, so judgement will change, and digital twins will need to change as well. At the moment you build it, a digital twin encodes only some of you, the part that is accessible at that point in time. That means that a human has to keep creating that knowledge and articulating it: LLMs are the exhaust of human knowledge, not its creator.

The second is consequences. I love using AI for creating digital twins, although it can be kind of creepy. But it is not the same as Tom, who has stakes in the outcome, actually telling me what he would do in this situation. That conversation is totally different from one with chatbot digital-twin Tom. The real Tom has a reputation and relationships, and is continually discovering what kinds of consequences he can live with. There are no stakes with a digital twin; it’s a representation of you, and it has no consequences.

I think AI is going to augment a lot of what we do. The question is what we keep doing ourselves and what we hand off. And whether we’re strengthening human agency and judgement along the way. That’s our job to figure out.

Thank you to the 114 Policy Unstuck readers who have referred this newsletter to a colleague. The views expressed in Policy Unstuck interviews are those of the interviewee, and do not necessarily represent Cast from Clay’s view.