Have you got a digital twin?
With Erica Schoder, Executive Director at R Street.

In today's interview, Erica discusses the use of LLMs in the policy world, specifically digital twins. A digital twin "is a virtual representation of a physical object or system" (IBM), and in the context of Erica's comments, we're talking about virtual representations of people.
I've often wondered whether anyone has ever asked their LLM how Cast from Clay would think about a specific communications challenge, not least because I do it with reference to other people all the time.
Over the past months, I've been writing Cast from Clay's commercial strategy, positioning, and everything that goes with it, including the role and function of values in the company, and whether we should talk about values or behaviours (or virtues). It was hugely useful to explore with an LLM how Rawls vs Nietzsche vs Aristotle would reflect on that.
It's not just dead people. There is a wealth of current thinkers on strategy that you can ask an LLM to emulate. But plain sailing it is not. Erica points to the lack of consequences for LLMs as a fundamental challenge in using them: we all know the difference when working with an entity that has skin in the game, compared to one that does not.
My problem was that the LLM's critiques ended up being a caricature of the thinker in question. For the people whose work I am familiar with, it was clear that the LLM had a solid understanding of what their arguments were: it was all technically correct, but contextually wrong.
There is still room for humans yet…
Tom
Policy Unstuck with Erica Schoder, Executive Director at R Street.
Values are revealed in behaviour, not in what you say
What are the signs that an organisation actually has values? It's how its people behave and the decisions that it makes. Full stop. It is all in the behaviour. Not what you say you believe in, or what you say about how you do things. When a hard moment arrives, what decision do you make?
A good example of that for us is our decision on January 6th to make a statement condemning the attack on the Capitol. We were one of the only right-of-centre organisations to do so. In that moment, we weighed our mission and our values, our commitment to the Constitution, to the rule of law and to the democratic process, not what our funders or stakeholders thought or would say about it.
We knew that it would cost us something, but we also knew that we had to stand up for our principles. It was a time when people were making lists of who they would partner with and be in coalition with. There were blacklists going on, and absolutely we were blacklisted. But it was a price we were willing to pay.
Long term, the relationships that mattered became stronger. It was a signal that we are who we say we are, and that we're going to stand up for our principles. That gave us more credibility in the end.
The role of organisational structure in organisational values
One of our values is "one team". You can say that all day long, but if you don't create the structures to enable coordination between teams, it's just a platitude.
We developed a model called "programme teams" to address this. Take our Energy and Environment policy area. All the different functions (research, communications, and so on) are represented on that programme team. That team is the one making the decisions about what we do and what our impact strategy is.
That means that communications knows the resources they're coming in with, and can make decisions about their realm, but they've listened to everyone else about the trade-offs and what others can live with. So when they make decisions, they know the limits; it's a mechanism to get coordination.
LLMs are a blurry JPEG of human encounter
Humans have tacit, inarticulate knowledge: our practitioner, contextual experience of being in the world. It is completely unique to every individual. LLMs are amazing, but they strip all the context out of knowledge, out of ideas, out of information. There's no context in them.
There's that idea from Ted Chiang comparing LLMs to a blurry JPEG of an idea, or of things that humans did in the world. It's decontextualised.
AI is super powerful, and helps us see patterns and do all kinds of crazy things with that decontextualised information. But what it cannot do is reproduce the actual encounter, the actual moment of decision or imagination in a human brain.
The challenges of AI adoption
Anybody who is implementing AI in an organisation will understand the challenges. At R Street, we have so much diversity of disposition towards the tool, all the way from doomers to evangelists. Capability varies too: some people who use it are just stellar at it, while others don't ever want to start.
We've had a lot of pushback from junior staff who do not use AI, who do not want to, and who are fearful of it replacing them. And that is fair: I can totally see how the ladder is being pulled up. Some organisations are not hiring at the most junior level because they think AI can essentially be a research assistant.
But you're also going to see an AI-native generation who just naturally use it. The challenge for them is how they form an ability to exercise judgment. And that comes back to us as managers: how do we train them in this area? If junior staff don't understand what good looks like, there is no way for them to evaluate the output of an LLM.
Some organisations have addressed that by saying, "Look, you can't use AI as a junior staffer. You need to learn it by doing."
The augmented intern: a paradox
I'm developing a framework called "the augmented intern". Let's say I want to create a digital twin of you, Tom. I want to quantify all of your practitioner knowledge and everything you've written, and I create this digital twin based on your thinking that can help a junior staffer understand what good looks like.
But then what happens to your role? What about the Tom whose knowledge has now created a digital twin? What is your value?
→ If you want to explore digital twins, RAG databases, and how to use AI to increase the efficiency of your work, take our instructor-led online training. Starts 29th April.
I think there are two elements here. Firstly, contexts will change, so judgment will change, and digital twins will need to change as well. At the moment you build it, a digital twin encodes only some of you, the part that is accessible at that point in time. That means that a human has to keep creating that knowledge and articulating it: LLMs are the exhaust of human knowledge, not its creator.
The second is consequences. I love using AI for creating digital twins, although it can be kind of creepy. But it is not the same as Tom, who has stakes in the outcome, actually telling me what he would do in this situation. That conversation is totally different compared to chatbot digital-twin Tom. The real Tom has a reputation and relationships, and is continually discovering what kinds of consequences he can live with. There are no stakes with a digital twin; it is a representation of you, and it has no consequences.
I think AI is going to augment a lot of what we do. The question is what we keep doing ourselves and what we hand off, and whether we're strengthening human agency and judgment along the way. That's our job to figure out.
Thank you to the 114 Policy Unstuck readers who have referred this newsletter to a colleague. The views expressed in Policy Unstuck interviews are those of the interviewee, and do not necessarily represent Cast from Clay's view.