Greg Easthouse asks:
How long until we get a wardleyagent gpt ai?
Well, Greg, I’m afraid it’s too late. ChatGPT has already taken the lead on this one.
Or rather, Steve Pereira kicked the exploration off. So whatever happens next is his fault now. 😛
I, for one, welcome our Wardley-based autonomous overlords! 🤖
In all seriousness, because capitalism isn’t meritocratic but wears meritocracy like a wolf wears sheep’s clothing, even if Wardley Mapping were the most perfect, deserving methodology to grace our lives, it still wouldn’t be a sure thing that it would survive long enough in the market for someone to create an autonomous AI agent that implements its ideas.
However, bearing in mind that my expertise is not in AI or ML or LLMs or whatever else, I suspect there are parts of Wardley Mapping that we could eventually use AI to enhance.
For context, I already think of the strategic thinking process in Wardley Mapping in terms of a rudimentary algorithm. It goes a little something like this:
- For each component in the Wardley Map:
  - For each item in the doctrine, climate, and leadership tables:
    - Ask: “Is this relevant?”
    - If it is relevant, consider it deeply.
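If you prefer code to bullet points, here’s a minimal Python sketch of that loop. The component names and the table items are hypothetical placeholders, not my actual map or the full doctrine, climate, and leadership tables:

```python
# A minimal sketch of the loop above. The components and the
# doctrine/climate/leadership items below are made-up placeholders;
# a real pass would use your own map and the full Wardley tables.

components = ["customer portal", "payment processing", "compute"]

patterns = {
    "climate": [
        "Everything evolves through supply and demand competition",
        "Characteristics change as components evolve",
    ],
    "doctrine": [
        "Focus on user needs",
        "Use appropriate methods per evolution stage",
    ],
    "leadership": [
        "Exploit constraints",
        "Accelerators and decelerators",
    ],
}

for component in components:
    for table, items in patterns.items():
        for item in items:
            # Generating the questions is the mechanical part an AI
            # could plausibly do; the "consider it deeply" step is
            # where human judgement lives.
            print(f"[{table}] Is '{item}' relevant to '{component}'?")
```

Generating the questions is the easy, mechanical part. Answering them well is not.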
For example, I can pick something on my Wardley Map and ask whether the climate pattern “Everything evolves through supply and demand competition” is relevant or noteworthy to it.
Sometimes it is, sometimes it isn’t, of course. That’s where human judgement usually applies.
Asking the question is something I think an AI could do, but I have reservations about it giving answers, even if it had plenty of human decisions to train on.
Strategy also often requires asymmetry: an option your competitors can’t anticipate or match. An AI model would only be able to reproduce options based on what it “learned” from its training data.
But at the same time, human decision-makers right now often don’t take even the sensible “symmetrical” option. So an agent suggesting hypothetically symmetrical options could still create benefits. But could you trust those options?
I feel conflicted.
I suppose, if the symmetry came from what Wardley has shared publicly (there’s plenty he hasn’t, I suspect)… and if everyone had their own personal Simon Wardley whispering suggestions in their ears, we’d hit an equilibrium of strategic gameplay and need to find a way to create asymmetry again. And that would necessitate human skill, just as we’re seeing people adjust to the new normal of ChatGPT.
Regardless, a more immediate challenge for an AI, I think, might be the creation of the Wardley Map itself. Ontology is hard. There are weird papers out there that I like to skim to remind myself how hard it is. (For example, “Granular computing applied to ontologies”, DOI: 10.1016/j.ijar.2009.11.006.)
I suppose you could probably teach an AI to make a Wardley Map by extracting the right nouns and inferring the right relationships from a set of news articles or similar. But could you trust that model? What would it privilege? (What biases would be baked in?) And how would that impact the decisions it made?
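To show what the naive version of that extraction step might look like, here’s a sketch using spaCy (one possible tool; the model name and the one-line “news article” are my assumptions). It only attempts the noun-extraction half, and even that half already embeds choices about what counts as a component:

```python
# A naive sketch of the "extract the right nouns" step, using spaCy
# as one possible tool. Assumes: pip install spacy, then
# python -m spacy download en_core_web_sm. Real map-building would
# also need evolution stages and dependencies, which this ignores.
import spacy

nlp = spacy.load("en_core_web_sm")

article = (
    "The retailer moved its payment processing to a cloud provider, "
    "while keeping its recommendation engine in-house."
)

doc = nlp(article)

# Noun chunks as candidate map components. Whatever the extractor
# privileges (named products? vague abstractions?) becomes a bias
# baked into the resulting map.
candidates = {chunk.text.lower() for chunk in doc.noun_chunks}
print(candidates)
```

Even in this toy, the extractor decides which nouns exist; everything downstream inherits that choice.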
I could imagine the utility of AI as decision support, to make sure you consider all the things that ought to be considered. But an autonomous agent making decisions? I’m nervous about it.
So, to answer Greg’s question… How long? Pick one:
- Never!
- Or it happened already. (Thanks, Steve Pereira!)
- Or in 12 years. (That’s a link to Wardley’s personal blog, btw. Worth a look!)
Fun question. Thanks for asking, Greg. 😉
Addendum: Mr. Wardley has been doing something interesting with ChatGPT and Bard, using these tools as interfaces to their underlying training data. Read more about his research here.