Hugo Cheyne at Trailblaze wrote a really interesting piece on the use of learning maps to power personalized AI tutors. Someone tagged me in a comment, pointing to some work we've done at Aampe on knowledge graphs. Hugo was kind enough to respond to that comment by asking for my take.
I was halfway done writing that take before I realized it was already far too long for a comment. So, with apologies to Hugo, I'll post my answer here. I think he raises an important issue about agentic learning in general - a mindset shift that, at least initially, proves to be a challenge for many of our customers. Scroll to the end of this post for that more general takeaway.
First, here's my reply to Hugo:
It's been a while since I worked in the education sector, so my thinking on this issue might be a little rusty. I'm also not deeply familiar with your particular application. And at Aampe we solve this kind of content-dependency issue in a specific way, so I might just be biased from being steeped in that mindset. All that being said:
Do you need to build those learning dependencies into the learning map itself? Wherever a student is in their learning, there are a number of learning options available to them. The dependency path isn't singular or linear: learning competency 1 unlocks the ability to learn competencies 2a, 2b, 2c, 2d, etc., even if a, b, c, and d represent very different subjects.
Curricula are often set out linearly because textbooks are linear by nature. One of the strengths of AI-facilitated learning, to my mind, is the ability to break free from that arbitrary constraint.
So you could do the following:
1. Set out the initial learning map as just different pieces of content - different stuff someone could learn. The organization doesn't matter at first, because a new student doesn't have permission to access most of that content up front anyway: there are dependencies, and a new student hasn't satisfied any of them.
2. Place eligibility restrictions on the content. This is your dependency information. So if content C really shouldn't be tackled until competency has been demonstrated for content A and B, set that restriction. Likewise, track which competencies a student has already obtained.
3. Recommend content. For a particular student, first filter the content down to only those modules they're eligible for (based on dependencies fulfilled). You could then layer on additional ranking mechanisms to select from what's left. Or you could just present students with options and let them choose.
4. Over a short period of time, you could then allow the structure of your learning map to emerge from actual student navigation of the map. Modules D, E, and F might all be eligible after a student has demonstrated competency for module C, but you could find that students who do E next perform better than students who do D or F next. That means E should go after C - or, better yet, hedge your bets with a Sankey-type path where E gets the thickest road and D and F get relatively smaller ones (sketched just below).
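To make step 4 concrete, here's a minimal sketch of how those path weights could emerge from nothing more than observed transitions. The names (`record_transition`, `path_weights`) are hypothetical, not any real platform's API, and a production system would weight edges by downstream student performance rather than raw traffic:

```python
from collections import Counter, defaultdict

# Observed navigation: transitions[prev][next] = number of students who made that move
transitions: defaultdict = defaultdict(Counter)

def record_transition(prev_module: str, next_module: str) -> None:
    """Log that a student moved from prev_module straight to next_module."""
    transitions[prev_module][next_module] += 1

def path_weights(module: str) -> dict:
    """Sankey-style edge weights: the share of students who took each next step."""
    counts = transitions[module]
    total = sum(counts.values())
    return {nxt: n / total for nxt, n in counts.items()} if total else {}

# After finishing C, two students went on to E and one went to D:
record_transition("C", "E")
record_transition("C", "E")
record_transition("C", "D")
print(path_weights("C"))  # {'E': 0.667, 'D': 0.333} (approx.) - E gets the thickest road
```

Feed those weights into a Sankey diagram and you get the thick and thin roads directly, with no hand-drawn curriculum graph anywhere.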
This would allow the map to change over time as student behavior changes. Teachers could see dominant patterns, and spot students who diverge from those patterns - perhaps indicating a need for differentiated learning. Students would also have a de facto recommender system for learning objectives - they'd be able to see what paths other students had taken, allowing them to progress through their learning with more confidence.
I'm guessing you could find at least half a dozen beneficial second- and third-order effects from having a self-organizing learning map, but that requires you to separate dependencies from the map itself. A dependency is nothing more than a tag you can attach to a piece of content. Users (students) can collect tags. That manages your dependencies.
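Here's a minimal sketch of that tag model, with hypothetical names (`Content`, `Student`, `eligible_content`) - not how Aampe or anyone else actually implements it, just the shape of the idea:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Content:
    id: str
    prerequisites: frozenset = frozenset()  # dependency tags required before access

@dataclass
class Student:
    id: str
    earned_tags: set = field(default_factory=set)  # competency tags collected so far

def eligible_content(student: Student, catalog: list) -> list:
    """Filter the catalog to modules whose prerequisite tags the student has collected."""
    return [c for c in catalog if c.prerequisites <= student.earned_tags]

def complete(student: Student, content: Content) -> None:
    """Demonstrating competency for a piece of content just means collecting its tag."""
    student.earned_tags.add(content.id)

# C depends on A and B; a new student starts with no tags:
catalog = [Content("A"), Content("B"), Content("C", frozenset({"A", "B"}))]
alice = Student("alice")
print([c.id for c in eligible_content(alice, catalog)])  # ['A', 'B'] - C is locked
complete(alice, catalog[0])
complete(alice, catalog[1])
print([c.id for c in eligible_content(alice, catalog)])  # ['A', 'B', 'C'] - C unlocked
```

Notice there's no graph structure anywhere in the data model: the "map" is just whatever eligibility and navigation patterns happen to emerge.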
Here's the more general takeaway for agentic learning:
A huge amount of the "structure" we think is necessary for good learning, good marketing, or good user engagement, actually does very little to facilitate learning, marketing, or engagement. The structure is a relic of an AI-less past where humans had to make more choices than they could realistically make, and therefore needed to impose semi-arbitrary structure in order to keep their heads above water. One of the most empowering aspects of using agents is that you can do away with most of that structure. Don't arbitrarily limit the audience eligible for any piece of content. Keep those constraints as wide-open as you possibly can, let the agents learn how to operate within that space (because that's what agents do), and then visualize what those agents are doing to learn more about the space yourself.
Some structure is necessary. But, having built agentic infrastructure for a while now, I've literally *never* seen a case where all the structure the human operators thought they needed was actually necessary. In most cases, they imposed anywhere from 2x to 10x too much structure.
Agents dynamically create structure. Don't tie their hands unless you absolutely need to.
Hugo replied:

Wow Schaun, thank you! You've given me some real food for thought, as well as suggestions that are very relevant to upcoming decisions I have to make.
I completely agree on not using the chapter/folder-based approach to learning, and that's exactly how the platform is built, with mastery of one topic potentially opening up many other doors (see the image in my reply to you on LinkedIn).
I really like the idea of monitoring which paths students take most often and feeding that into an algorithm or AI system that adapts the learning path over time. I'm assuming that's also where the suggestion to use tags comes in, as tags are a bit more flexible than relationships in a parent/child topic table, and easier for an ML system to manipulate over time?
I had been thinking that the best way to curate the relationships was to have an AI make a first pass at mapping them out by methodically digesting a curriculum document. Human teachers (who use the map as a curriculum guide for the order in which to teach their pupils) would then give input wherever they think a relationship has been mapped incorrectly - similar to Wikipedia-style editing, where a community of experts with an interest in maintaining the node forms consensus over time about the 'true' relationships. The idea of an AI observing the paths students actually take is a brilliant addition to this. To make it work, I think I just need to find a good measure of what success looks like for a student, so that the AI has a clear goal to optimise towards.