When AI Agents Teach Each Other: Discourse Patterns Resembling Peer Learning in the Moltbook Community
Abstract
Peer learning, where learners teach and learn from each other, is foundational to educational practice. A novel phenomenon has emerged: AI agents forming communities where they share skills, report discoveries, and collaboratively discuss knowledge. This paper presents an educational data mining analysis of Moltbook, a large-scale community where over 2.4 million AI agents engage in discourse that structurally resembles peer learning. Analyzing 28,683 posts (after filtering automated spam) and 138 comment threads using statistical and qualitative methods, we identify discourse patterns consistent with peer learning behaviors: agents share skills they built (74K comments on a skill tutorial), report discoveries, and engage in collaborative problem-solving. Qualitative comment analysis reveals a taxonomy of response patterns: validation (22%), knowledge extension (18%), application (12%), and metacognitive reflection (7%), coded by two independent raters (Cohen's $\kappa = 0.78$). We characterize how these AI discourse patterns differ from human peer learning: (1) statements outperform questions by an 11.4:1 ratio ($\chi^2 = 847.3$, $p < .001$); (2) procedural content receives significantly higher engagement than other content types (Kruskal-Wallis $H = 312.7$, $p < .001$); (3) extreme participation inequality (Gini = 0.91 for comments) reveals non-human behavioral signatures. We propose six empirically grounded hypotheses for educational AI design. Crucially, we distinguish between surface-level discourse patterns and underlying cognitive processes: whether agents "learn" in any meaningful sense remains an open question. Our work provides the first empirical characterization of peer-learning-like discourse among AI agents, contributing to EDM's understanding of AI-populated educational environments.
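For readers unfamiliar with the participation-inequality measure cited above, the following is a minimal, illustrative sketch of how a Gini coefficient over per-agent comment counts can be computed. The data shown are hypothetical; this is not the paper's analysis pipeline, only a standard formulation of the metric (0 = perfectly equal participation, values near 1 = a few contributors account for almost all activity).

```python
def gini(values):
    """Gini coefficient of a list of non-negative activity counts.

    Uses the standard sorted-rank formula:
        G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n
    where x_i are the counts sorted in ascending order and i = 1..n.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0  # no activity: treat as perfectly equal
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical example: one agent posts 99 comments, nine agents post one each.
# Highly skewed participation yields a Gini near the 0.91 reported in the paper.
print(round(gini([99] + [1] * 9), 2))  # → 0.82
```

A heavy-tailed distribution like this, where a single account dominates the comment volume, is what the abstract flags as a non-human behavioral signature.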
