UAlbany Researchers Reveal Geometry Behind How AI Agents Learn
By Michael Parker
Albany, N.Y. (Jan. 29, 2026) — A new study from the University at Albany shows that artificial intelligence systems may organize information in far more intricate ways than previously thought. The study, “Exploring the Stratified Space Structure of an RL Game with the Volume Growth Transform,” has been posted online to the preprint repository arXiv.
For decades, scientists assumed that neural networks encoded data on smooth, low-dimensional surfaces known as manifolds. But UAlbany researchers found that a transformer-based reinforcement-learning model instead organizes its internal representations in stratified spaces—geometric structures composed of multiple interconnected regions with different dimensions. Their findings mirror recent results in large language models, suggesting that stratified geometry might be a fundamental feature of modern AI systems.
“These models are not living on simple surfaces,” said Justin Curry, associate professor in the Department of Mathematics and Statistics in the College of Arts and Sciences. “What we see instead is a patchwork of geometric layers, each with its own dimensionality. It’s a much richer and more complex picture of how AI understands the world.”
Tracking the Geometry of an AI at Work
The research team studied a transformer-based agent playing a memory and navigation game in which it must collect coins while avoiding moving spotlights. Each frame the agent saw became a “token,” similar to a word in a language model, and the team analyzed how these tokens were embedded inside the network’s internal layers.
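For readers who want a concrete picture of this kind of analysis, a minimal sketch of capturing per-layer token embeddings from a transformer might look like the following. This is not the authors' code; the tiny model, tensor sizes, and hook setup are illustrative assumptions.

```python
# A minimal sketch (not the study's code) of collecting per-layer token
# embeddings from a transformer with PyTorch forward hooks.
import torch
import torch.nn as nn

embed_dim, n_tokens = 64, 16  # hypothetical sizes, chosen only for illustration
encoder_layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
model = nn.TransformerEncoder(encoder_layer, num_layers=4)

activations = {}  # layer index -> token embeddings at that layer

def make_hook(idx):
    def hook(module, inputs, output):
        # output has shape (batch, n_tokens, embed_dim); keep a copy for geometry analysis
        activations[idx] = output.detach().clone()
    return hook

for idx, layer in enumerate(model.layers):
    layer.register_forward_hook(make_hook(idx))

frames = torch.randn(1, n_tokens, embed_dim)  # stand-in for tokenized game frames
model(frames)  # one forward pass fills `activations` with per-layer embeddings
```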
Their results revealed four distinct clusters of geometric dimensionality, which tracked how simple or complex the model perceived its environment to be. Lower-dimensional states typically occurred when the room was well lit or the agent had committed to a course of action. Higher-dimensional states appeared when the screen became crowded or when the agent needed to weigh multiple possible actions.
“These jumps in dimensionality reflect moments of uncertainty,” said Gregory Cox, assistant professor in the Department of Psychology, also part of the College of Arts and Sciences. “When the agent has to choose between competing actions or interpret a more complex visual scene, the geometry of its internal representations expands. It’s as if the model needs more room to think.”
A New Lens for Understanding AI Decision-Making
Using a technique known as the Volume Growth Transform, the researchers found that the model’s geometric patterns violated long-standing assumptions such as the manifold and fiber-bundle hypotheses. Instead, many of the agent’s internal representations jumped across multiple strata, forming a geometric landscape with abrupt transitions rather than smooth curves.
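The paper's exact construction differs in its details, but the general idea behind volume-growth dimension estimates can be sketched as follows: count how many neighboring points fall inside balls of growing radius and read a local dimension off the slope of log(count) versus log(radius). The function and radii below are hypothetical illustrations, not the study's implementation.

```python
# A rough sketch of a volume-growth style local dimension estimate
# (a generic construction; the paper's Volume Growth Transform may differ).
import numpy as np
from scipy.spatial.distance import cdist

def local_volume_growth_dimension(points, radii):
    """points: (n, d) array of token embeddings; radii: increasing probe radii."""
    dists = cdist(points, points)  # pairwise distances between all points
    dims = []
    for i in range(len(points)):
        # number of points within each radius of point i ("volume" of the ball)
        counts = np.array([(dists[i] <= r).sum() for r in radii], dtype=float)
        # if volume grows like r^dim, the log-log slope recovers dim
        slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
        dims.append(slope)
    return np.array(dims)  # one local-dimension estimate per point

# Toy usage: points on a circle (intrinsic dimension ~1) embedded in 3D
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
print(local_volume_growth_dimension(circle, radii=np.array([0.1, 0.2, 0.4, 0.8])).mean())
```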
According to the study, these changes often aligned with meaningful moments in gameplay, including:
- approaching a coin or goal state
- encountering new or overlapping spotlights
- pausing to evaluate multiple navigation options
“What’s exciting is that we can now link specific moments in the agent’s behavior to specific geometric features,” Curry said. “When the model is confused or exploring options, the geometry spikes. When it’s confident, the geometry flattens. It gives us a new vocabulary for understanding how AI makes decisions.”
The researchers note that monitoring geometric complexity could help identify the moments an AI system finds most difficult, opening the door to adaptive training methods that strengthen performance where the model struggles most.
As Cox put it, “Stratified geometry isn’t just an abstract concept. It gives us a new window into how both machines and minds might represent complicated information.”