Agent-based methods define simple rules that generate complex group behaviors. Rather than imposing simplifying assumptions across all anticipated scenarios, inverse reinforcement learning infers the short-term (local) rules that govern long-term behavior policies by exploiting the structure of a Markov decision process. We use the linearly-solvable Markov decision process to learn the local rules governing collective movement in a self-propelled particle simulation and in an application to a captive guppy population.
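For context, the defining feature of the linearly-solvable Markov decision process (in Todorov's standard formulation, not spelled out in this abstract) is that exponentiating the value function turns the Bellman equation into a linear one. A minimal sketch, with $q(s)$ the state cost, $p(s' \mid s)$ the passive (uncontrolled) dynamics, and $z(s) = e^{-v(s)}$ the desirability function:

```latex
% Linear Bellman equation for the desirability function z:
z(s) \;=\; e^{-q(s)} \sum_{s'} p(s' \mid s)\, z(s'),
\qquad z(s) = e^{-v(s)},
% with the optimal controlled transition probabilities given by
u^{*}(s' \mid s) \;=\; \frac{p(s' \mid s)\, z(s')}{\sum_{s''} p(s'' \mid s)\, z(s'')}.
```

Because $z$ satisfies a linear equation, it can be computed by standard linear-algebraic methods, which is what makes inference on the local rules tractable.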