The reality gap between simulation and real-world dynamics critically hinders the deployment of robust humanoid locomotion policies, as policies trained in a single simulator often overfit to that simulator's domain-specific dynamics. To address this challenge, we propose CROSSER (Inverse Dynamics-Guided Cross-Simulator Adaptation), a reinforcement learning framework that synergizes heterogeneous simulators, leveraging multi-source data distributions to train policies resilient to cross-simulator dynamics discrepancies. Unlike prior methods that naively train policies in a single simulator, CROSSER employs an inverse dynamics model trained on trajectories from diverse simulators. By quantifying cross-simulator action-space inconsistencies, it identifies and prioritizes states where domain-specific biases most degrade generalization. By dynamically penalizing or filtering transitions according to these inconsistencies, CROSSER guides policies toward unified strategies that harmonize the divergent physical priors embedded in multi-simulator data, rather than overfitting to any single source. Our work establishes a novel paradigm for transforming cross-simulator discrepancies into actionable training signals via inverse dynamics models, advancing the practical deployment of humanoid robots through collaborative multi-simulator training. Experiments on humanoid locomotion tasks demonstrate that CROSSER significantly improves cross-simulator performance over single-simulator baselines, outperforms existing joint-training methods, and exhibits more stable locomotion behaviors.
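
To make the core mechanism concrete, the following is a minimal sketch of how an inverse dynamics model could be used to score and soft-filter transitions by cross-simulator action-space inconsistency. The network architecture, the exponential weighting, and the hyperparameter `beta` are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn


class InverseDynamicsModel(nn.Module):
    """Predicts the action that produced a transition (s_t, s_{t+1}).

    Hypothetical MLP architecture for illustration; in CROSSER the model
    is trained on trajectories pooled from multiple simulators.
    """

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1))


def inconsistency_weights(
    idm: InverseDynamicsModel,
    s: torch.Tensor,
    s_next: torch.Tensor,
    a: torch.Tensor,
    beta: float = 5.0,
) -> torch.Tensor:
    """Score each transition by how far the executed action deviates from
    the action the cross-simulator IDM would infer for the same state pair.

    Large deviations flag states dominated by simulator-specific dynamics.
    The exponential mapping (a soft filter) is one plausible choice; the
    abstract also mentions a penalizing variant.
    """
    with torch.no_grad():
        a_hat = idm(s, s_next)
        # Per-transition action-space inconsistency (mean squared error).
        delta = (a_hat - a).pow(2).mean(dim=-1)
        # Consistent transitions keep weight near 1; inconsistent ones
        # are down-weighted, steering the policy away from source-specific
        # dynamics artifacts.
        return torch.exp(-beta * delta)
```

In training, such weights could multiply the per-transition policy loss, or the inconsistency `delta` could enter the reward as a penalty term, corresponding to the filtering and penalizing variants mentioned above.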