Emotionally manipulative fake news poses a serious challenge to conventional misinformation detectors, which often over-rely on sentiment-driven cues. This paper introduces a reinforcement learning–based framework designed to improve robustness against emotional manipulation in fake news. The proposed multimodal model integrates textual representations of news content with aggregated emotional signals from user comments, and couples reinforcement learning with adversarial training to encourage stable predictions under sentiment perturbations. Experiments on large-scale benchmarks show that the approach consistently outperforms standard transformer-based baselines and generalizes well in zero-shot transfer across multiple datasets. By explicitly modeling emotional adversarial behavior, this work demonstrates the effectiveness of reinforcement learning for robust fake news detection and opens new directions for emotion-aware misinformation analysis.
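The fusion of textual features with aggregated comment-emotion signals, and the use of sentiment perturbations to reward stable predictions, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the feature dimensions, the logistic classifier, and the consistency-based reward are all assumptions standing in for the actual model and RL objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a 768-d text embedding
# and an 8-d emotion score vector per user comment.
D_TEXT, D_EMO = 768, 8

def fuse(text_emb, comment_emotions):
    """Concatenate the news text embedding with mean-pooled comment emotions."""
    pooled = comment_emotions.mean(axis=0)      # aggregated emotional signal
    return np.concatenate([text_emb, pooled])   # multimodal feature vector

def predict(features, w, b):
    """Toy logistic classifier: probability that the article is fake."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

def consistency_reward(text_emb, comment_emotions, w, b, eps=0.1):
    """Reward predictions that stay stable when the emotion channel is
    perturbed (a robustness proxy, not the paper's exact RL reward)."""
    clean = predict(fuse(text_emb, comment_emotions), w, b)
    noisy = comment_emotions + eps * rng.standard_normal(comment_emotions.shape)
    perturbed = predict(fuse(text_emb, noisy), w, b)
    return -abs(clean - perturbed)  # closer predictions -> higher reward

# Toy inputs: one article embedding and five comment emotion vectors.
text_emb = rng.standard_normal(D_TEXT)
comments = rng.standard_normal((5, D_EMO))
w = rng.standard_normal(D_TEXT + D_EMO) * 0.01
b = 0.0

p = predict(fuse(text_emb, comments), w, b)
r = consistency_reward(text_emb, comments, w, b)
print(round(p, 3), round(r, 3))
```

In this sketch, a policy trained to maximize `consistency_reward` is pushed away from decisions that hinge on the perturbable emotion channel, which mirrors the abstract's goal of stability under sentiment perturbations.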