A Dynamic Fusion Model for Consistent Crisis Response

Accepted to Findings of the Association for Computational Linguistics: EMNLP 2025.

Abstract
Large Language Models (LLMs) have shown promise in supporting crisis informatics, but responses generated by a single model often lack consistency across different crises and evaluation dimensions.

We present a Dynamic Fusion Model (DFM) that integrates outputs from multiple LLMs using weighted fusion and hybrid fusion strategies. The approach dynamically adjusts contributions from different models based on task-specific performance, producing more consistent, reliable, and balanced responses.

Evaluation across crisis-related datasets (COVID-19, Influenza, HIV) demonstrates that DFM reduces variance while maintaining high performance in politeness, informativeness, relevance, and accuracy. Human preference studies further validate that fused outputs are more consistent and trustworthy than those of individual models and static ensemble methods.

Keywords: Crisis Informatics, Multi-Model Fusion, Large Language Models, Consistency, Human-Centered NLP


Contributions

  1. Dynamic Fusion Model for Crisis Response
    • Proposes a modular framework that adaptively fuses outputs from multiple LLMs to ensure consistency across evaluation metrics.
  2. Fusion Strategies
    • Weighted Fusion: Assigns weights to models based on historical performance.
    • Hybrid Fusion: Combines direct fusion with refinement for balanced generation.
  3. Cross-Crisis Evaluation
    • Extensive experiments across COVID-19, Influenza, and HIV scenarios show reduced variance and improved reliability in generated responses.
  4. Human Preference Validation
    • Human annotators consistently prefer outputs from the Dynamic Fusion Model over single-model and static ensemble baselines.
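The weighted-fusion idea above can be illustrated with a small sketch. This is not the authors' exact formulation: the metric names, score ranges, and the softmax-with-temperature weighting are all assumptions made for illustration.

```python
import math

def fusion_weights(history, temperature=1.0):
    """Derive per-model fusion weights from historical metric scores.

    history: {model_name: {metric_name: [scores, ...]}}
    Each model's scores are averaged per metric, then across metrics,
    and the averages are passed through a softmax so the weights sum to 1.
    (Softmax weighting is an assumption, not the paper's stated method.)
    """
    avg = {
        model: sum(sum(v) / len(v) for v in metrics.values()) / len(metrics)
        for model, metrics in history.items()
    }
    exps = {model: math.exp(s / temperature) for model, s in avg.items()}
    total = sum(exps.values())
    return {model: e / total for model, e in exps.items()}

# Hypothetical historical scores on two of the paper's four dimensions.
history = {
    "model_a": {"politeness": [0.9, 0.8], "accuracy": [0.70, 0.75]},
    "model_b": {"politeness": [0.6, 0.65], "accuracy": [0.80, 0.70]},
}
weights = fusion_weights(history)
```

Here `model_a` has the higher historical average, so it receives the larger weight; lowering `temperature` would sharpen that preference.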

Method Overview

Figure: Architecture of the Dynamic Fusion Model.

  • Multiple base LLMs generate candidate responses.
  • Weighted Fusion Layer integrates outputs proportionally to their reliability.
  • Hybrid Fusion Layer further refines the combined response for consistency.
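The three stages above can be sketched end to end. This is a minimal toy version, not the paper's implementation: candidate generation, the quality scorer, and the refinement model are all stubbed with stand-in callables, and selecting the highest weight-scaled candidate is an assumed simplification of the fusion layer.

```python
def fuse_responses(candidates, weights, score):
    """Weighted-fusion step: pick the candidate whose weight-scaled
    quality score is highest. `score` stands in for whatever quality
    estimator the full system uses (an assumption here).
    """
    best_model = max(candidates, key=lambda m: weights[m] * score(candidates[m]))
    return candidates[best_model]

def hybrid_refine(response, refine):
    """Hybrid-fusion step: pass the fused draft through a refinement
    stage (stubbed as a callable) to polish it for consistency."""
    return refine(response)

# Hypothetical candidate responses from two base LLMs.
candidates = {
    "model_a": "Please wear a mask and wash your hands regularly.",
    "model_b": "Mask up.",
}
weights = {"model_a": 0.6, "model_b": 0.4}
word_count = lambda text: len(text.split())  # toy proxy for informativeness
draft = fuse_responses(candidates, weights, word_count)
final = hybrid_refine(draft, lambda r: r.strip())
```

In a real system the scorer would be a learned metric and the refinement callable another LLM pass; the structure, though, mirrors the generate, weight, refine pipeline described above.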

Experimental Results

The Dynamic Fusion Model achieves:

  • Politeness: Reduced variance while maintaining high averages
  • Relevance: Consistent alignment with crisis queries
  • Informativeness & Accuracy: Balanced improvements over baselines
  • Human Preference: Annotators show significant preference for DFM outputs

Cross-topic evaluation confirms that DFM consistently outperforms single LLMs and static fusion, especially in balancing trade-offs across dimensions.

BibTeX

@inproceedings{anik2025dynamic,
  title={Dynamic Fusion Model for Consistent Crisis Response},
  author={Anik, Anirban Saha and Song, Xiaoying and Wang, Bryan and Wang, Elliott and Yarimbas, Bengisu and Hong, Lingzi},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2025},
  year={2025}
}

Recommended citation: Anirban Saha Anik, Xiaoying Song, Bryan Wang, Elliott Wang, Bengisu Yarimbas, Lingzi Hong. (2025). "Dynamic Fusion Model for Consistent Crisis Response." Findings of the Association for Computational Linguistics: EMNLP 2025.