This research develops explainable AI systems to detect early signals of ideological extremism and potential violence in online communications. By integrating social science and machine learning, the project produces interpretable threat assessments for prevention efforts. The framework also extends to healthcare, including rare disease detection using explainable AI models.

Traditional neural networks are powerful but hard to interpret and sensitive to small perturbations of their inputs. This research develops wavelet-based neural networks with provable stability guarantees, extending the scattering transform to texture modeling. The approach reduces feature complexity while improving interpretability, enabling more reliable and mathematically explainable AI systems.
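To give a flavor of the idea, the following minimal NumPy sketch computes first-order scattering-style features: a wavelet-like band-pass filter, a complex modulus, and a low-pass average, which together yield features that vary little under small shifts or deformations of the input. The `scattering_1d` and `morlet_filter` helpers, the filter parameters, and the toy signal are illustrative assumptions, not the project's actual texture model.

```python
import numpy as np

def morlet_filter(n, xi, sigma):
    """Gaussian band-pass filter centred at frequency xi (illustrative stand-in for a Morlet wavelet)."""
    freqs = np.fft.fftfreq(n)
    return np.exp(-((freqs - xi) ** 2) / (2 * sigma ** 2))

def scattering_1d(x, xis=(0.25, 0.125, 0.0625), sigma=0.02):
    """First-order scattering sketch: wavelet filtering, complex modulus, low-pass averaging.

    The modulus and averaging steps are what make the resulting features
    stable to small translations and deformations of the input signal.
    """
    n = len(x)
    X = np.fft.fft(x)
    low_pass = np.exp(-(np.fft.fftfreq(n) ** 2) / (2 * (2 * sigma) ** 2))
    # Zeroth-order coefficient: a smoothed version of the signal itself.
    coeffs = [np.real(np.fft.ifft(X * low_pass))]
    for xi in xis:
        # Band-pass filter, then take the modulus to discard the unstable phase.
        band = np.abs(np.fft.ifft(X * morlet_filter(n, xi, sigma)))
        # Low-pass average the modulus to obtain a locally invariant feature channel.
        coeffs.append(np.real(np.fft.ifft(np.fft.fft(band) * low_pass)))
    return np.stack(coeffs)

signal = np.sin(2 * np.pi * 0.05 * np.arange(256)) + 0.1 * np.random.default_rng(0).normal(size=256)
features = scattering_1d(signal)
print(features.shape)  # (4, 256): one zeroth-order plus three first-order channels
```

Because every channel is tied to a specific wavelet frequency, each feature has a direct mathematical interpretation, which is the property the stability and interpretability claims rest on.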

This research develops human–AI methods to prioritize massive data streams, especially in public health. By combining expert expectations with extreme value theory, it ranks events by contextual importance, reducing alert overload. Deployed nationally, the approach has triaged thousands of outbreaks and data-quality issues, turning big data into interpretable, actionable, and ultimately life-saving decisions.
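A minimal sketch of how extreme value theory can rank alerts is shown below, assuming SciPy. A generalized extreme value distribution is fit to a historical baseline, and each current count is scored by its tail probability, so the most surprising exceedances surface first. The `rank_alerts` helper, the toy baseline, and the event labels are hypothetical illustrations, not the deployed national system, which additionally incorporates expert expectations.

```python
import numpy as np
from scipy import stats

def rank_alerts(history, current_counts, labels):
    """Rank current event counts by how extreme they are relative to history.

    Fits a generalized extreme value (GEV) distribution to historical counts,
    then scores each current count by its survival function (tail probability):
    smaller tail probability = more surprising = higher priority.
    """
    shape, loc, scale = stats.genextreme.fit(history)
    tail_probs = stats.genextreme.sf(current_counts, shape, loc=loc, scale=scale)
    order = np.argsort(tail_probs)
    return [(labels[i], current_counts[i], tail_probs[i]) for i in order]

rng = np.random.default_rng(0)
weekly_maxima = rng.poisson(20, size=200).astype(float)          # illustrative historical baseline
new_counts = np.array([22.0, 55.0, 31.0])                        # this week's event counts
for label, count, p in rank_alerts(weekly_maxima, new_counts, ["event A", "event B", "event C"]):
    print(f"{label}: count={count:.0f}, tail probability={p:.4f}")
```

Ranking by tail probability rather than raw count is what lets a shared threshold work across streams with very different baselines, which is how alert overload is kept manageable.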