This research develops explainable AI systems to detect early signals of ideological extremism and potential violence in online communications. By integrating social-science theory with machine learning, the project produces interpretable threat assessments that can inform prevention efforts. The framework also generalizes to healthcare applications, including rare disease detection with explainable AI models.