113941 -

Problem addressed: Explaining the decision-making process of "black-box" Deep Learning (DL) models used in text classification, particularly within the biomedical domain.

Proposed method: The paper introduces Confident Itemsets Explanation (CIE), a model-agnostic method that identifies sets of features (words or tokens) that strongly influence a model's prediction.
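The core idea can be illustrated with a minimal sketch: enumerate small sets of tokens (itemsets) and keep those whose presence alone pushes the classifier's confidence for a class above a threshold. The toy `black_box` function and the parameter names here are hypothetical stand-ins, not the authors' implementation, which operates on real DL classifiers and uses proper itemset mining.

```python
from itertools import combinations

# Hypothetical stand-in for a black-box text classifier: returns the
# model's confidence that the input belongs to the positive class.
# (Illustrative only; CIE itself is model-agnostic.)
def black_box(tokens):
    return 0.9 if "excellent" in tokens else 0.1

def confident_itemsets(tokens, model, min_conf=0.7, max_size=2):
    """Keep token subsets whose presence alone drives the model's
    confidence above min_conf (a CIE-style search, greatly simplified)."""
    found = []
    for size in range(1, max_size + 1):
        for subset in combinations(tokens, size):
            if model(set(subset)) >= min_conf:
                found.append(subset)
    return found

explanation = confident_itemsets(["the", "movie", "was", "excellent"], black_box)
```

On this toy model, every surviving itemset contains "excellent", which matches the intuition: the confident itemsets are the feature combinations the black box actually relies on for its prediction.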

Challenges: These models often require large datasets and can be sensitive to "adversarial noise" (small character-level changes that fool the AI).

The identifier refers to a specific research article titled "Post-hoc explanation of black-box classifiers using confident itemsets", published in the journal Expert Systems with Applications (Volume 165, March 2021).

Key details of the research:

Authors: Milad Moradi and Matthias Samwald.

Models explained: Common architectures include Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), used to model complex relationships in text data.