Multi-Task Architecture

Standard classification networks map inputs directly to a single objective. We implemented a multi-task architecture in which a shared convolutional backbone concurrently minimizes a binary abnormality classification loss ($L_{BCE}$) and a multi-class study-type loss ($L_{CE}$).

Architecture flow:
- Input radiograph $X \in \mathbb{R}^{256 \times 256 \times 3}$
- Shared convolutional backbone
- Binary core head → BCE loss
- Multi-class auxiliary head → CE loss
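The shared-backbone, two-head layout above can be sketched in PyTorch. This is a minimal illustration: the layer sizes, head names, and the choice of seven study types are assumptions, not the authors' exact backbone.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared convolutional backbone feeding two heads: a binary
    abnormality head and a multi-class study-type head.
    Layer widths here are illustrative placeholders."""
    def __init__(self, num_study_types: int = 7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.binary_head = nn.Linear(32, 1)               # abnormality logit
        self.study_head = nn.Linear(32, num_study_types)  # study-type logits

    def forward(self, x):
        feats = self.backbone(x)  # one feature vector shared by both heads
        return self.binary_head(feats), self.study_head(feats)

model = MultiTaskNet()
x = torch.randn(2, 3, 256, 256)  # batch of two 256x256x3 radiographs
abn_logit, study_logits = model(x)
```

Because both heads read the same features, gradients from both losses update the backbone jointly, which is what makes the loss weighting below matter.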
Empirically Derived Loss Function

$$L_{total} = 3 \cdot L_{BCE}(\hat{y}_{abn}, y_{abn}) + 1 \cdot L_{CE}(\hat{y}_{bp}, y_{bp})$$

The 3:1 weighting keeps the primary abnormality objective dominant, preventing the auxiliary study-type loss from overriding it when both gradients flow through the shared backbone.
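The weighted loss can be computed directly from the two head outputs. A minimal sketch, assuming logit-valued predictions; the 3:1 weights are from the text, while the tensor names and example values are illustrative.

```python
import torch
import torch.nn.functional as F

def multitask_loss(abn_logit, study_logits, y_abn, y_bp):
    """L_total = 3 * L_BCE + 1 * L_CE, per the weighting in the text."""
    l_bce = F.binary_cross_entropy_with_logits(abn_logit, y_abn)
    l_ce = F.cross_entropy(study_logits, y_bp)
    return 3.0 * l_bce + 1.0 * l_ce, l_bce, l_ce

torch.manual_seed(0)
abn_logit = torch.tensor([0.8, -1.2])   # binary head logits
y_abn = torch.tensor([1.0, 0.0])        # abnormality labels
study_logits = torch.randn(2, 7)        # multi-class head logits
y_bp = torch.tensor([3, 5])             # study-type class indices
total, l_bce, l_ce = multitask_loss(abn_logit, study_logits, y_abn, y_bp)
```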

Empirical Impact (Test Set)

Metric              Multi-Task       Baseline
Cohen's Kappa (κ)   0.729 (+2.7%)    0.702
Accuracy            86%              85%
F1 Score            0.855            0.840
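Cohen's kappa, the headline metric above, corrects raw agreement for agreement expected by chance. A self-contained sketch for binary labels; the function name and example labels are illustrative:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for binary 0/1 labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement
    # chance agreement from the marginal frequencies of each label
    p_pos = (sum(y_true) / n) * (sum(y_pred) / n)
    p_neg = ((n - sum(y_true)) / n) * ((n - sum(y_pred)) / n)
    p_e = p_pos + p_neg
    return (p_o - p_e) / (1 - p_e)

# toy example: 3/4 raw agreement, chance agreement 0.5 -> kappa = 0.5
kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])
```

This chance correction is why kappa (0.729 vs. 0.702) is a more informative comparison than raw accuracy when classes are imbalanced.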