| Metric | Value |
|---|---|
| Test accuracy | 54.27% |
| Test F1 score | 0.5891 |
| Hierarchical loss | 0.95842652 |
| P-adic loss (total) | 147.49644213 |
| P-adic loss (mean) | 0.21345361 |
| Prime base | 71 |
| Number of tags (input features) | 1,640 |
| Non-zero parameters | 1,648 / 357,738 (99.5% sparse) |
| L1 regularization (C) | 1.0000 |
| Training samples | 2,732 |
| Test samples | 691 |
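Two of the rows above are derived from the others; a quick consistency check, assuming the mean p-adic loss is the total divided by the number of test samples, and the sparsity figure is the share of zero coefficients:

```python
# Recompute the derived metrics from the table above.
total_padic = 147.49644213
test_samples = 691
mean_padic = total_padic / test_samples
print(round(mean_padic, 8))          # matches the reported mean of 0.21345361

nonzero, total_params = 1648, 357738
sparsity = 1 - nonzero / total_params
print(f"{100 * sparsity:.1f}% sparse")   # matches the reported 99.5%
```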

| Agreement | Count | Share | Cost per mistake | Total contribution |
|---|---|---|---|---|
| Exact match | 375 | 54.27% | 0.000000 | 0.000000 |
| p^4 | 7 | 1.01% | 0.000000 | 0.000000 |
| p^3 | 40 | 5.79% | 0.000003 | 0.000112 |
| p^2 | 88 | 12.74% | 0.000198 | 0.017457 |
| p^1 | 34 | 4.92% | 0.014085 | 0.478873 |
| p^0 | 147 | 21.27% | 1.000000 | 147.000000 |
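The per-mistake costs above are consistent with a cost of p^(-k) for a prediction that agrees with the true label only up to the p^k level (e.g. 1/71 ≈ 0.014085 at p^1). A sketch of that bookkeeping, assuming this cost function and taking the counts from the table:

```python
# Reconstruct the total p-adic loss from the agreement-level counts,
# assuming each mistake at agreement level p^k costs p**(-k).
# Exact matches cost 0 and contribute nothing, so they are omitted.
p = 71
counts = {4: 7, 3: 40, 2: 88, 1: 34, 0: 147}  # level k -> number of mistakes

total_loss = sum(n * p ** (-k) for k, n in counts.items())
print(f"{total_loss:.8f}")  # close to the reported total of 147.49644213
```

Note how the loss is dominated by the 147 complete misses at p^0: deeper partial agreement is rewarded exponentially, so the p^3 and p^4 rows contribute almost nothing.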
L1 (Lasso) regularization promotes sparsity by driving many coefficients to exactly zero. The model is given all 1,640 available tags as input features, and the L1 penalty performs feature selection during training: of the 357,738 coefficients, only 1,648 end up non-zero (99.5% sparse).
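A minimal numpy sketch of that mechanism (not the report's actual training code): proximal gradient descent on a binary logistic loss with a soft-thresholding step, on synthetic data, showing weights driven to exactly zero rather than merely shrunk.

```python
# Sketch: L1-penalized logistic regression via proximal gradient (ISTA).
# Synthetic data with only 5 truly informative features out of 50.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 50
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:5] = 2.0                       # only the first 5 features matter
y = (1 / (1 + np.exp(-X @ true_w)) > rng.random(n)).astype(float)

lam = 0.1                              # L1 strength (roughly 1 / (C * n))
step = 0.01
w = np.zeros(d)
for _ in range(2000):
    p_hat = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p_hat - y) / n       # gradient of the logistic loss
    w -= step * grad
    # Soft-thresholding: this is what sets coefficients to *exactly* zero.
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)

print(f"non-zero weights: {np.count_nonzero(w)} / {d}")
```

The soft-thresholding step is the key difference from L2 regularization, which only shrinks coefficients toward zero; a stronger penalty (smaller C) zeroes more of them.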