| Metric | Value |
|---|---|
| Test accuracy | 49.76% |
| Test F1 score | 0.5374 |
| Hierarchical loss | 0.90200632 |
| P-adic loss (total) | 459.72716865 |
| P-adic loss (mean) | 0.24944502 |
| Prime base | 79 |
| Number of tags (input features) | 3,664 |
| Non-zero parameters | 4,430 / 1,792,185 (99.8% sparse) |
| L1 regularization (C) | 1.0000 |
| Training samples | 7,267 |
| Test samples | 1,843 |
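
The parameter count in the table above decomposes cleanly, which hints at the model's shape. A quick arithmetic check; note the 489-class figure is an inference from this decomposition, not stated anywhere in the report:

```python
# Arithmetic check of the derived figures in the metrics table.
n_params, n_nonzero, n_tags = 1_792_185, 4_430, 3_664

print(f"sparsity: {1 - n_nonzero / n_params:.1%}")  # 99.8%, as reported
# 1,792,185 / (3,664 + 1) = 489 exactly -- consistent with a multinomial
# model holding one weight per tag plus one intercept for each of 489
# classes (an inference, not stated in the report).
print(f"implied classes: {n_params // (n_tags + 1)}")
```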

| Agreement level | Count | Share | Cost per mistake (p^-k) | Total contribution |
|---|---|---|---|---|
| Exact match | 917 | 49.76% | 0.000000 | 0.000000 |
| p^6 | 3 | 0.16% | 0.000000 | 0.000000 |
| p^5 | 0 | 0.00% | 0.000000 | 0.000000 |
| p^4 | 40 | 2.17% | 0.000000 | 0.000001 |
| p^3 | 99 | 5.37% | 0.000002 | 0.000201 |
| p^2 | 192 | 10.42% | 0.000160 | 0.030764 |
| p^1 | 134 | 7.27% | 0.012658 | 1.696203 |
| p^0 | 458 | 24.85% | 1.000000 | 458.000000 |
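
The reported total and mean p-adic loss can be recomputed directly from this breakdown. The sketch below assumes the natural reading of the table: a mistake whose error has p-adic valuation k costs p^-k (its p-adic absolute value), and exact matches cost nothing:

```python
# Recompute the p-adic loss from the agreement breakdown above (p = 79).
# Assumed reading: a "p^k" row counts mistakes whose error has valuation k,
# each costing p**(-k); exact matches contribute zero.
p = 79
counts = {6: 3, 5: 0, 4: 40, 3: 99, 2: 192, 1: 134, 0: 458}  # valuation -> count
exact_matches = 917

total = sum(n * p ** -k for k, n in counts.items())
n_test = exact_matches + sum(counts.values())  # 1,843 test samples

print(f"total p-adic loss: {total:.8f}")           # 459.72716865, as reported
print(f"mean p-adic loss:  {total / n_test:.8f}")  # 0.24944502, as reported
```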
L1 (Lasso) regularization promotes sparsity by driving many coefficients to exactly zero. The model is given all 3,664 available tags as input features, and the L1 penalty performs feature selection implicitly: of the 1,792,185 coefficients, only 4,430 are non-zero, so those are the only parameters the fitted model actually uses.
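
A minimal sketch of how such a model could be fit with scikit-learn. The data here is a small synthetic stand-in (the report's actual pipeline is not shown), and `saga` is one of the sklearn solvers that supports the L1 penalty in the multinomial setting:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the real data: the report's run used a
# 7,267 x 3,664 binary tag matrix with (by inference) 489 classes.
rng = np.random.default_rng(0)
X = (rng.random((500, 200)) < 0.05).astype(float)  # sparse binary "tag" features
y = rng.integers(0, 10, size=500)                  # class labels

clf = LogisticRegression(
    penalty="l1",   # Lasso penalty: drives many coefficients to exactly zero
    C=1.0,          # inverse regularization strength, as in the table above
    solver="saga",  # supports L1 with multinomial logistic regression
    max_iter=2000,
)
clf.fit(X, y)

# Sparsity report analogous to the "Non-zero parameters" row above.
nnz = np.count_nonzero(clf.coef_) + np.count_nonzero(clf.intercept_)
total = clf.coef_.size + clf.intercept_.size
print(f"non-zero parameters: {nnz} / {total} ({1 - nnz / total:.1%} sparse)")
```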