| Metric | Value |
|---|---|
| Test accuracy | 50.73% |
| Test F1 score | 0.5538 |
| Hierarchical loss | 0.90412919 |
| P-adic loss (total) | 426.47836648 |
| P-adic loss (mean) | 0.23216024 |
| Prime base | 79 |
| Number of tags (input features) | 3,664 |
| Non-zero parameters | 4,610 / 1,792,185 (99.7% sparse) |
| L1 regularization (C) | 1.0000 |
| Training samples | 7,273 |
| Test samples | 1,837 |
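
The reported metrics are internally consistent, and the relationships between them can be verified directly. This is a minimal sketch; the numbers are copied from the table above, and the checks simply tie them together (mean = total / test samples, sparsity = share of zero parameters).

```python
# Quick consistency checks tying the reported metrics together
# (numbers copied from the metrics table).
total_padic, n_test = 426.47836648, 1_837
nonzero, total_params = 4_610, 1_792_185

mean_padic = total_padic / n_test
sparsity = 1 - nonzero / total_params
print(f"mean p-adic loss: {mean_padic:.8f}")  # reported: 0.23216024
print(f"sparsity: {sparsity:.1%}")            # reported: 99.7% sparse
```
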

| Agreement | Count | Share | Cost per mistake | Total contribution |
|---|---|---|---|---|
| Exact match | 932 | 50.73% | 0.000000 | 0.000000 |
| p^6 | 3 | 0.16% | 0.000000 | 0.000000 |
| p^5 | 1 | 0.05% | 0.000000 | 0.000000 |
| p^4 | 26 | 1.42% | 0.000000 | 0.000001 |
| p^3 | 117 | 6.37% | 0.000002 | 0.000237 |
| p^2 | 219 | 11.92% | 0.000160 | 0.035091 |
| p^1 | 114 | 6.21% | 0.012658 | 1.443038 |
| p^0 | 425 | 23.14% | 1.000000 | 425.000000 |
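
The per-mistake costs in the table are consistent with a cost of p^(-k) for a mistake whose prediction agrees with the truth up to hierarchy level p^k (and 0 for exact matches). This is an inference from the table values, not a statement of the author's exact implementation; the sketch below reproduces the cost column and the total from that assumption.

```python
# Assumed cost model (inferred from the table): a mistake whose prediction
# agrees with the truth up to level p^k costs p**(-k); exact matches cost 0.
P = 79  # prime base from the metrics table

def padic_cost(k: int) -> float:
    """Cost of a single mistake whose deepest agreement is at level p^k."""
    return P ** (-k)

# Mistake counts by agreement level, copied from the table rows above.
counts = {6: 3, 5: 1, 4: 26, 3: 117, 2: 219, 1: 114, 0: 425}

total = sum(n * padic_cost(k) for k, n in counts.items())
print(f"total p-adic loss: {total:.8f}")  # should match the reported total
```

With p = 79, agreement at p^4 or deeper already costs less than 3e-8 per mistake, which is why those rows round to 0.000000 in the cost column while still contributing a tiny total.
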
L1 (Lasso) regularization promotes sparsity by driving many coefficients to exactly zero. The model receives all 3,664 available tags as input features, and the L1 penalty performs feature selection during training: of the 1,792,185 parameters, only 4,610 remain non-zero (99.7% sparse).
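
A hypothetical sketch of this kind of setup, assuming scikit-learn's `LogisticRegression` with `penalty="l1"` and `C=1.0` (in scikit-learn's convention, smaller `C` means stronger regularization). The tiny random dataset stands in for the real 7,273-sample, 3,664-tag training data and is purely illustrative:

```python
# Sketch: multinomial logistic regression with an L1 penalty, then a
# sparsity report analogous to the "non-zero parameters" row above.
# The data here is random stand-in data, not the real tag features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 50))           # stand-in for 7,273 samples x 3,664 tags
y = rng.integers(0, 5, size=200)    # stand-in class labels

# saga is one of the solvers that supports the L1 penalty.
clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
clf.fit(X, y)

# Count coefficients (and intercepts) the L1 penalty left non-zero.
nonzero = np.count_nonzero(clf.coef_) + np.count_nonzero(clf.intercept_)
total = clf.coef_.size + clf.intercept_.size
print(f"non-zero parameters: {nonzero} / {total}")
```

The parameter count works the same way as in the table: one coefficient per (class, feature) pair plus one intercept per class, of which the L1 penalty zeroes out most.
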