| Metric | Value |
|---|---|
| Test accuracy | 50.36% |
| Test F1 score | 0.5445 |
| Hierarchical loss | 0.95487510 |
| P-adic loss (total) | 277.28447099 |
| P-adic loss (mean) | 0.22343632 |
| Prime base | 71 |
| Number of tags (input features) | 2,797 |
| Non-zero parameters | 3,247 / 1,024,068 (99.7% sparse) |
| L1 regularization (C) | 1.0000 |
| Training samples | 4,940 |
| Test samples | 1,241 |
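
As a consistency check, the mean p-adic loss in the table is simply the total p-adic loss divided by the 1,241 test samples:

$$
\text{mean p-adic loss} = \frac{277.28447099}{1241} \approx 0.22343632
$$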

| Agreement | Count | Share | Cost per mistake | Total contribution |
|---|---|---|---|---|
| Exact match | 625 | 50.36% | 0.000000 | 0.000000 |
| p^6 | 3 | 0.24% | 0.000000 | 0.000000 |
| p^5 | 0 | 0.00% | 0.000000 | 0.000000 |
| p^4 | 21 | 1.69% | 0.000000 | 0.000001 |
| p^3 | 72 | 5.80% | 0.000003 | 0.000201 |
| p^2 | 155 | 12.49% | 0.000198 | 0.030748 |
| p^1 | 89 | 7.17% | 0.014085 | 1.253521 |
| p^0 | 276 | 22.24% | 1.000000 | 276.000000 |
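
The cost structure above matches the standard p-adic absolute value at p = 71: a mistake whose label difference is divisible by exactly p^k costs p^{-k}, and an exact match costs 0. Below is a minimal sketch of that cost, assuming labels are encoded as integers whose base-71 digits carry the hierarchy (an assumption; the source does not show the encoding):

```python
def p_adic_valuation(n, p=71):
    """Largest k such that p**k divides n (assumes n != 0)."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def p_adic_cost(y_true, y_pred, p=71):
    """p-adic absolute value |y_true - y_pred|_p = p**(-v_p(diff)).

    Exact matches cost 0; a difference not divisible by p (the p^0 row)
    costs 1. Integer label encoding is assumed, not confirmed by the source.
    """
    diff = abs(y_true - y_pred)
    if diff == 0:
        return 0.0
    return float(p) ** (-p_adic_valuation(diff, p))

# A difference divisible by 71 exactly once costs 1/71, matching the p^1 row.
print(p_adic_cost(71, 0))  # 0.014084507042253521
```

The "Total contribution" column is then count × cost per row, and the rows sum to the total p-adic loss of 277.28447099 reported above.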
L1 (Lasso) regularization promotes sparsity by driving many coefficients to exactly zero. The model receives all 2,797 available tags as input features, and the L1 penalty performs feature selection implicitly: of the 1,024,068 parameters, only 3,247 end up non-zero (99.7% sparse).
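
The training code is not included in this report, but a fit with these statistics could look like the scikit-learn sketch below. Dataset shapes and names are placeholders; note that the 1,024,068 total parameters are consistent with 366 classes × (2,797 weights + 1 intercept), though the class count is not stated in the source.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder data: the real task uses a 4,940 x 2,797 binary tag matrix.
X_train = rng.integers(0, 2, size=(500, 200)).astype(float)
y_train = rng.integers(0, 8, size=500)

# L1 (Lasso) penalty with C = 1.0, as reported above; liblinear fits
# one-vs-rest classifiers and supports the L1 penalty.
model = LogisticRegression(penalty="l1", C=1.0, solver="liblinear")
model.fit(X_train, y_train)

# Count surviving coefficients, analogous to the "Non-zero parameters" row.
nonzero = int(np.count_nonzero(model.coef_))
total = model.coef_.size
print(f"Non-zero coefficients: {nonzero} / {total} ({1 - nonzero / total:.1%} sparse)")
```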