| Metric | Value |
|---|---|
| Test accuracy | 48.40% |
| Test F1 score | 0.5305 |
| Hierarchical loss | 0.89794691 |
| P-adic loss (total) | 463.37787172 |
| P-adic loss (mean) | 0.25629307 |
| Prime base | 79 |
| Number of tags (input features) | 3,664 |
| Non-zero parameters | 4,666 / 1,803,180 (99.7% sparse) |
| L1 regularization (C) | 1.0000 |
| Training samples | 7,302 |
| Test samples | 1,808 |
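
The mean p-adic loss is the total divided by the 1,808 test samples: 463.37787172 / 1,808 ≈ 0.25629307. The table below breaks the total down by how far each prediction agrees with the true label.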
| Agreement level | Count | Share | Cost per mistake | Total contribution |
|---|---|---|---|---|
| Exact match | 875 | 48.40% | 0.000000 | 0.000000 |
| p^4 | 39 | 2.16% | 0.000000 | 0.000001 |
| p^3 | 102 | 5.64% | 0.000002 | 0.000207 |
| p^2 | 224 | 12.39% | 0.000160 | 0.035892 |
| p^1 | 106 | 5.86% | 0.012658 | 1.341772 |
| p^0 | 462 | 25.55% | 1.000000 | 462.000000 |
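
The per-mistake costs are consistent with charging p^(-k) for a disagreement at agreement level p^k (with p = 79): 1/79 ≈ 0.012658, 1/79² ≈ 0.000160, and so on. Under that assumed cost schedule, a minimal Python sketch re-derives the total and mean p-adic losses from the counts above:

```python
# A minimal sketch that re-derives the p-adic loss from the counts above.
# The cost schedule p**(-k) is inferred from the table, not stated in the
# source, so treat it as an assumption.
p = 79
mistakes = {  # agreement level k -> number of mispredicted test samples
    4: 39,    # agree through p^4: nearly free
    3: 102,
    2: 224,
    1: 106,
    0: 462,   # no agreement at all: full cost of 1.0
}
exact_matches = 875  # cost 0.0, contribute nothing

total = sum(count * p ** -k for k, count in mistakes.items())
n_test = exact_matches + sum(mistakes.values())

print(f"total p-adic loss: {total:.8f}")           # ~463.37787172
print(f"mean p-adic loss:  {total / n_test:.8f}")  # ~0.25629307
```

Note how the p^0 disagreements, at full cost, account for nearly all of the total loss.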
L1 (Lasso) regularization promotes sparsity by driving many coefficients to exactly zero. The model is given all 3,664 available tags as input features, and the L1 penalty performs feature selection during training: only 4,666 of the 1,803,180 parameters end up non-zero (99.7% sparse), so the fitted model relies on a small fraction of its coefficients.
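
For reference, the sketch below shows how such a sparsity count is typically obtained with scikit-learn's LogisticRegression. The data and class count are synthetic stand-ins, not the model's actual inputs; incidentally, 1,803,180 = 492 × (3,664 + 1), which would be consistent with 492 classes, each having one weight per tag plus an intercept.

```python
# A minimal sketch, assuming a scikit-learn L1-penalized logistic regression;
# the data below is synthetic and much smaller than the real tag matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = (rng.random((500, 100)) < 0.05).astype(float)  # sparse binary "tag" features
y = rng.integers(0, 10, size=500)                  # stand-in class labels

# penalty="l1" drives many weights to exactly zero; C is scikit-learn's
# inverse regularization strength (C = 1.0, matching the table above).
clf = LogisticRegression(penalty="l1", C=1.0, solver="saga", max_iter=2000)
clf.fit(X, y)

nonzero = np.count_nonzero(clf.coef_) + np.count_nonzero(clf.intercept_)
total = clf.coef_.size + clf.intercept_.size
print(f"non-zero parameters: {nonzero:,} / {total:,} "
      f"({100 * (1 - nonzero / total):.1f}% sparse)")
```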