Padjective Tag Hierarchy

Machine learning insights into Shopify product tag organization

Data sourced from cantbuymelove.industrial-linguistics.com, which powers Shopify taxonomy classification; the dataset is filtered to taxonomies with at least five products.

Last updated 2026-01-31 19:06 UTC

6,108 Products used
368 Taxonomies covered
9,365 Tags used
24,460 Total tags
2,544 Tag battles

Dataset coverage

Training data spans 6,108 products across 368 taxonomies. Of the 24,460 total tags in the dataset, 9,365 were used (tags appearing fewer than 5 times were filtered out). 4,184 products were discarded due to missing or sparse taxonomy labels. Explore the full dataset → | View defective taxonomy labels →

Dummy Baseline

Always predicts most common taxonomy (baseline for comparison)

0.6044 Avg p-adic loss
1 Parameter
View model →
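Every model card on this page reports an average p-adic loss, but the metric itself is not defined here. A minimal sketch of one plausible hierarchy-aware formulation, assuming the loss between two taxonomy paths (e.g. 1.1.13.8) is p^(-k), where k is the length of their shared prefix; this is a guess at the dashboard's metric, not its confirmed definition:

```python
def p_adic_loss(pred: str, true: str, p: int = 2) -> float:
    """Hypothetical p-adic-style loss between dot-separated taxonomy
    paths: p**(-k) for k shared leading levels, 0 for an exact match.
    Deeper agreement in the hierarchy means a smaller penalty."""
    if pred == true:
        return 0.0
    k = 0
    for a, b in zip(pred.split("."), true.split(".")):
        if a != b:
            break
        k += 1
    return p ** -k

# Sibling leaves share a long prefix and incur a small loss,
# while a top-level mistake costs a full unit:
print(p_adic_loss("1.1.13.8", "1.1.13.9"))  # 0.125  (2**-3)
print(p_adic_loss("1.1.13.8", "9.2.3.2"))   # 1.0    (2**0)
```

Under this reading, a dummy model that always predicts the most common class can still score below 1.0 on average because many wrong predictions share upper levels of the hierarchy with the true label.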

Importance-Optimised p-adic Linear Regression

P-adic coefficients assigned to tags to predict taxonomy

0.3702 Avg p-adic loss
784 Avg non-zero coefficients
View model →

Zubarev Regression (UMLLR init)

Stochastic p-adic optimization starting from UMLLR (arXiv:2503.23488)

0.4111 Avg p-adic loss
2,317 Non-zero coefficients
View fold details →

Zubarev Regression (Zeros init)

Stochastic p-adic optimization starting from zeros (arXiv:2503.23488)

0.4404 Avg p-adic loss
2,407 Non-zero coefficients
View fold details →

Zubarev Mahler-1 (UMLLR init)

Mahler affine basis (degree 1) with UMLLR initialization

0.4148 Avg p-adic loss
2,208 Non-zero coefficients
View fold details →

Zubarev Mahler-2 (UMLLR init)

Mahler quadratic basis (degree 2) with UMLLR initialization

0.4169 Avg p-adic loss
2,213 Non-zero coefficients
View fold details →

Unconstrained Logistic Regression

L1-regularized model using ALL tags

0.2132 Avg p-adic loss
3,110 Non-zero params
View model →

Decision Tree

Unconstrained tree using ALL tags

0.1889 Avg p-adic loss
25,971 Effective params
View model →

Unconstrained Neural Network

L1-regularized NN with weight pruning

0.2115 Avg p-adic loss
34,081 Non-zero params
View model →

Parameter Constrained Neural Network

Neural network predicting taxonomy from tags

0.6568 Avg p-adic loss
864 Avg input weights
View model →

Parameter Constrained Logistic Regression

Logistic regression model predicting Shopify taxonomy from tags

0.6953 Avg p-adic loss
11,667 Avg parameters
View model →

ELO-Inspired Rankings

Battle-tested tag hierarchy from product title positions

2,544 Tag battles
View rankings →
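The rankings above are described as Elo-inspired, with "battles" decided by product title positions. As a sketch of how such an update could work, assuming (hypothetically) that the tag appearing earlier in a title is treated as the winner of each pairwise battle:

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32) -> tuple:
    """Standard Elo rating update. The 'battle' framing is an
    assumption: the tag placed earlier in a product title beats
    the tag placed later, mirroring the page's title-position battles."""
    expected_w = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_w)
    return r_winner + delta, r_loser - delta

# Two tags start at 1000; the title-leading tag gains rating.
a, b = elo_update(1000, 1000)
print(a, b)  # 1016.0 984.0
```

Repeating this over all 2,544 battles yields a total ordering of tags by rating, i.e. the tag hierarchy.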

Taxonomy distribution

Taxonomy class distribution
Distribution of products across the most common taxonomy classes

Top 10 taxonomy classes

| Taxonomy ID | Name | Path | Samples | Share |
|---|---|---|---|---|
| gid://shopify/TaxonomyCategory/aa-1-13-8 | Apparel & Accessories > Clothing > Clothing Tops > T-Shirts | 1.1.13.8 | 304 | 5.0% |
| gid://shopify/TaxonomyCategory/fb-2-3-2 | Food, Beverages & Tobacco > Food Items > Candy & Chocolate > Chocolate | 9.2.3.2 | 249 | 4.1% |
| gid://shopify/TaxonomyCategory/aa-1-4 | Apparel & Accessories > Clothing > Dresses | 1.1.4 | 142 | 2.3% |
| gid://shopify/TaxonomyCategory/aa-6-8 | Apparel & Accessories > Jewelry > Necklaces | 1.6.8 | 142 | 2.3% |
| gid://shopify/TaxonomyCategory/ae-2-1 | Arts & Entertainment > Hobbies & Creative Arts > Arts & Crafts | 3.2.1 | 130 | 2.1% |
| gid://shopify/TaxonomyCategory/aa-6-6 | Apparel & Accessories > Jewelry > Earrings | 1.6.6 | 118 | 1.9% |
| gid://shopify/TaxonomyCategory/hg-9 | Home & Garden > Household Appliances | 14.9 | 105 | 1.7% |
| gid://shopify/TaxonomyCategory/ha-6-2-5 | Hardware > Hardware Accessories > Cabinet Hardware > Cabinet Knobs & Handles | 12.6.2.5 | 89 | 1.5% |
| gid://shopify/TaxonomyCategory/lb | Luggage & Bags | 15 | 81 | 1.3% |
| gid://shopify/TaxonomyCategory/ae-2-2 | Arts & Entertainment > Hobbies & Creative Arts > Collectibles | 3.2.2 | 79 | 1.3% |

Tags with strongest signal

| Tag | Top taxonomy | Weight | Max \|weight\| |
|---|---|---|---|
| FRAMED ARTWORK | 3.2.2 | 5.8150 | 5.8150 |
| BLUE | 14.11.10.4.3 | 5.4904 | 5.4904 |
| WOMENS | 1.8.7 | 5.4341 | 5.4341 |
| ACCESSORIES | 1.2.4 | 5.2483 | 5.2483 |
| GIFT | 14.15.1.9 | 5.0948 | 5.0948 |
| WHOLESALE | 14.11.10.7.9 | 5.0544 | 5.0544 |
| VEGAN | 13.3.5.2 | 5.0324 | 5.0324 |
| KIDS | 13.1.20 | 4.9532 | 4.9532 |
| NEW ARRIVALS | 13.3.2.8.4 | 4.8409 | 4.8409 |
| PLUS SIZE | 1.1.1.1.5 | 4.8031 | 4.8031 |

Historical Performance Trends

Tracking model performance and dataset growth over time. Lower p-adic loss indicates better predictions.

Historical model performance trends
Model performance vs number of products
| Model | Slope (per product) | Intercept | R² | p-value |
|---|---|---|---|---|
| Importance-Optimised p-adic LR | 0.000012 | 0.2991 | 0.3005 | 8.13e-08 |
| PCLR | 0.000082 | 0.2835 | 0.7514 | 3.32e-26 |
| PCNN | 0.000082 | 0.2458 | 0.8471 | 8.97e-35 |
| ULR | 0.000009 | 0.1766 | 0.2355 | 2.00e-04 |
| UNN | 0.000027 | 0.0671 | 0.7473 | 1.50e-16 |
| Decision Tree | 0.000009 | 0.1401 | 0.2973 | 3.52e-05 |
| Zubarev (UMLLR) | 0.000020 | 0.3037 | 0.8125 | 4.01e-16 |
| Zubarev (zeros) | 0.000027 | 0.2936 | 0.8331 | 3.85e-17 |
| Zubarev (M1) | 0.000007 | 0.3744 | 0.3632 | 2.41e-05 |
| Zubarev (M2) | 0.000011 | 0.3547 | 0.5395 | 3.08e-08 |
| Dummy Baseline | -0.000073 | 1.0940 | 0.5802 | 4.63e-14 |

Extrapolation Analysis: When Will Importance-Optimised p-adic LR Outperform Other Models?

Based on current regression trends, we can extrapolate when Importance-Optimised p-adic LR will achieve better performance (lower p-adic loss) than other models as the dataset grows. The confidence intervals are calculated using bootstrap resampling (n=1000).

| Model | Crossover point (products) | 95% confidence interval | Probability | Estimated date |
|---|---|---|---|---|
| UNN (Unconstrained Neural Network) | 14,983 | 11,806 – 21,237 (σ = 2,567) | >95% | 2026-07-09 (±uncertain; R² = 0.997, growth = 56.3 products/day) |
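The crossover point follows from equating the two fitted lines: m₁x + b₁ = m₂x + b₂ gives x = (b₂ − b₁) / (m₁ − m₂). Using the rounded slopes and intercepts printed in the per-product table above:

```python
# Fitted per-product trends (rounded as printed in the table above).
m_unn, b_unn = 0.000027, 0.0671        # UNN
m_padic, b_padic = 0.000012, 0.2991    # Importance-Optimised p-adic LR

# Where the two regression lines intersect:
crossover = (b_padic - b_unn) / (m_unn - m_padic)
print(round(crossover))  # 15467
```

This lands near the reported 14,983 products; the gap comes from the table's slopes being rounded to six decimal places, while the page presumably uses the unrounded fits.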

Statistical Notes: The crossover points are calculated by finding where the regression lines intersect. The 95% confidence intervals are derived from bootstrap resampling of the regression parameters. The probability estimates indicate the likelihood that the crossover will occur given the current trends. Date predictions are based on linear extrapolation of dataset growth and should be interpreted with caution.
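The notes above mention bootstrap resampling of the regression parameters. A self-contained sketch of the idea, here applied to the slope of a simple least-squares fit (percentile bootstrap; the page's exact resampling scheme is an assumption):

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def bootstrap_slope_ci(xs, ys, n_boot=1000, seed=0):
    """Percentile-bootstrap 95% CI for the slope: resample
    (x, y) pairs with replacement, refit, take the 2.5th and
    97.5th percentiles of the resampled slopes."""
    rng = random.Random(seed)
    pairs = list(zip(xs, ys))
    slopes = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        sx, sy = zip(*sample)
        slopes.append(fit_line(sx, sy)[0])
    slopes.sort()
    return slopes[int(0.025 * n_boot)], slopes[int(0.975 * n_boot)]
```

Applying this to both models' loss-vs-products fits and intersecting each resampled pair of lines yields a distribution of crossover points, from which the interval like 11,806 – 21,237 above would be read off.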

Model performance vs number of distinct tags
| Model | Slope (per tag) | Intercept | R² | p-value |
|---|---|---|---|---|
| Importance-Optimised p-adic LR | 0.000014 | 0.2424 | 0.3331 | 1.12e-08 |
| PCLR | 0.000090 | -0.0697 | 0.7355 | 4.17e-25 |
| PCNN | 0.000090 | -0.1065 | 0.8234 | 3.11e-32 |
| ULR | 0.000009 | 0.1442 | 0.2658 | 6.61e-05 |
| UNN | 0.000027 | -0.0256 | 0.7661 | 2.15e-17 |
| Decision Tree | 0.000009 | 0.1069 | 0.3271 | 1.16e-05 |
| Zubarev (UMLLR) | 0.000021 | 0.2276 | 0.8448 | 8.96e-18 |
| Zubarev (zeros) | 0.000028 | 0.1928 | 0.8509 | 4.01e-18 |
| Zubarev (M1) | 0.000008 | 0.3465 | 0.3708 | 1.87e-05 |
| Zubarev (M2) | 0.000011 | 0.3161 | 0.5375 | 3.36e-08 |
| Dummy Baseline | -0.000078 | 1.3849 | 0.6068 | 5.22e-15 |

Extrapolation Analysis: When Will Importance-Optimised p-adic LR Outperform Other Models?

As with the per-product analysis above, we extrapolate when Importance-Optimised p-adic LR will achieve lower p-adic loss than other models, this time as the number of distinct tags grows. Confidence intervals again use bootstrap resampling (n=1000).

| Model | Crossover point (tags) | 95% confidence interval | Probability | Estimated date |
|---|---|---|---|---|
| UNN (Unconstrained Neural Network) | 20,222 | 16,216 – 29,699 (σ = 3,730) | >95% | 2026-09-04 (±uncertain; R² = 0.993, growth = 50.4 tags/day) |


Model complexity vs performance (parameter count vs p-adic loss)
Parameter count (log scale) vs p-adic loss. Sparse models use fewer non-zero parameters.

Regression: p-adic loss = slope × log₁₀(params) + intercept

| Line | Slope | Intercept | R² | p-value | Significant? | n |
|---|---|---|---|---|---|---|
| With Dummy | -0.0698 | 0.6474 | 0.2301 | 0.1354 | No | 11 |
| Without Dummy | -0.1223 | 0.8396 | 0.1616 | 0.2495 | No | 10 |
Unconstrained models: complexity vs performance (log-log scale)
Unconstrained models only (no PCLR/PCNN). Both axes on log scale.

Regression: log₁₀(loss) = slope × log₁₀(params) + intercept

| Slope | Intercept | R² | p-value | Significant? | n |
|---|---|---|---|---|---|
| -0.1108 | -0.2041 | 0.9062 | 0.0125 | Yes | 5 |
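Because both axes are logged, the fitted line corresponds to a power law: loss ≈ 10^intercept × params^slope. A quick sanity check using the fitted coefficients above (treating them as exact, though they are rounded):

```python
import math

def predicted_loss(params, slope=-0.1108, intercept=-0.2041):
    """Power law implied by the log-log fit above:
    log10(loss) = slope * log10(params) + intercept."""
    return 10 ** (intercept + slope * math.log10(params))

# A shallow power law: each 10x increase in parameter count buys
# only about 22% lower loss (10**-0.1108 ~= 0.775).
print(predicted_loss(3_110))  # ~0.256, vs the ULR point (0.2132 at 3,110 params)
```

The shallow slope suggests that, among the unconstrained models, throwing more parameters at the problem yields steeply diminishing returns.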
Model performance trajectory over time
Arrows show how each model's complexity and performance have changed over time.