Padjective Tag Hierarchy

Machine learning insights into Shopify product tag organization

Data sourced from cantbuymelove.industrial-linguistics.com, which powers Shopify taxonomy classification, filtered to taxonomies with at least five products.

Last updated 2026-03-19 19:33 UTC

9,110 Products used
493 Taxonomies covered
11,353 Tags used
29,567 Total tags
3,753 Tag battles

Dataset coverage

Training data spans 9,110 products across 493 taxonomies. Of 29,567 total tags in the dataset, 11,353 tags were used (tags appearing fewer than 5 times were filtered out). 5,490 products were discarded due to missing or sparse taxonomy labels. Explore the full dataset → | View defective taxonomy labels →

Dummy Baseline

Always predicts the most common taxonomy (baseline for comparison)

0.5825 Avg p-adic loss
1 Parameter
View model →

Importance-Optimised p-adic Linear Regression

P-adic coefficients assigned to tags to predict taxonomy

0.3778 Avg p-adic loss
1,103 Avg non-zero coefficients
View model →

Level-wise Logistic Regression

Hierarchy-aware top-down classifier that always emits a valid taxonomy path

0.1008 Avg p-adic loss
83.01% Prefix-2 accuracy
132,415 Non-zero params
View model →

Zubarev Regression (UMLLR init)

Stochastic p-adic optimization starting from UMLLR (arXiv:2503.23488)

0.4273 Avg p-adic loss
2,901 Non-zero coefficients
View fold details →

Zubarev Regression (Zeros init)

Stochastic p-adic optimization starting from zeros (arXiv:2503.23488)

0.4586 Avg p-adic loss
3,048 Non-zero coefficients
View fold details →

Zubarev Mahler-1 (UMLLR init)

Mahler affine basis (degree 1) with UMLLR initialization

0.4253 Avg p-adic loss
2,727 Non-zero coefficients
View fold details →

Zubarev Mahler-2 (UMLLR init)

Mahler quadratic basis (degree 2) with UMLLR initialization

0.4261 Avg p-adic loss
2,732 Non-zero coefficients
View fold details →

Unconstrained Logistic Regression

L1-regularized model using ALL tags

0.2416 Avg p-adic loss
4,552 Non-zero params
View model →

Decision Tree

Unconstrained tree using ALL tags

0.2081 Avg p-adic loss
40,902 Effective params
View model →

Unconstrained Neural Network

L1-regularized NN with weight pruning

0.2279 Avg p-adic loss
27,154 Non-zero params
View model →

Parameter Constrained Neural Network

Neural network predicting taxonomy from tags

0.6923 Avg p-adic loss
864 Avg input weights
View model →

Parameter Constrained Logistic Regression

Logistic regression model predicting Shopify taxonomy from tags

0.6650 Avg p-adic loss
15,661 Avg parameters
View model →

Elo-Inspired Rankings

Battle-tested tag hierarchy from product title positions

3,753 Tag battles
View rankings →
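
The exact update rule behind these rankings is not published on this page. The sketch below assumes the standard Elo update, with the tag that appears earlier in a product title winning the battle; the K-factor, function names, and tags are illustrative only.

    # Hypothetical sketch of an Elo-style update for tag battles.
    # Assumption: the tag nearer the front of a product title "wins";
    # the page does not publish its actual update rule or K-factor.
    K = 32  # illustrative K-factor

    def expected(r_a: float, r_b: float) -> float:
        """Expected win probability of A against B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def battle(r_winner: float, r_loser: float) -> tuple[float, float]:
        """Apply one battle: the winner gains what the loser gives up."""
        gain = K * (1.0 - expected(r_winner, r_loser))
        return r_winner + gain, r_loser - gain

    ratings = {"vintage": 1500.0, "cotton": 1500.0}
    # "vintage" precedes "cotton" in some product title, so it wins:
    ratings["vintage"], ratings["cotton"] = battle(ratings["vintage"], ratings["cotton"])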

Taxonomy distribution

Taxonomy class distribution
Distribution of products across the most common taxonomy classes

Top 10 taxonomy classes

Taxonomy ID | Name / Path | Samples | Share
gid://shopify/TaxonomyCategory/bt | Baby & Toddler | 494 | 1.0%
gid://shopify/TaxonomyCategory/lb | Luggage & Bags | 1,582 | 0.9%
gid://shopify/TaxonomyCategory/bu | Bundles | 530 | 0.3%
gid://shopify/TaxonomyCategory/na | Uncategorized | 2,517 | 0.2%
gid://shopify/TaxonomyCategory/sg | Sporting Goods | 2,313 | 0.1%
gid://shopify/TaxonomyCategory/os | Office Supplies | 1,812 | 0.1%
gid://shopify/TaxonomyCategory/gc | Gift Cards | 1,111 | 0.1%
gid://shopify/TaxonomyCategory/hg | Home & Garden | 148 | 0.1%
gid://shopify/TaxonomyCategory/ma | Mature | 167 | 0.1%
gid://shopify/TaxonomyCategory/fb | Food, Beverages & Tobacco | 96 | 0.1%

Tags with strongest signal

Tag | Top taxonomy | Weight | Max |weight|
FPM23 | | 3.9876 | 3.9876

Historical Performance Trends

Tracking model performance and dataset growth over time. Lower p-adic loss indicates better predictions.
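
The page does not define its p-adic loss. One natural construction for hierarchical labels, shown below purely as an assumption, scores a prediction by p raised to the negative length of the prefix it shares with the true taxonomy path, so agreement at deeper levels shrinks the loss geometrically. The benchmark's actual definition, including its choice of p and any handling of exact matches, may differ.

    # Assumed sketch of a p-adic-style loss over taxonomy paths; NOT the
    # benchmark's confirmed definition. Loss = p ** -(shared prefix length),
    # so deeper agreement between predicted and true paths costs less.
    def p_adic_loss(pred: list[str], true: list[str], p: int = 2) -> float:
        k = 0
        for a, b in zip(pred, true):
            if a != b:
                break
            k += 1
        return float(p) ** -k

    # Shares a 2-level prefix -> 2**-2 = 0.25:
    print(p_adic_loss(["Sporting Goods", "Outdoor Recreation", "Camping"],
                      ["Sporting Goods", "Outdoor Recreation", "Hiking"]))
    # Disagrees at the root -> 1.0:
    print(p_adic_loss(["Office Supplies"], ["Sporting Goods"]))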

Historical model performance trends
Model performance vs number of products
Model | Slope (per product) | Intercept | R² | p-value
Importance-Optimised p-adic LR | 0.000009 | 0.3081 | 0.4992 | 1.70e-20
PCLR | 0.000044 | 0.4026 | 0.6522 | 1.90e-30
PCNN | 0.000044 | 0.3658 | 0.7196 | 2.55e-36
ULR | 0.000006 | 0.1871 | 0.4820 | 2.25e-15
UNN | 0.000011 | 0.1349 | 0.6477 | 5.11e-23
Decision Tree | 0.000005 | 0.1568 | 0.4341 | 3.92e-13
Zubarev (UMLLR) | 0.000008 | 0.3606 | 0.6758 | 2.99e-22
Zubarev (zeros) | 0.000013 | 0.3589 | 0.7864 | 6.74e-30
Zubarev (M1) | 0.000003 | 0.3936 | 0.4531 | 1.25e-12
Zubarev (M2) | 0.000005 | 0.3805 | 0.6238 | 1.61e-19
Dummy Baseline | -0.000056 | 1.0247 | 0.7712 | 5.03e-37

Extrapolation Analysis: When Will Importance-Optimised p-adic LR Outperform Other Models?

Based on current regression trends, we can extrapolate when Importance-Optimised p-adic LR will achieve better performance (lower p-adic loss) than other models as the dataset grows. The confidence intervals are calculated using bootstrap resampling (n=1000).

Model | Crossover point (products) | 95% confidence interval | Probability | Estimated date
UNN (Unconstrained Neural Network) | 74,492 | 37,619 - 469,203 (σ=1,106,117) | >95% | 2029-02-21 (±uncertain, R²=0.996, growth=61.1 products/day)

Statistical Notes: The crossover points are calculated by finding where the regression lines intersect. The 95% confidence intervals are derived from bootstrap resampling of the regression parameters. The probability estimates indicate the likelihood that the crossover will occur given the current trends. Date predictions are based on linear extrapolation of dataset growth and should be interpreted with caution.
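
In code, the intersection step described in those notes is a single line: solving m1·n + b1 = m2·n + b2 gives n* = (b2 - b1)/(m1 - m2). The sketch below plugs in the rounded coefficients from the products-axis trend table above, so it only approximates the reported 74,492; the site presumably computes from unrounded fits.

    # Crossover of two linear loss trends: solve m1*n + b1 = m2*n + b2.
    # Coefficients are the rounded values from the trend table above, so
    # the result only approximates the reported 74,492 products.
    def crossover(m1: float, b1: float, m2: float, b2: float) -> float:
        return (b2 - b1) / (m1 - m2)

    m_io, b_io = 0.000009, 0.3081    # Importance-Optimised p-adic LR
    m_unn, b_unn = 0.000011, 0.1349  # UNN
    print(crossover(m_io, b_io, m_unn, b_unn))  # ~86,600 with rounded inputs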

Model performance vs number of distinct tags
Model | Slope (per tag) | Intercept | R² | p-value
Importance-Optimised p-adic LR | 0.000012 | 0.2579 | 0.5296 | 3.27e-22
PCLR | 0.000055 | 0.1639 | 0.6829 | 5.74e-33
PCNN | 0.000055 | 0.1279 | 0.7510 | 1.49e-39
ULR | 0.000008 | 0.1539 | 0.5057 | 2.31e-16
UNN | 0.000015 | 0.0652 | 0.7251 | 4.20e-28
Decision Tree | 0.000006 | 0.1281 | 0.4661 | 2.54e-14
Zubarev (UMLLR) | 0.000011 | 0.3092 | 0.7451 | 1.18e-26
Zubarev (zeros) | 0.000018 | 0.2758 | 0.8393 | 4.24e-35
Zubarev (M1) | 0.000005 | 0.3719 | 0.4851 | 9.59e-14
Zubarev (M2) | 0.000007 | 0.3476 | 0.6590 | 2.54e-21
Dummy Baseline | -0.000070 | 1.3238 | 0.8032 | 1.25e-40

Extrapolation Analysis: When Will Importance-Optimised p-adic LR Outperform Other Models?

Based on current regression trends, we can extrapolate when Importance-Optimised p-adic LR will achieve better performance (lower p-adic loss) than other models as the dataset grows. The confidence intervals are calculated using bootstrap resampling (n=1000).

Model | Crossover point (tags) | 95% confidence interval | Probability | Estimated date
UNN (Unconstrained Neural Network) | 53,151 | 32,507 - 249,219 (σ=217,546) | >95% | 2028-07-03 (±uncertain, R²=0.997, growth=49.6 tags/day)

Statistical Notes: The crossover points are calculated by finding where the regression lines intersect. The 95% confidence intervals are derived from bootstrap resampling of the regression parameters. The probability estimates indicate the likelihood that the crossover will occur given the current trends. Date predictions are based on linear extrapolation of dataset growth and should be interpreted with caution.
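
A minimal sketch of how such a bootstrap interval could be produced: resample the snapshot history with replacement, refit both trend lines, recompute the intersection, and take the 2.5th/97.5th percentiles. The arrays below are hypothetical placeholders, not the site's actual snapshot data.

    # Bootstrap CI for a crossover point (n=1000 resamples). The arrays
    # below are hypothetical placeholders standing in for the snapshot
    # history of (tag count, loss) pairs.
    import numpy as np

    rng = np.random.default_rng(0)
    n_tags = np.array([4000.0, 6000.0, 8000.0, 10000.0, 11353.0])
    loss_a = np.array([0.34, 0.35, 0.36, 0.37, 0.378])  # slower-growing loss
    loss_b = np.array([0.14, 0.17, 0.19, 0.21, 0.228])  # faster-growing loss

    crossovers = []
    for _ in range(1000):
        idx = rng.integers(0, len(n_tags), len(n_tags))  # resample snapshots
        if np.unique(n_tags[idx]).size < 2:
            continue  # degenerate resample: cannot fit a line
        m1, b1 = np.polyfit(n_tags[idx], loss_a[idx], 1)
        m2, b2 = np.polyfit(n_tags[idx], loss_b[idx], 1)
        if m1 != m2:
            crossovers.append((b2 - b1) / (m1 - m2))

    lo, hi = np.percentile(crossovers, [2.5, 97.5])
    print(f"95% bootstrap CI: {lo:,.0f} - {hi:,.0f} tags")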

Model complexity vs performance (parameter count vs p-adic loss)
Both axes use log scale. The red line is the fixed parsimoniousness baseline rather than a fitted regression.

Why parsimony matters. The question here is not just which model has the lowest loss, but which model gets good p-adic loss with the fewest effective parameters. That is exactly where the smaller p-adic models are interesting.

Where this baseline came from. The original score came from a log-log regression on model size versus loss, rounded to -0.1 × log₁₀(params) - 0.2. Across historical snapshots, those scores drifted as the dataset covered more taxonomies, so the current baseline adds +0.3 × log₁₀(taxonomies / 1,000) to keep comparisons stable as the benchmark grows. For readability, we also re-centre the displayed score by dropping the old constant offset; that keeps the current tables mostly positive without changing the relative comparisons.

Parsimoniousness baseline: log₁₀(loss) = -0.1 × log₁₀(params) + 0.3 × log₁₀(taxonomies / 1,000)
Current snapshot taxonomies: 493
Parsimony score = baseline log₁₀(loss) − observed log₁₀(loss). Positive means better than baseline.
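
The scores in the table below fall straight out of that formula. For example, reproducing the Level-wise Logistic row from its published parameter count and loss:

    # Parsimony score computed from the baseline formula above.
    import math

    def parsimony_score(params: int, loss: float, taxonomies: int) -> float:
        baseline = -0.1 * math.log10(params) + 0.3 * math.log10(taxonomies / 1000)
        return baseline - math.log10(loss)  # positive = better than baseline

    # Level-wise Logistic: 132,415 params, 0.1008 loss, 493 taxonomies.
    print(round(parsimony_score(132_415, 0.1008, 493), 4))  # 0.3922, matching
    # the table's +0.3923 up to rounding of the published loss.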

Model | Params | Loss | log₁₀(params) | log₁₀(loss) | Baseline log₁₀(loss) | Parsimony score
Level-wise Logistic | 132,415 | 0.1008 | 5.1219 | -0.9966 | -0.6043 | +0.3923
ULR | 4,552 | 0.2416 | 3.6582 | -0.6170 | -0.4580 | +0.1590
Dummy | 1 | 0.5825 | 0.0000 | -0.2347 | -0.0921 | +0.1426
Decision Tree | 40,902 | 0.2081 | 4.6117 | -0.6818 | -0.5533 | +0.1285
UNN | 27,154 | 0.2279 | 4.4338 | -0.6423 | -0.5355 | +0.1067
Importance-Optimised | 1,103 | 0.3778 | 3.0427 | -0.4227 | -0.3964 | +0.0263
Zubarev (M1) | 2,727 | 0.4253 | 3.4357 | -0.3714 | -0.4357 | -0.0644
Zubarev (M2) | 2,732 | 0.4261 | 3.4365 | -0.3705 | -0.4358 | -0.0653
Zubarev (UMLLR) | 2,901 | 0.4273 | 3.4626 | -0.3693 | -0.4384 | -0.0691
Zubarev (zeros) | 3,048 | 0.4586 | 3.4840 | -0.3385 | -0.4406 | -0.1020
PCNN | 864 | 0.6923 | 2.9365 | -0.1597 | -0.3858 | -0.2261
PCLR | 15,661 | 0.6650 | 4.1948 | -0.1772 | -0.5116 | -0.3344

Historical parsimony score stability
Left: parsimony score versus dataset size. Right: score distribution across historical snapshots. Positive means better than the taxonomy-adjusted baseline.

Model | Snapshots | Mean score | Std dev | Span | Latest score | Latest products
Unconstrained Logistic Regression with L1 | 98 | +0.1689 | 0.0223 | 0.1083 | +0.1590 | 9,110
Dummy Baseline | 112 | -0.0013 | 0.1216 | 0.3180 | +0.1426 | 9,110
Decision Tree | 65 | +0.1555 | 0.0165 | 0.0720 | +0.1285 | 9,110
Unconstrained Neural Network with L1 | 96 | +0.1042 | 0.0332 | 0.1631 | +0.1067 | 9,110
Importance-Optimised p-adic Linear Regression | 65 | +0.0144 | 0.0091 | 0.0402 | +0.0263 | 9,110
Zubarev (UMLLR init) | 86 | -0.0781 | 0.0095 | 0.0465 | -0.0692 | 9,110
PCNN | 96 | -0.2460 | 0.0154 | 0.0646 | -0.2261 | 9,110
PCLR | 96 | -0.3806 | 0.0257 | 0.1880 | -0.3344 | 9,110

Smaller standard deviation and span mean a model’s parsimoniousness is more stable as the dataset grows.
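
"Span" is not defined on this page; the sketch below assumes it is simply the max-minus-min range of a model's historical scores, with a hypothetical score list standing in for real snapshots.

    # Stability statistics over a model's historical parsimony scores.
    # Assumption: "Span" = max - min across snapshots (not defined here);
    # the score list is a hypothetical placeholder.
    import statistics

    scores = [0.151, 0.162, 0.173, 0.168, 0.159]
    print(f"mean={statistics.fmean(scores):+.4f}",
          f"std={statistics.stdev(scores):.4f}",
          f"span={max(scores) - min(scores):.4f}")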

Unconstrained models: complexity vs performance (log-log scale)
Unconstrained models only (no PCLR/PCNN). Both axes on log scale.

Regression: log₁₀(loss) = slope × log₁₀(params) + intercept

Slope | Intercept | R² | p-value | Significant? | n
-0.0967 | -0.2151 | 0.9248 | 0.0090 | Yes | 5
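
A sketch of the same kind of fit, assuming ordinary least squares in log-log space. Which five unconstrained models feed the site's regression is not listed, so the (params, loss) pairs below are illustrative picks from the tables above and will not reproduce the exact coefficients.

    # OLS fit in log-log space: log10(loss) = slope * log10(params) + intercept.
    # The five (params, loss) pairs are illustrative picks from the tables
    # above; the site's exact model selection is not listed.
    import numpy as np
    from scipy import stats

    params = np.array([1_103, 4_552, 27_154, 40_902, 132_415], dtype=float)
    loss = np.array([0.3778, 0.2416, 0.2279, 0.2081, 0.1008])

    fit = stats.linregress(np.log10(params), np.log10(loss))
    print(f"slope={fit.slope:.4f} intercept={fit.intercept:.4f} "
          f"R^2={fit.rvalue ** 2:.4f} p={fit.pvalue:.4f}")
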
Model performance trajectory over time
Arrows show how each model's complexity and performance have changed over time.