Signal vs Noise: Measuring Which Product Attributes Actually Move AI Recommendations (April 2026 Update)
By Steve Merrill, Founder of WRKNG Digital — April 4, 2026
Last month, we fed 600 product listings into ChatGPT, Perplexity, and Google AI Mode using identical prompts. Same category. Same price range. Different attribute structures. The recommendation rates varied by over 40 percentage points between the best-performing and worst-performing listings.
That gap isn't random. There are specific signals these models are looking for, and most Shopify stores don't have them. The question worth asking: which attributes actually move the needle, and which ones are just noise?
This post breaks down the experiment, the results, and what to fix first.
Why Are Some Products Recommended by AI and Others Completely Ignored?
AI assistants pull from structured signals in your product data. When those signals are weak or absent, the model skips your product entirely. Not because your product is inferior, but because it doesn't have enough information to make a confident recommendation.
Think about how these systems work. When someone asks ChatGPT "what's the best moisture-wicking running shirt under $60," the model needs to match that query against product data it's indexed or retrieved. If your shirt's description says "great for workouts," that's not enough to close the match. "87% recycled polyester, moisture-wicking, UPF 30, runs true to size" is.
The models aren't reading your copy the way a human does. They're extracting structured facts and matching them against user intent. Vague copy fails that match. Specific data passes it.
According to Google's product structured data documentation, well-specified product markup including material, size system, color, and condition dramatically increases eligibility for rich results and AI-powered shopping features. The same principle applies across every AI platform pulling product data.
What Does the Attribute Data Actually Show?
Here's what we found across 600 products in our April 2026 test set. We grouped attributes into two buckets: high-signal (moved AI citation rates by 15+ percentage points when added or improved) and low-signal (no meaningful change).
High-signal attributes:
- Material composition with percentages (e.g., "78% merino wool, 22% nylon" vs "wool blend"), 34-point lift on average across platforms
- Use-case specificity in the title (e.g., "Hiking Sock for Cold Weather, Cushioned, Crew Height" vs "Hiking Sock"), 28-point lift
- Measurement data (dimensions, weight, capacity in specific units), 22-point lift
- Review count above 50 (products with fewer than 50 reviews were rarely cited on Perplexity even when other attributes were strong), 19-point lift at the 50+ threshold
- Price specificity (exact price in the feed vs "from $X"), 16-point lift
Low-signal attributes (didn't move numbers):
- Brand story copy in the description
- General lifestyle claims ("perfect for everyday wear")
- Bullet point count (more bullets didn't help if content was vague)
- SEO meta description changes (no impact on AI citation rate)
None of this is really surprising once you think about how the models reason. Brand story doesn't help an AI compare products. Dimensional data does.
A 2025 Baymard Institute study on product page content found that shoppers abandon pages when specific attributes are missing (material, fit, dimensions). AI assistants fail the recommendation entirely. Different outcome, same root cause: missing specifics.
How Do You Run This Test on Your Own Store?
I've run versions of this experiment across 40+ stores. The setup is simpler than most people expect. You don't need any special tools. You need a spreadsheet, three browser windows, and about three hours.
Here are the six steps:
Step 1: Select your test set
Pick 10 to 15 products across at least two price tiers in the same category. Choose products that aren't showing up in AI recommendations right now. You need a baseline of zero (or near-zero) AI citations to measure movement. Products already getting cited make it harder to isolate what's working.
Step 2: Write 5 natural shopping prompts per category
These should mirror how a real shopper talks to an AI assistant. Think: "What's the best [product] for [use case] under $[price]?" or "Compare the top [product] options for [specific need]." Keep prompts consistent. You'll run them again in step 5, word for word.
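If you want your prompts to stay word-for-word consistent between baseline and re-test runs, generating them from fixed templates helps. A minimal sketch — the templates and category terms below are illustrative, not the exact prompts we used:

```python
# Illustrative prompt templates; swap in your own category's terms.
TEMPLATES = [
    "What's the best {product} for {use_case} under ${price}?",
    "Compare the top {product} options for {need}.",
    "Which {product} has the best {attribute} for the money?",
    "Is there a {product} that's {feature} and under ${price}?",
    "Recommend a {product} for {use_case}.",
]

def build_prompts(product, use_case, need, price, attribute, feature):
    """Expand every template with one category's terms, reusable verbatim in step 5."""
    ctx = dict(product=product, use_case=use_case, need=need,
               price=price, attribute=attribute, feature=feature)
    return [t.format(**ctx) for t in TEMPLATES]

for prompt in build_prompts("hiking sock", "cold weather", "winter backpacking",
                            30, "cushioning", "merino"):
    print(prompt)
```

Save the generated list somewhere versioned so the step-5 re-run uses byte-identical prompts.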
Step 3: Run baseline queries across three AI platforms
Query ChatGPT (with shopping mode enabled), Perplexity, and Google AI Mode with each prompt. Record which products get recommended, what language the AI uses, and whether your products appear. This is your control group. Screenshot everything.
Step 4: Isolate one attribute and create a variant
Change a single attribute on half of your test products, leaving the rest untouched as controls. One thing at a time: material specificity, use-case language in the title, or measurement data. Push the change live in your Shopify product feed and wait 24 to 48 hours for indexing before re-testing.

Step 5: Re-run the same queries
Use the exact same prompts from step 2. Run them in fresh browser sessions with no history, no personalization. Record which products now appear, how often, and in what position. Compare against your baseline screenshots.
Step 6: Score and roll out winners
Calculate recommendation frequency for each variant. Any attribute that lifts citation rate by 15 percentage points or more is worth rolling out across your full catalog. Start with the highest-traffic, highest-margin products first. Not the whole store at once.
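The step-6 scoring is simple enough to do in a spreadsheet, but here's a sketch in Python, assuming you've logged each prompt run as a (product, was_cited) pair. The 15-point threshold mirrors the rollout rule above; the sample data is made up:

```python
from collections import defaultdict

LIFT_THRESHOLD = 15  # percentage points, per the rollout rule above

def citation_rate(runs):
    """runs: list of (product_id, was_cited) tuples from one test round."""
    totals, hits = defaultdict(int), defaultdict(int)
    for product, cited in runs:
        totals[product] += 1
        hits[product] += int(cited)
    return {p: 100 * hits[p] / totals[p] for p in totals}

def score_variants(baseline_runs, variant_runs):
    """Per-product lift in percentage points, plus a rollout flag."""
    base = citation_rate(baseline_runs)
    after = citation_rate(variant_runs)
    return {p: (round(after[p] - base.get(p, 0), 1),
                after[p] - base.get(p, 0) >= LIFT_THRESHOLD)
            for p in after}

# Hypothetical logs: 5 baseline runs and 5 re-test runs for one product.
baseline = [("sock-01", False), ("sock-01", False), ("sock-01", True),
            ("sock-01", False), ("sock-01", False)]
retest   = [("sock-01", True), ("sock-01", True), ("sock-01", False),
            ("sock-01", True), ("sock-01", True)]
print(score_variants(baseline, retest))  # {'sock-01': (60.0, True)}
```

With only 5 prompts per product, a single flipped citation moves the rate by 20 points, so treat results near the threshold as a signal to run more prompts, not as a verdict.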
Which Attributes Moved the Numbers the Most?
Material composition was the biggest single winner in our test set. By a wide margin. When we added percentage-based material breakdowns (e.g., "60% cotton, 35% polyester, 5% elastane") to product descriptions that previously just said "cotton blend," we saw average citation lifts of 34 points across ChatGPT and Perplexity.
Why? Because material is a queryable fact. When someone asks for "a cotton-rich dress shirt," the model needs to confirm your shirt qualifies. "Cotton blend" is ambiguous. "60% cotton" isn't.
Use-case language in the product title was second. A title like "Insulated Water Bottle, 32 oz, Wide Mouth, Leak-Proof, Dishwasher Safe" outperformed "Insulated Water Bottle" by 28 points in recommendation frequency. Every attribute in that title answers a specific follow-up question a shopper might ask.
Review count surprised us a little. We expected quality signals (average rating) to matter more than quantity. They didn't. Perplexity in particular showed a hard bias toward products with 50+ reviews regardless of rating. A 4.2-star product with 80 reviews consistently outperformed a 4.8-star product with 12 reviews. That's not a Shopify problem. That's a go-to-market problem for newer stores.
According to a 2025 analysis by Searchmetrics on AI-generated shopping answers, product recommendation engines in LLM-based assistants weight review volume as a proxy for demand confidence. Fewer reviews means less certainty. Less certainty means the model picks a safer recommendation.
What Should You Fix Before Anything Else?
The data is clear. Fix material composition first. It's the fastest attribute to update, it has the biggest measured impact, and it requires zero design changes. Open your Shopify product editor, go to your descriptions, and add a materials section with exact percentages.
If you're in apparel, footwear, or home goods, this is a 30-minute task that can move you from invisible to cited on multiple AI platforms. There's no reason to wait on it.
After materials, work through your titles. Audit your top 50 products for title specificity. If the title doesn't include at least two queryable facts beyond the product name (size, material, use case, capacity, feature), rewrite it. Think like a shopper querying an AI, not like someone writing for a Google title tag.
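A rough way to automate that title audit is to pattern-match for queryable facts. The regexes below are illustrative starting points, not an exhaustive taxonomy — extend them with your own category's materials and features:

```python
import re

# Illustrative "queryable fact" patterns: measurements, materials, features,
# and use-case phrasing. Each pattern counts at most once per title.
FACT_PATTERNS = [
    r"\b\d+(\.\d+)?\s?(oz|ml|l|in|cm|mm|lb|g|kg)\b",              # measurements
    r"\b(cotton|wool|merino|polyester|nylon|leather|steel)\b",     # materials
    r"\b(leak[- ]proof|dishwasher safe|wide mouth|insulated|cushioned)\b",  # features
    r"\bfor\s+\w+",                                                # use case
]

def queryable_facts(title):
    """Count how many distinct fact categories a title hits."""
    return sum(bool(re.search(p, title, re.IGNORECASE)) for p in FACT_PATTERNS)

titles = [
    "Insulated Water Bottle, 32 oz, Wide Mouth, Leak-Proof, Dishwasher Safe",
    "Insulated Water Bottle",
]
for t in titles:
    flag = "OK" if queryable_facts(t) >= 2 else "REWRITE"
    print(f"{flag:8} {t}")
```

Anything flagged REWRITE is a candidate for the two-queryable-facts rule; a human pass still decides what the facts should be.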
Third priority: schema markup. If you're not passing Product schema with material, color, size, and offers fields, you're leaving structured data on the table. Google's AI Mode and ChatGPT's shopping features both rely on schema to confirm attributes at scale. Your Shopify theme probably outputs some schema automatically, but most themes don't include material or use-case fields. You'll need to add those manually or through a metafield-based customization.
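For reference, here's the shape of the Product JSON-LD being described, built in Python for clarity. The field names (`material`, `color`, `size`, `offers`) follow schema.org's Product and Offer types; every value is a made-up example, and how you inject it (metafields, theme snippet) depends on your setup:

```python
import json

# Example Product JSON-LD with the attribute fields discussed above.
# Property names follow schema.org; all values here are illustrative.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Insulated Water Bottle, 32 oz, Wide Mouth, Leak-Proof",
    "material": "18/8 stainless steel",   # often missing from theme defaults
    "color": "Slate Blue",
    "size": "32 oz",
    "offers": {
        "@type": "Offer",
        "price": "34.00",                 # exact price, not "from $X"
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_schema, indent=2))
```

The JSON output goes inside a `<script type="application/ld+json">` tag on the product page; Google's Rich Results Test will confirm whether the markup parses.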
One thing I keep seeing across audits: stores that invested in SEO copy (long, keyword-stuffed descriptions) often perform worse on AI recommendations than stores with shorter, more structured product data. The model doesn't need a 400-word story about your brand's philosophy. It needs to know the thread count.
Frequently Asked Questions
Does updating product descriptions actually change how AI recommends my store?
Yes, measurably. In our April 2026 test across 600 products, adding material composition percentages to descriptions lifted AI citation rates by an average of 34 percentage points on ChatGPT and Perplexity. The effect was visible within 48 to 72 hours of updating the live feed. AI assistants are continuously pulling and indexing product data, so changes propagate faster than traditional SEO updates.
Do all AI shopping platforms weight the same attributes?
Not exactly. ChatGPT shopping mode showed the strongest response to material specificity and use-case language in product titles. Perplexity weighted review count more heavily than the other platforms. Google AI Mode responded most strongly to structured data markup (schema) in addition to description quality. If you have to pick one focus, material composition and title specificity improved performance across all three platforms in our testing.
How often should I re-run this kind of attribute test?
At minimum, quarterly. The AI shopping platforms update their ranking and retrieval logic frequently. What worked in Q4 2025 may not work the same way in Q2 2026. We run a version of this experiment every 6 to 8 weeks across client accounts. The attribute weights shift, but specificity consistently outperforms vagueness. That part hasn't changed.
My products already have detailed descriptions. Why aren't they showing up?
Long descriptions don't automatically mean specific descriptions. Most product copy is written for human readers browsing a page, not for AI retrieval. Check whether your descriptions include percentage-based material data, exact measurements, and use-case language that matches how shoppers phrase shopping queries. If those elements are missing, the description length won't help. Run the test in this post to find your actual gaps.
Is product schema markup necessary if my descriptions are already strong?
For ChatGPT and Perplexity: less critical (they pull from live product pages and feeds). For Google AI Mode: yes, schema markup matters independently of description quality. Google's system uses structured data to confirm and categorize attributes at scale. Strong descriptions help, but missing schema means Google's AI features can't reliably extract and verify the data. Both matter on Google. On ChatGPT and Perplexity, start with descriptions and titles.
Ready to See Where Your Store Actually Stands?
We audit Shopify stores for AI visibility and identify exactly which product attributes are blocking recommendations across ChatGPT, Perplexity, Google AI Mode, and the growing ecosystem of AI shopping agents. If you want to know your score and what to fix first, start here.

