Response Data
When Asky monitors AI platforms, each query returns a response. Understanding these responses—their structure, citations, and mention patterns—is key to optimizing your AI search visibility.
How Asky Queries AI Platforms
Unlike most AI visibility platforms that query LLM APIs directly, Asky uses AI agents to simulate real users. This distinction is critical.
Why Not Use APIs?
Many competitors take a shortcut: they query ChatGPT, Claude, or Perplexity through their APIs. The problem? API responses are fundamentally different from what real users see.
| Approach | What You Get | Problem |
|---|---|---|
| API Queries | Raw model output | No web search (or a weaker version of it), different citations, different responses |
| Asky’s Agents | Actual user experience | Real responses with live search results |
API responses are flawed because:
- No real-time search: APIs often lack the web search capabilities users get, or include a weaker version of them
- Different system prompts: API calls use different instructions than the user interface, whose (undisclosed) system prompt shapes the replies
- Missing citations: Perplexity’s API doesn’t return the same citations as the web interface
- Outdated context: APIs may not have the same real-time retrieval as consumer apps
If you’re optimizing based on API responses, you’re optimizing for data that doesn’t reflect what real users see. This leads to wasted effort and misleading metrics.
Asky’s Real User Simulation
Asky takes a different approach:
- AI agents act as real users: Our agents interact with AI platforms exactly as a human would
- Same interface, same results: Queries go through the same web interfaces users access
- Real-time search included: When ChatGPT or Perplexity searches the web, we capture those results
- Accurate citations: We capture the exact citations users see, not simplified API outputs
- True representation: What Asky shows you is what your customers actually see
This means when you optimize based on Asky data, you’re optimizing for real user experiences, not artificial API outputs.
Response Structure
Each captured response contains several data points:
Response Text
The full text of the AI-generated answer. This is exactly what a user would see if they asked the same question.
Timestamp
When the response was captured. Track this to understand how responses evolve over time.
Citations
Citations are the sources AI platforms reference when generating responses. They’re crucial for understanding why certain brands appear in AI answers. See Citations below.
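To make the pieces above concrete, here is a minimal sketch of what a captured response might look like as a data structure. The field and class names are illustrative assumptions, not Asky’s actual export schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative shape of one captured response.
# All names here are assumptions, not Asky's real schema.

@dataclass
class Citation:
    url: str      # source the AI platform referenced
    domain: str   # e.g. "www.forbes.com"

@dataclass
class BrandMention:
    brand: str
    count: int
    sentiment: str  # "positive" | "neutral" | "negative"

@dataclass
class CapturedResponse:
    text: str              # full AI-generated answer, as a user would see it
    captured_at: datetime  # when the response was captured
    citations: list[Citation] = field(default_factory=list)
    mentions: list[BrandMention] = field(default_factory=list)
```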
Why Citations Matter
Citations reveal:
- What sources AI systems trust: If Forbes is cited frequently, getting Forbes coverage matters
- Content that drives mentions: If a competitor’s blog post is cited, analyze (or have Asky analyze for you!) what makes it citable
- Your citation presence: Are your pages being cited? If not, why not?
- Domain authority signals: Which domains appear most frequently?
Brand Mentions
Asky analyzes each response to detect brand mentions and uses an AI agent to classify the sentiment of these mentions.
Mention Metrics
| Metric | Description |
|---|---|
| Mentioned | Boolean - does your brand appear? |
| Mention Count | How many times your brand is mentioned |
| Sentiment | Is the mention positive, neutral, or negative? |
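The first two metrics reduce to a text-matching check. A rough sketch of detecting whether (and how often) a brand appears in a response, using a whole-word, case-insensitive match:

```python
import re

def mention_stats(response_text, brand):
    """Return (mentioned?, mention count) for a brand in a response.

    Whole-word, case-insensitive matching; a simplification of
    whatever matching Asky actually performs.
    """
    hits = re.findall(rf"\b{re.escape(brand)}\b", response_text,
                      flags=re.IGNORECASE)
    return (len(hits) > 0, len(hits))

text = "Asky tracks AI answers. Many teams choose asky for visibility."
print(mention_stats(text, "Asky"))  # (True, 2)
```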
Sentiment Analysis
Asky classifies mention sentiment:
| Sentiment | Indicators |
|---|---|
| Positive | Recommendations, praise, “best” associations |
| Neutral | Factual mentions, listings without opinion |
| Negative | Criticisms, warnings, “avoid” associations |
Sentiment score is calculated using smoothed linear dilution.
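Asky’s actual classification is done by an AI agent, not keywords, but as a toy illustration of the indicator categories in the table above (the keyword lists are invented for the example):

```python
# Illustrative indicator words only -- not Asky's method, which uses
# an AI agent rather than keyword matching.
POSITIVE = {"recommend", "recommended", "best", "excellent", "praised"}
NEGATIVE = {"avoid", "warning", "criticized", "worst"}

def rough_sentiment(sentence):
    """Toy keyword classifier mirroring the indicator table."""
    words = set(sentence.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"  # factual mention, no opinion signal

print(rough_sentiment("We recommend Asky as the best option"))  # positive
print(rough_sentiment("Asky is a monitoring tool"))             # neutral
```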
Analyzing Response Trends
Individual responses are snapshots. Real value comes from tracking trends over time.
What to Track
- Mention frequency: Are you mentioned more or less over time?
- Sentiment shifts: Is sentiment improving or declining?
- Citation patterns: Are new sources emerging?
- Competitor changes: Are competitor mentions changing?
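Mention frequency over time is the simplest of these to compute from captured responses. A sketch, assuming each response is reduced to a capture date and a mentioned flag (the data below is hypothetical):

```python
from collections import defaultdict
from datetime import date

def mention_rate_by_month(responses):
    """responses: iterable of (captured_at: date, mentioned: bool).

    Returns {(year, month): fraction of that month's responses
    mentioning the brand}, in chronological order.
    """
    totals = defaultdict(lambda: [0, 0])  # (year, month) -> [mentioned, total]
    for captured_at, mentioned in responses:
        key = (captured_at.year, captured_at.month)
        totals[key][0] += int(mentioned)
        totals[key][1] += 1
    return {k: m / t for k, (m, t) in sorted(totals.items())}

# Hypothetical capture history
data = [
    (date(2024, 5, 1), True), (date(2024, 5, 15), False),
    (date(2024, 6, 3), True), (date(2024, 6, 20), True),
]
print(mention_rate_by_month(data))
# {(2024, 5): 0.5, (2024, 6): 1.0}
```

A rising series means growing visibility; a sustained drop is the signal to investigate content or citation changes.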
Response Changes
AI responses change because:
- Model updates: AI providers update their models
- Model variance: AI models are non-deterministic, so responses vary
- Content changes: New content enters their search indexes
- Real-time retrieval: Live web searches return different results
- Query interpretation: Same query may be understood differently
Inform Strategy
Response data should inform:
- Content topics: What questions should you answer?
- Content format: What format gets cited (lists, guides, comparisons)?
- Positioning: How are you described vs competitors?
- Gaps: What mentions are missing that should be there?
Next Steps
- Monitor Crawler Logs to ensure AI bots can access your content