Brand Explorer
Brand Explorer is Writesonic's instant brand intelligence feature that gives you an immediate, no-setup-required view of how any brand — yours or a competitor's — appears across AI platforms such as ChatGPT, Claude, Perplexity, and Gemini.
It continuously analyzes millions of prompts from Writesonic's proprietary and acquired dataset, covering the US market and beyond, to reveal where and how a brand is mentioned in AI-generated responses. Unlike your project-based tracking, Brand Explorer is a research and discovery tool — the fastest way to benchmark brand visibility before setting up custom tracking.
🎯 What Brand Explorer is best for
- Instantly check how any brand appears in AI responses — no configuration needed
- Benchmark your AI visibility against competitors side by side
- Discover which prompts, topics, and industries drive the most AI mentions
- Identify citation sources and content gaps before running deeper analysis
- Use as a starting point to inform your custom prompt and topic strategy in GEO projects
2. Getting Started with Brand Explorer
Brand Explorer is accessible from the left navigation under the Research section. Running an analysis is straightforward and takes seconds.
How to Run an Analysis
- Brand name / Domain (Required) — Enter the brand name or domain you want to analyze. As you type, Brand Explorer suggests matching brands from its index so you can select quickly.
- Add Competitors (Optional) — Add competitor brands (up to the per-analysis limit) to run a side-by-side comparison within the same analysis.
- Industry / Topic / Keyword (Optional) — Narrow the analysis to a specific industry, topic, or keyword to focus results on the most relevant AI prompts for your use case.
Once you click Analyze, Brand Explorer processes the query against its dataset and returns a full report within seconds. Each analysis costs one search credit (your remaining credit count is shown at the bottom of the form). Previously run reports are saved under Your Reports for easy reference.
3. Navigating the Brand Explorer Report
Once an analysis is complete, the report is organized across five tabs, each giving a different lens on brand performance in AI.
| Tab | What it shows |
|---|---|
| Dashboard | High-level performance summary with key metrics, a competitive leaderboard, topic-level breakdowns, and top prompts driving visibility. |
| AI Answers | The actual AI-generated responses analyzed, with filters for mentions, citations, rank, intent, and more. |
| Citations | Content pages and domains that AI platforms are citing as sources in their answers. |
| Prompts | The full list of prompts analyzed — showing how each one performs across visibility, rank, citation share, and intent. |
| Related Prompts | Additional, topically relevant prompts from Writesonic's dataset that are not part of the core analysis set. |
4. Dashboard Tab
The Dashboard is your top-level performance view. It surfaces the three most important brand health metrics, a competitive leaderboard, topic-level competitor comparisons, and your highest-ranked prompts — all in one place.
4.1 Core Metrics
Three headline metrics sit at the top of the Dashboard. Each metric is calculated over the set of analyzed prompts and compared against competitor brands.
| Metric | Definition |
|---|---|
| Share of Voice | The percentage of AI answers where this brand appears compared to all competitors combined. Indicates how dominant a brand is in the competitive conversation across AI platforms. Formula: (Answers mentioning brand ÷ Total answers mentioning any competitor) × 100 |
| AI Visibility | The percentage of analyzed prompts where this brand is mentioned. A direct measure of how broadly the brand shows up across the prompt set — irrespective of competitors. Formula: (Answers mentioning brand ÷ Total answers analyzed) × 100 |
| Citation Share | The percentage of AI answers with citations where this brand's content is referenced as a source. Measures how often AI platforms use the brand's web content to support their responses. Formula: (Answers citing brand ÷ Total answers with citations) × 100 |
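To make the three formulas concrete, here is an illustrative sketch (invented sample data, not Writesonic's implementation), where each analyzed answer records which brands it mentions and which it cites:

```python
# Invented sample: each AI answer records the brands it mentions and cites.
answers = [
    {"mentions": {"Acme", "Rival"}, "cites": {"Rival"}},
    {"mentions": {"Acme"},          "cites": {"Acme"}},
    {"mentions": {"Rival"},         "cites": set()},
    {"mentions": set(),             "cites": set()},
]

def share_of_voice(answers, brand, competitors):
    # Answers mentioning the brand ÷ answers mentioning any brand in the set × 100
    brand_set = {brand} | set(competitors)
    competitive = [a for a in answers if a["mentions"] & brand_set]
    hits = sum(1 for a in competitive if brand in a["mentions"])
    return 100 * hits / len(competitive) if competitive else 0.0

def ai_visibility(answers, brand):
    # Answers mentioning the brand ÷ total answers analyzed × 100
    return 100 * sum(1 for a in answers if brand in a["mentions"]) / len(answers)

def citation_share(answers, brand):
    # Answers citing the brand ÷ answers with any citations × 100
    cited = [a for a in answers if a["cites"]]
    hits = sum(1 for a in cited if brand in a["cites"])
    return 100 * hits / len(cited) if cited else 0.0
```

With this sample, "Acme" has 50% AI Visibility (2 of 4 answers) but roughly 66.7% Share of Voice (2 of the 3 answers that mention any tracked brand), which is why the two metrics can diverge.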
4.2 Competitive Leaderboard
The Leaderboard ranks all brands in the analysis — the analyzed brand plus any added competitors — by AI Visibility and Citation Share. It gives a quick, scannable view of where a brand stands relative to the competitive set.
The analyzed brand is always highlighted for quick reference. Clicking through to individual brands shows their full Brand Explorer report.
Columns in the Leaderboard:
- Brand — Brands included in the competitive analysis.
- AI Visibility — Percentage of analyzed prompts where each brand is mentioned.
- Citation Share — Percentage of cited AI answers that reference content from each brand.
4.3 Competitor Visibility Across Topics
This section breaks down how each brand performs across individual topics — subject categories that group related prompts together. You can switch between analyzing by Topic or by another available dimension, and toggle between AI Visibility and Citation Share as the metric.
This view is especially useful for identifying competitive blind spots — topics where competitors rank high but your brand is underrepresented — and for prioritizing content investment accordingly.
4.4 Top Prompts Mentioning the Brand
A ranked table of the prompts where the analyzed brand scores highest in AI visibility. Each row shows the exact prompt, its AI search volume, the brand's rank in the AI answer, all brands mentioned alongside it, and its associated topic.
This is a fast way to understand which queries are already working in your favor — and which competitor brands tend to appear alongside you in those answers.
5. AI Answers Tab
The AI Answers tab gives you a row-by-row view of each AI-generated response analyzed. It lets you drill into which specific answers mention or cite the brand, what rank the brand appears at, and which other brands co-appear in those answers.
Filters available include: AI Platform, Market, Topic, Brands mentioned, Prompt type, and more — allowing you to focus on the most relevant slice of answers.
Column Reference — AI Answers
| Column | Definition |
|---|---|
| Prompt | The exact prompt used to query AI systems and test brand response. |
| Answer | A preview of the AI-generated response. Click to view the full answer, including which brands are mentioned or cited within it. |
| Mentions [Brand] | A checkmark indicates whether this brand appears in the AI answer to this prompt. |
| Cites [Brand] | A checkmark indicates whether this brand's content is cited as a source or reference in the AI answer. |
| Rank | The position at which this brand appears in the AI answer for this prompt. |
| Brands Mentioned | All brands appearing in the AI answer for this prompt, including the analyzed brand and competitors. |
| Market | The geographic region and language context in which this prompt was tested. |
| Topic | The subject category or theme this prompt belongs to — used for grouping and filtering. |
| Industry | The business sector or industry category this prompt belongs to. |
| Answered On | The date the AI answer was generated and analyzed. Answers are periodically refreshed to reflect the most current AI responses. |
6. Citations Tab
The Citations tab shows which specific web pages and domains AI platforms are citing as sources in their answers. This is critical for understanding which content AI systems trust in your competitive space — and how your own content compares to competitors' and third-party sources.
You can switch between viewing cited Webpages and Domains. Filters include: AI Platform, Market, Topic, Prompt type, Source type, and Industry.
Column Reference — Citations
| Column | Definition |
|---|---|
| Cited Page | A specific web page that AI platforms have cited as a source in their answers. |
| Source Type | Identifies whether the cited page belongs to the analyzed brand, a competitor, or a third party (e.g., a media outlet, review site, or aggregator). |
| Citation Share | The percentage of all AI answers (with citations) that reference this specific page. Formula: (Answers citing this page ÷ Total answers with citations) × 100. |
| Citing Answers | The total number of AI answers that cite this page across the analyzed prompt set. |
| Platform | Which AI models — ChatGPT, Claude, Gemini, etc. — have cited this page. Different models may prefer different sources. |
| Topic | The subject or topic of the user query that prompted the AI answer where this page was cited. |
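Per-page Citation Share follows the same pattern as the brand-level formula. A minimal sketch, assuming (hypothetically) that each analyzed answer lists the URLs it cites:

```python
from collections import Counter

# Hypothetical data: the pages each analyzed AI answer cites (may be empty).
answer_citations = [
    ["https://example.com/guide", "https://rival.com/blog"],
    ["https://example.com/guide"],
    [],
]

# Only answers that cite at least one source form the denominator.
with_citations = [cites for cites in answer_citations if cites]

# Citation Share per page = answers citing the page ÷ answers with citations × 100.
page_hits = Counter(url for cites in with_citations for url in set(cites))
citation_share = {
    url: 100 * hits / len(with_citations) for url, hits in page_hits.items()
}
```

Note that an answer citing a page twice still counts once, since the formula is defined over answers, not individual citation links.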
7. Prompts Tab
The Prompts tab provides a comprehensive view of every prompt in the analysis — showing how each one performs across all key GEO metrics. This is the most data-rich view in Brand Explorer, designed for teams that want to go beyond summary metrics and understand performance at the prompt level.
Results are based on up to 15,000 analyzed prompts depending on the breadth of the analysis. Filters include: AI Platform, Market, Topic, Brands mentioned, Prompt type, and Industry.
Column Reference — Prompts
| Column | Definition |
|---|---|
| Prompt | The question or query sent to AI platforms — shows how the brand performs when users ask specific questions. |
| Market | A combination of geographic region and language in which this prompt is tested. Each prompt runs in one specific market. |
| AI Search Volume | Average monthly searches for this prompt over the latest 12 months — an indicator of how often real users are asking this question. |
| AI Answers | Total number of AI answers analyzed for this prompt — the sample size used to calculate visibility, citation share, rank, and other metrics. |
| AI Visibility | Percentage of AI answers for this prompt that mention the analyzed brand. Formula: (Responses mentioning brand ÷ Total responses for this prompt) × 100. |
| Avg. Rank | The brand's average position when it is mentioned in AI responses to this prompt, calculated as the mean of all ranking positions across analyzed responses. |
| Citation Share | Percentage of AI answers for this prompt that cite pages from the brand's domain. Formula: (Answers citing brand's domain ÷ Total answers with citations for this prompt) × 100. |
| Brands Mentioned | A list of all brands appearing in AI-generated responses to this prompt, including the analyzed brand and competitors. |
| Intent | Classification of the user's underlying motivation when asking this prompt — whether they are looking for a specific brand or destination (N), comparing options (C), ready to act or buy (T), or seeking information (I). |
| Industry | The business sector or industry this prompt belongs to. |
| Topic | The subject category or theme this prompt belongs to — topics help group related prompts for easier analysis and tracking. |
| Platform | The AI models (e.g., ChatGPT, Claude, Gemini) to which these prompts were sent. |
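The per-prompt math above can be sketched as follows (invented data, not the product's internals), treating each analyzed answer as the brand's rank in that answer, or no rank when the brand is absent:

```python
# Invented sample: the brand's rank in each of five AI answers for one prompt;
# None means the brand was not mentioned in that answer.
ranks = [1, 3, None, 2, None]

mentioned = [r for r in ranks if r is not None]

# AI Visibility = responses mentioning the brand ÷ total responses × 100
ai_visibility = 100 * len(mentioned) / len(ranks)

# Avg. Rank = mean position across only the answers where the brand appears
avg_rank = sum(mentioned) / len(mentioned)
```

Here the brand appears in 3 of 5 answers (60% AI Visibility) at an average position of 2.0; answers that omit the brand lower visibility but do not affect Avg. Rank.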
8. Related Prompts Tab
The Related Prompts tab surfaces additional prompts from Writesonic's dataset that are topically relevant to the analyzed brand but not necessarily part of the core analysis set. This is a discovery layer that helps you identify prompt opportunities you may be missing in your current GEO tracking setup.
Use this tab to expand your prompt coverage — you can directly add high-opportunity prompts from here into your tracked project.
9. Saving and Managing Reports
Every Brand Explorer analysis is automatically saved to Your Reports on the configuration page. Reports are listed with the brand name, topic (if specified), and the last update timestamp.
- Revisiting a report — Click any saved report to reopen it at any time without spending additional search credits.
- Deleting a report — Reports can be deleted from the listing using the delete (trash) icon.
- Updating a report — Re-running an analysis for the same brand refreshes the data against the latest prompt dataset.
- Exporting data — Use the download icons (available on most tables and charts within the report) to export data for offline reporting and presentations.
10. Understanding Intent Classification
Each prompt in Brand Explorer is tagged with an intent classification that reveals the user's likely motivation when asking the question. Understanding intent helps prioritize which prompts to act on first.
| Intent Tag | What it means |
|---|---|
| N — Navigational | The user is looking for a specific brand, website, or destination. High commercial relevance for brand-specific queries. |
| I — Informational | The user is researching or seeking to understand a topic. Opportunity for thought leadership and educational content. |
| C — Commercial | The user is comparing options or evaluating solutions before making a decision. High purchase intent — ideal for comparison content. |
| T — Transactional | The user is ready to act or buy. The highest-value prompt category for conversion-focused content. |
11. Brand Explorer vs. GEO Project Tracking
Brand Explorer and Writesonic's project-based GEO tracking serve complementary but distinct purposes. Understanding the difference helps teams use the right tool for the right job.
| | Brand Explorer | GEO Project Tracking |
|---|---|---|
| Setup required? | No — instant results | Yes — configure topics, prompts, competitors |
| Purpose | Discover & benchmark | Monitor & optimize over time |
| Data freshness | Snapshot from Writesonic's dataset | Continuously tracked and updated |
| Custom prompts | Not applicable | Yes — add your own prompts |
| Best for | Research, prospect demos, competitive intel | Ongoing GEO strategy and optimization |
12. For Sales, Marketing & Customer Success Teams
Using Brand Explorer in Sales Conversations
- Run a live Brand Explorer analysis on a prospect's brand during a discovery call for immediate visual impact, with no setup required.
- Pull up a competitor comparison to demonstrate how their top competitors are outranking them in AI responses.
- Use the Prompts tab to show the prospect exactly which high-volume queries they're missing visibility on.
- Show the Citations tab to highlight third-party content that is outranking their own pages — positioning content optimization as an urgent need.
Using Brand Explorer in Customer Success & QBRs
- Use Brand Explorer to deliver a competitive benchmarking report during QBRs without pulling from the customer's tracked project budget.
- Cross-reference Brand Explorer data with the customer's GEO project metrics to show progress and gaps.
- Surface new competitor threats by running Brand Explorer on emerging competitors the customer may not be tracking yet.
Using Brand Explorer in Marketing
- Generate data-backed insights for thought leadership content, reports, and case studies using Brand Explorer's competitive analysis.
- Use Brand Explorer to validate messaging claims with real AI visibility data.
- Run Brand Explorer on target accounts for ABM campaigns to personalize outreach with AI visibility insights.
13. Frequently Asked Questions
What is the difference between AI Visibility and Share of Voice?
AI Visibility measures how broadly a brand appears across all analyzed prompts — it's calculated against the total prompt set, not competitors. Share of Voice is a relative metric — it measures what percentage of competitive mentions belongs to the analyzed brand versus all competitors combined. A brand can have high AI Visibility but lower Share of Voice if competitors appear even more frequently.
What is the difference between a mention and a citation?
A mention means the brand name appears in an AI-generated response. A citation means the AI platform actively referenced or linked to the brand's web content as a source for its answer. Mentions reflect brand awareness in AI; citations reflect content authority. Both matter, but for different reasons.
Does running Brand Explorer count against my project's prompt tracking quota?
No. Brand Explorer uses a separate search credit system. Each analysis costs one search credit. Your GEO project tracking runs independently and is unaffected by Brand Explorer usage.
How current is the data in Brand Explorer?
Brand Explorer draws from Writesonic's continuously updated proprietary dataset. AI answers within the report are periodically refreshed to capture the most current AI responses, as reflected in the 'Answered On' column of the AI Answers tab.
Can I analyze any brand — including competitors I don't track in my project?
Yes. Brand Explorer can analyze any brand, including brands not configured in your GEO project. This makes it ideal for ad-hoc competitive research without modifying your project settings.
What does 'Source Type' mean in the Citations tab?
Source Type categorizes the origin of a cited page: Your Brand (content from the analyzed brand's domain), Competitor (content from a competitor's domain), or Third Party (content from an independent source such as a media outlet, review site, or directory). Knowing which source types AI platforms prefer in your space helps you prioritize content and partnership strategies.
What AI platforms are covered in Brand Explorer?
Brand Explorer analyzes AI responses from major platforms including ChatGPT, Claude, Perplexity, and Gemini. Coverage may vary depending on the prompt type and market. You can filter results by platform within any tab.
How is Avg. Rank calculated in the Prompts tab?
Avg. Rank is the arithmetic mean of all ranking positions the brand receives across analyzed AI responses for a given prompt. A lower number (e.g., Rank 1) means the brand is mentioned earlier and more prominently in AI answers.
Can I export Brand Explorer data?
Yes. Most tables and charts within a Brand Explorer report include a download icon that lets you export data as a CSV or image for use in reports, presentations, and stakeholder communications.
What markets does Brand Explorer cover?
Brand Explorer covers the US market as its primary dataset, with results available across multiple geographic and language combinations. The 'Market' column in the Prompts and AI Answers tabs shows the specific market context for each prompt.
What is intent classification and how should I use it?
Intent classification reveals the user's underlying goal when asking a prompt — Navigational (N), Informational (I), Commercial (C), or Transactional (T). Use intent tags to prioritize your GEO efforts: Transactional and Commercial prompts have the highest purchase-intent value, while Informational prompts are best addressed through thought leadership content.
How is Brand Explorer different from Writesonic's GEO project tracking?
Brand Explorer is a discovery and benchmarking tool — it delivers instant results from Writesonic's existing dataset with no setup required. GEO project tracking is your ongoing monitoring layer — it continuously tracks custom prompts, topics, and competitors over time with fresh data collection. The two are complementary: use Brand Explorer to discover and benchmark, then use project tracking to monitor and optimize.
For customer-facing queries, refer to [email protected].
