Documentation

Prompts Dashboard

The Prompts dashboard manages the prompts that are run in Spyglasses AI Visibility Reports. At Spyglasses we call these discovery queries, because they represent how potential customers discover solutions: the questions they ask AI chat assistants when they are actively looking for products, solutions, or services. The broader AI visibility industry often refers to them simply as prompts; throughout Spyglasses, the two terms are used interchangeably.

What You'll Learn

In this guide, you'll learn:

  • How discovery queries work
  • How queries are generated
  • How to edit and customize queries
  • How to manage your query library
  • How to generate prompts using AI-powered frameworks
  • How to import prompts in bulk from a CSV or pasted list
  • How to measure coverage across the buyer journey and your brand's dimensions
  • How to spot and resolve redundant prompts
  • How to check a new prompt for overlap before saving it

How Discovery Queries Work

When you run an AI Visibility Report, Spyglasses takes your discovery queries and runs them across multiple AI platforms (ChatGPT, Claude, Gemini, Perplexity, Copilot). The report then analyzes which brands get mentioned, cited, and recommended.

If you run a report and there are existing queries here, we use them. If not, we generate five queries by default based on your brand snapshot.

Query Generation

Discovery queries are generated based on the Category Entry Point methodology—understanding how buyers research solutions before they know specific brands.

What Informs Query Generation

Queries are constructed using data from your brand snapshot:

  • Category: What type of product/service you offer
  • ICP (Ideal Customer Profile): Who your target customers are
  • Features: What capabilities you provide
  • Problems Solved: What challenges you address

Query Types We Generate

Each query type represents a different buyer intent pattern:

| Query Type | Description | Example |
| --- | --- | --- |
| category_best | Best-in-class searches | "What are the best CRM solutions for small businesses?" |
| use_case_solution | Problem-first searches | "What software helps with customer data management?" |
| budget_constrained | Price-sensitive searches | "Affordable project management tools for startups" |
| solution_comparison | Comparison searches | "Compare different email marketing platforms" |
| segment_focused | Niche audience searches | "Best analytics tools for healthcare companies" |
| local_services | Location-specific searches | "Best marketing agencies in Austin, Texas" |
| local_comparison | Local provider comparisons | "Compare web designers in Chicago" |
| local_reviews | Review-focused local searches | "Top rated accountants near Miami" |

Why These Query Types

These represent high-intent queries at the research stage. Buyers asking these questions are actively evaluating solutions and are most likely to be influenced by AI recommendations.

Generating Prompts with AI Frameworks

Beyond the default query types, the Generate Prompts feature lets you create a larger set of prompts powered by multiple marketing and buyer-research frameworks:

  • Standard AI Visibility — the classic buyer-intent query types listed above.
  • Category Entry Points — prompts based on the triggers and cues that cause buyers to enter your category, the moments before they think about any brand.
  • Jobs to Be Done — prompts framed around the functional, emotional, and social jobs your buyers are trying to accomplish.
  • Buyer's Journey — prompts spanning Awareness, Consideration, and Decision stages, from problem discovery to final vendor selection.
  • Stakeholder Perspectives — prompts written from the point of view of different stakeholders involved in the buying decision (e.g., CIO, CFO, end user, parent, partner).

Select one or more frameworks, and Spyglasses will generate up to 25 prompts per framework using your brand snapshot. Generated prompts are saved to this dashboard and can be exported as CSV.

Importing Prompts in Bulk

If you already have a list of prompts — for example, from another AI visibility tool or from a spreadsheet your team maintains — you can import them all at once using the Import button.

Paste a List

  1. Click Import in the toolbar.
  2. Select the Paste tab.
  3. Paste your prompts into the text area. Supported formats:
    • One prompt per line — the simplest option.
    • Comma-separated — multiple prompts on a single line, separated by commas.
    • Tab-separated — multiple prompts on a single line, separated by tabs.
    • CSV with headers — paste the full contents of a CSV and Spyglasses will detect the columns automatically.
  4. Click Parse Prompts to preview them.
  5. Remove any you don't want, then click Import.
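As a rough illustration, the plain-paste rules above can be sketched in Python. This is an illustrative sketch only, not Spyglasses' actual implementation, and `parse_pasted_lines` is a hypothetical helper name:

```python
def parse_pasted_lines(text: str) -> list[str]:
    """Sketch of the Paste-tab rules: one prompt per line, or several
    prompts on one line separated by tabs or commas."""
    prompts: list[str] = []
    for line in text.strip().splitlines():
        # Prefer tab splitting; fall back to commas; else take the whole line.
        sep = "\t" if "\t" in line else ("," if "," in line else None)
        parts = line.split(sep) if sep else [line]
        prompts.extend(p.strip() for p in parts if p.strip())
    # Entries shorter than 3 characters are skipped.
    return [p for p in prompts if len(p) >= 3]
```

For example, pasting `"best CRM tools\ntool A, tool B"` would yield three separate prompts.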

Upload a CSV File

  1. Click Import in the toolbar.
  2. Select the Upload CSV tab.
  3. Click the upload area and select a .csv, .tsv, or .txt file.
  4. Spyglasses will parse the file and preview the prompts it found.
  5. Review and click Import.

CSV Column Mapping

Your file must have at least one recognized column for the prompt text. All other columns are optional.

| Column Name(s) | Maps to | Required |
| --- | --- | --- |
| prompt, query, discovery query, text | Prompt text | Yes |
| type, query type | Query type (e.g. category_best) | No |
| category | Product/service category | No |
| segment | Target audience | No |
| use case | Specific problem being solved | No |
| location | Geographic context | No |
| budget | Price context | No |

Column names are matched case-insensitively and support underscores (e.g. discovery_query) or spaces (e.g. discovery query).

If you have previously exported prompts from Spyglasses as CSV, the exported file uses these same column names and can be re-imported directly into another property.

Prompts without a query type default to category_best. Each prompt must be at least 3 characters long; shorter entries are skipped.
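The column-matching rules might look like the following sketch. It is illustrative only; `COLUMN_MAP` and the normalized field names are assumptions for the example, not Spyglasses internals:

```python
# Normalized column name -> field it populates (sketch; only the
# prompt-text field is required).
COLUMN_MAP = {
    "prompt": "prompt", "query": "prompt",
    "discovery query": "prompt", "text": "prompt",
    "type": "query_type", "query type": "query_type",
    "category": "category", "segment": "segment",
    "use case": "use_case", "location": "location", "budget": "budget",
}

def normalize_header(name: str) -> str:
    # Case-insensitive; underscores and spaces are interchangeable.
    return name.strip().lower().replace("_", " ")

def map_columns(header: list[str]) -> dict[int, str]:
    """Map each recognized column index to its field; ignore the rest."""
    return {
        i: COLUMN_MAP[normalize_header(col)]
        for i, col in enumerate(header)
        if normalize_header(col) in COLUMN_MAP
    }
```

So a header row like `Discovery_Query, Type, Notes` maps its first two columns and silently ignores `Notes`.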

Viewing Your Queries

The main table shows all discovery queries with:

| Column | Description |
| --- | --- |
| Query | The prompt text |
| Type | Query type (category_best, etc.) |
| Category | Product/service category |
| Segment | Target audience |
| Location | Location context (if any) |
| Status | Active or inactive |
| Actions | Edit, deactivate, delete |

Editing Queries

Click on a query or the edit icon to open the edit panel. You can modify:

Query Text

The actual prompt that will be run. This directly influences what AI platforms are asked. Make sure it reflects how real customers would search.

Location

For location-specific queries, set the geographic context. Location is embedded in the prompt and affects AI responses.

Metadata Fields

You can also set or update:

  • Category: Product/service category
  • Target Segment: Customer type (SMB, Enterprise, etc.)
  • Use Case: Specific problem being solved
  • Budget Range: Price context

Important: Only the query text and location actually influence the AI prompt. The other fields (category, segment, use case, budget) are for your own classification and analysis. They don't change how AI responds.

Checking a New Prompt for Overlap

When you create or edit a prompt, the dialog includes a Check for overlap button below the query text. Clicking it compares your new query to every active prompt on the property using embedding similarity, so you know whether you're adding something genuinely new or duplicating existing coverage.

You'll see one of three verdicts:

  • Likely duplicate (similarity ≥ 90%) — the new prompt is essentially a restatement of an existing one. Consider editing the original instead of adding a new prompt.
  • Near-duplicate (80–90%) — substantially overlapping. Sometimes intentional (e.g., testing a rephrasing), often redundant.
  • Novel (< 80%) — distinct from anything in your library; safe to add.

Along with the verdict, the top three nearest existing prompts are shown with their similarity scores, query types, and buyer-stage tags. Use them to decide whether to add the new prompt, rewrite it, or drop it in favor of editing an existing one.

The overlap check is not a hard gate — you can save the prompt regardless of the verdict. It's a second opinion meant to keep your library concise.
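For intuition, the verdict thresholds can be expressed as a small sketch. This is illustrative only: the real check compares embedding vectors produced by an embedding model, which the plain-Python cosine function below merely stands in for:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def overlap_verdict(similarity: float) -> str:
    """Verdict thresholds from the dashboard: >= 90% likely duplicate,
    80-90% near-duplicate, below 80% novel."""
    if similarity >= 0.90:
        return "likely_duplicate"
    if similarity >= 0.80:
        return "near_duplicate"
    return "novel"
```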

Activating and Deactivating Queries

Deactivating

If you want to stop using a query but keep the historical data:

  1. Click edit on the query
  2. Toggle "Active" off or click "Deactivate"
  3. Save

Deactivated queries:

  • Won't be used in future reports
  • Retain all historical data
  • Can be reactivated later

Reactivating

To bring back a deactivated query:

  1. Show inactive queries (filter)
  2. Click edit
  3. Toggle "Active" on
  4. Save

Deleting Queries

To permanently remove a query:

  1. Click the delete icon
  2. Confirm deletion

Warning: Deleting a query removes its data from this dashboard. However, it will still appear in historical reports where it was used.

Measuring Coverage & Reducing Redundancy

As your prompt library grows — whether through generation, imports, or hand-crafting — two questions become hard to answer by eye:

  1. Coverage: are we tracking prompts across the full range of buyer scenarios, or are we concentrated in one corner of the market?
  2. Redundancy: are any of our prompts near-duplicates that consume nightly runs without adding signal?

The Coverage tab on this dashboard answers both in one view.

The Coverage Matrix

The matrix is a grid:

  • Rows — buyer-journey stages: Awareness, Consideration, Decision.
  • Columns — dimensions pulled from your brand snapshot: up to 5 customer segments and up to 5 problems you solve.
  • Cells — how many of your active prompts cover that (stage × dimension) intersection.

Cells are color-coded so you can spot gaps at a glance:

| Color | Meaning |
| --- | --- |
| Green | 2 or more prompts cover the cell. Well covered. |
| Amber | 1 prompt covers it. Thin but present. |
| Red (+ Gap) | No prompts cover it. A coverage gap. |
| Blue (spinner) | A newly generated prompt is being embedded; the cell refreshes automatically once it completes. |

Click a non-empty cell to jump to the Prompts tab filtered to exactly the prompts covering that cell. Handy when you have a cluster of similar prompts and want to review or prune them.

Click a red gap cell to generate a new prompt targeting that specific (stage × dimension) — see Filling a Gap below.
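The color rules above can be summarized in a small sketch (illustrative only; `cell_color` is a hypothetical helper name):

```python
def cell_color(prompt_count: int, embedding_pending: bool = False) -> str:
    """Matrix cell colors: blue while an embedding is in flight,
    otherwise red (0 prompts), amber (1), or green (2+)."""
    if embedding_pending:
        return "blue"
    if prompt_count == 0:
        return "red"
    if prompt_count == 1:
        return "amber"
    return "green"
```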

The Coverage Score

Above the matrix, the Coverage score card shows a composite 0–100 score and the three sub-scores that feed it:

| Sub-score | Weight | What it measures |
| --- | --- | --- |
| Matrix breadth | 60% | Share of stage × dimension cells with at least one prompt. Dominant because gaps in the matrix are the clearest signal of weak coverage. |
| Stage coverage | 25% | Prompts per buyer-journey stage vs. minimum targets (2 awareness, 2 consideration, 1 decision), weighted by purchase intent. |
| Framework spread | 15% | How many of the marketing frameworks (Standard, Category Entry Points, JTBD, Buyer's Journey, Stakeholder Perspectives) are represented. |

Each bar shows its percentage, weight, and points contribution to the composite. If the score looks low, the sub-score that's red or amber tells you where to focus.
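Given the weights above, the composite can be sketched as follows (assuming each sub-score is expressed as a 0-1 fraction, which is an assumption about the internal representation):

```python
def coverage_score(matrix_breadth: float, stage_coverage: float,
                   framework_spread: float) -> float:
    """Composite 0-100 score using the documented weights:
    60% breadth, 25% stage coverage, 15% framework spread.
    Sub-scores assumed to be 0-1 fractions."""
    return 100 * (0.60 * matrix_breadth
                  + 0.25 * stage_coverage
                  + 0.15 * framework_spread)
```

For example, a library covering half the matrix cells (0.5) with stage coverage 0.4 and framework spread 0.2 would score 100 × (0.30 + 0.10 + 0.03) = 43.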

Why Decision-Stage Is Weighted Highest

Within the stage sub-score, decision-stage prompts count 3× awareness (awareness=1, consideration=2, decision=3). A buyer asking "which [category] supports [specific constraint] under [budget]?" is closer to a purchase than one asking "what is [category]?" — and visibility in those decision-stage moments has higher commercial value. Adjusting prompts to show up at that stage pays off more in pipeline than filling awareness coverage.
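The stage weighting amounts to a weighted count (illustrative sketch of the documented 1/2/3 weights):

```python
STAGE_WEIGHTS = {"awareness": 1, "consideration": 2, "decision": 3}

def weighted_stage_count(prompts_by_stage: dict[str, int]) -> int:
    """One decision-stage prompt counts as much as three awareness prompts."""
    return sum(STAGE_WEIGHTS[stage] * n for stage, n in prompts_by_stage.items())
```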

Prioritized Gaps

Below the matrix, the Prioritized gaps card lists empty cells sorted by buyer-stage weight. High-priority gaps are at the decision stage; medium at consideration; low at awareness. Each row tells you the stage, dimension, and a suggested next step.

Spotting Redundant Prompts

The Redundant prompt clusters card groups semantically similar prompts. Two prompts are clustered when their cosine similarity — after a small penalty for prompts at different buyer stages — is ≥ 0.88. Members of a cluster are effectively testing the same buyer intent, consuming nightly runs for overlapping signal.

When you see a cluster, consider:

  • Keeping the most specific or best-worded prompt.
  • Deactivating the others (see Activating and Deactivating Queries).
  • Or leaving them alone if they're intentionally testing the same intent at different buyer stages — the stage-divergence penalty already discounts cross-stage pairs, so flagged clusters are genuinely close.
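The clustering rule can be sketched as follows. This is illustrative: the 0.05 stage-divergence penalty is an assumed value, since the documentation only states that a small penalty is applied:

```python
def are_redundant(similarity: float, stage_a: str, stage_b: str,
                  penalty: float = 0.05, threshold: float = 0.88) -> bool:
    """Two prompts cluster when cosine similarity, after a small penalty
    for differing buyer stages, is still >= 0.88. The 0.05 penalty is an
    assumed value for illustration."""
    adjusted = similarity - (penalty if stage_a != stage_b else 0.0)
    return adjusted >= threshold
```

Under these assumed numbers, a 0.90-similar pair clusters at the same stage but not across stages, while a 0.95-similar pair clusters either way.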

Filling a Gap

Click any red "+ Gap" cell and a dialog opens to generate a single prompt targeting exactly that (stage, dimension). Under the hood, Spyglasses combines your brand snapshot with stage-specific language guidance:

  • Awareness — problem-discovery language ("what is...", "how do I solve...", "what are the main challenges with...").
  • Consideration — evaluation and comparison language ("best X for Y", "compare approaches to...").
  • Decision — specific-criteria language ("which X offers Y for my budget", "drop-in replacement for Z").

You can:

  • Save — the prompt is persisted and begins embedding in the background. The cell turns blue with a spinner while embedding completes (usually 1–2 seconds), then fills with the new count and the appropriate color. Both the cell and the top-level coverage score update together once the embedding lands.
  • Regenerate — don't love the phrasing? Click Regenerate to produce a different query for the same gap.
  • Cancel — close without saving.

The generated prompts are always tagged with the Buyer's Journey framework and the matching buyers_journey_* query type, so they show up consistently in your filters and framework breakdown.

Best Practices

Match Real Customer Behavior

The default queries represent our best understanding of high-intent user queries at the research stage. But you know your brand best.

If these queries don't reflect how you expect your target customers to search in AI assistants:

  • Edit them to match real customer language
  • Add queries based on sales conversations
  • Remove queries that don't match your audience

Need Help?

If you need help coming up with ideas, reach out to support@spyglasses.io. We're happy to help brainstorm and offer suggestions at no charge.

Quality Over Quantity

Five to ten well-crafted queries provide more value than 50 mediocre ones. Focus on:

  • Queries your prospects actually ask
  • High-intent buying signals
  • Topics where you can win

The Coverage tab makes this concrete: a library of 50 prompts concentrated in one matrix cell scores worse than a library of 12 prompts that span the matrix. Use the Redundant prompt clusters card to find and prune duplicates, and the Prioritized gaps list to guide what to add next.

Review After Reports

After each AI Visibility Report, review which queries performed well and which didn't. Refine your library based on results.

Include Competitor Comparisons

By default, Spyglasses' queries focus on the category as a whole, but if you want to see how you are positioned against a specific competitor, you can add queries like:

  • "Alternatives to [Competitor]"
  • "[Your Category] like [Competitor] but cheaper"

Query Limits

With any Spyglasses plan you can manage up to ten queries per property.