Bug Report Enrichment: What Metadata Developers Need

Bug Report Enrichment
The automatic capture and AI classification of technical metadata at the moment feedback is submitted — element identity, computed styles, accessibility data, viewport context, screenshot, and AI triage — transforming a user's one-line comment into a structured engineering report.

Why It Matters

Most bug reports arrive as text: "it's broken." The developer then spends 10-30 minutes reproducing: finding the element, opening DevTools, checking styles, testing viewports. The report contains the complaint but not the context. The developer has to reconstruct everything the reporter could see but did not capture.

Enrichment captures all of that at the moment of feedback — before the user even finishes typing. When the comment is submitted, the element identity, computed styles, accessibility data, viewport dimensions, and a cropped screenshot are already attached. The AI layer adds classification, urgency, and a developer triage with likely causes and where to look.

This is the capture mechanism that prevents The Feedback Collapse. Even when feedback must travel to Linear or Slack, the full context stack travels with it. The issue in Linear carries the same signal as the comment on the element.

The Context Stack

Every piece of feedback automatically captures six layers of metadata. No manual input required beyond clicking the element and typing a comment.

The 6-Layer Context Stack
| Layer | What's Captured | Why It Matters |
| --- | --- | --- |
| Element Identity | Tag, text, role, CSS selector, `data-feedback-id` | Developer knows exactly which element |
| Computed Styles | Colors, fonts, spacing, z-index, dimensions | Visual bugs described with actual CSS values |
| Accessibility | Contrast ratio, ARIA attributes, WCAG compliance | Accessibility violations caught automatically |
| Viewport | Screen dimensions, device type, scroll position | "Works on my machine" eliminated |
| Screenshot | Cropped viewport with element bounding box overlay | Visual proof alongside structured data |
| AI Classification | Category/intent, urgency, interpretation, developer triage | Developer reads a diagnosis, not a complaint |

Together, these six layers transform a one-line comment into a structured engineering report. The user says "this looks wrong" — the system captures why it looks wrong, what CSS values are involved, whether it fails accessibility standards, and what viewport the user was on.
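The contrast check in the accessibility layer follows the standard WCAG 2.x formula: linearize each sRGB channel, compute relative luminance, and take the ratio of the lighter to the darker luminance (AA requires 4.5:1 for normal text). A minimal sketch, with illustrative function names rather than the product's actual implementation:

```typescript
// WCAG 2.x relative luminance: linearize an sRGB channel (0-255).
function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of a hex color like "#6B6860".
function relativeLuminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const [r, g, b] = [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

// Contrast ratio (lighter + 0.05) / (darker + 0.05); 21:1 is the maximum.
function contrastRatio(fg: string, bg: string): number {
  const [l1, l2] = [relativeLuminance(fg), relativeLuminance(bg)].sort(
    (a, b) => b - a
  );
  return (l1 + 0.05) / (l2 + 0.05);
}
```

A captured pair of computed `color` and `background` values is enough input to flag an AA failure automatically, with no screenshot analysis required.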

Manual Bug Report vs. Enriched Report

Manual vs. Enriched Bug Reports
| Aspect | Manual Bug Report | Enriched Report (Lay) |
| --- | --- | --- |
| Element identification | The button on the checkout page | `<button data-testid='checkout-submit'>` at CSS path `main > form > button:nth-child(3)` |
| Visual context | It looks wrong | `font-size: 14px`, `color: #6B6860`, `background: #FAFAF7`, contrast: 3.0:1 (fails AA) |
| Reproduction conditions | On my phone | iPhone 12 Pro Max, 390x844, iOS Safari, scroll position 420px |
| Screenshot | User-captured, possibly stale | Auto-captured viewport with element bounding box overlay |
| Classification | Unclassified — developer guesses priority | AI: bug_report, urgency: high, category: accessibility |
| Developer triage | None — developer starts from scratch | Low contrast between text and background. Likely causes: contrast not tested on mobile breakpoint, theme variable override. Where to look: media queries, contrast ratio in theme vars. |
| Time to action | 10-30 minutes (reproduction + investigation) | 2-3 minutes (read diagnosis, go to code) |

The difference is not just more data — it is actionable data. A manual report requires the developer to start an investigation. An enriched report delivers the investigation results alongside the complaint. Time to action drops from 10-30 minutes to 2-3 minutes.

How Enrichment Works

Enrichment Pipeline
1. Capture: client-side metadata in milliseconds
2. Store: comment saved with element_metadata
3. Enrich: async AI classification + triage
4. Sync: full context to Linear/Slack

Capture

Client-side: captureElementMetadata() collects all six layers in milliseconds via getComputedStyle(), DOM traversal, and viewport measurement. No network calls. The metadata is ready before the user finishes typing their comment.
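The selector-capture part of that DOM traversal amounts to walking from the clicked element to the root and recording each ancestor's tag and sibling position. A simplified, framework-free sketch over a plain data shape (the real captureElementMetadata() internals are not shown here; these names are illustrative):

```typescript
// One step of the walk: the tag name and the element's 1-based
// position among its parent's element children.
interface PathStep {
  tag: string;
  index: number; // 1-based nth-child position
}

// Build a selector like "main > form > button:nth-child(3)".
// :nth-child() is emitted only when the position is ambiguous (index > 1),
// matching the style of the enriched-report example above.
function cssPath(steps: PathStep[]): string {
  return steps
    .map(({ tag, index }) => (index > 1 ? `${tag}:nth-child(${index})` : tag))
    .join(" > ");
}
```

In a browser the steps would come from `element.parentElement` traversal, with `getComputedStyle()` supplying the style layer alongside; none of it needs a network call, which is why capture completes in milliseconds.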

Store

The comment is saved to the database with element_metadata populated. The response returns immediately to the user — they see their comment appear without waiting for AI processing.
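The stored row might look something like the following. Only `element_metadata` is a field named by the pipeline description; every other field and the nested shape are hypothetical, shown to make the "respond first, enrich later" split concrete:

```typescript
// Hypothetical stored-comment shape; only `element_metadata` is named
// in the text above, the rest is illustrative.
interface StoredComment {
  id: string;
  body: string;
  element_metadata: {
    tag: string;
    selector: string;
    viewport: { width: number; height: number; scrollY: number };
    styles: Record<string, string>;
  };
  ai_enrichment: null | { category: string; urgency: string };
}

const comment: StoredComment = {
  id: "c_123",
  body: "this looks wrong",
  element_metadata: {
    tag: "button",
    selector: "main > form > button:nth-child(3)",
    viewport: { width: 390, height: 844, scrollY: 420 },
    styles: { "font-size": "14px", color: "#6B6860" },
  },
  // null at write time: the user sees their comment before AI runs.
  ai_enrichment: null,
};
```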

Enrich

Asynchronous: AI classifies the comment with mode-aware prompts. Review mode generates a category (visual, accessibility, layout, copy, interaction), interpretation, and CSS fix suggestions. Support mode generates an intent (bug_report, feature_request, confusion, complaint, question, praise), urgency level, and structured developer triage.
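Because the two modes use disjoint label sets, a practical enrichment step needs to validate the model's output against the right set for the mode. A small sketch using the labels listed above (the function name is illustrative):

```typescript
type Mode = "review" | "support";

// Label sets per mode, taken from the classification description above.
const LABELS: Record<Mode, readonly string[]> = {
  review: ["visual", "accessibility", "layout", "copy", "interaction"],
  support: [
    "bug_report",
    "feature_request",
    "confusion",
    "complaint",
    "question",
    "praise",
  ],
};

// Guard against the model emitting a label from the wrong mode's set.
function isValidLabel(mode: Mode, label: string): boolean {
  return LABELS[mode].includes(label);
}
```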

Sync

The enriched report syncs to connected integrations. Linear issues include the full Developer Handoff Pack: original comment, element metadata, annotated screenshot, and AI-generated triage. Slack notifications include a summary with the key context layers.
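Assembling the issue body is then just string formatting over the enriched record. The actual Developer Handoff Pack layout is not specified here; this sketch only illustrates the full context stack travelling with the issue:

```typescript
// Illustrative triage shape matching the fields described in the text.
interface Triage {
  summary: string;
  likelyCauses: string[];
  whereToLook: string[];
}

// Render a Markdown issue body from the original comment, the captured
// selector, and the AI triage. Field layout is a sketch, not the
// product's actual format.
function formatHandoff(
  comment: string,
  selector: string,
  triage: Triage
): string {
  return [
    `**Comment:** ${comment}`,
    `**Element:** \`${selector}\``,
    `**Triage:** ${triage.summary}`,
    `**Likely causes:** ${triage.likelyCauses.join("; ")}`,
    `**Where to look:** ${triage.whereToLook.join("; ")}`,
  ].join("\n");
}
```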

AI Classification by Mode

The AI adapts its output to the feedback context. Design review and customer support require different classification systems.

AI Classification by Mode
| Aspect | Review Mode | Support Mode |
| --- | --- | --- |
| Classification | Category: visual, accessibility, layout, copy, interaction | Intent: bug_report, feature_request, confusion, complaint, question, praise |
| Output | Interpretation + CSS fix suggestions | Summary + urgency + developer triage (element summary, what's happening, likely causes, where to look) |
| Audience | Designer/developer reviewing staging | Support team triaging production reports |

In review mode, a comment like "this text is hard to read" gets classified as accessibility, with the interpretation noting the low contrast ratio (captured from computed styles) and suggesting specific color values that would pass WCAG AA. In support mode, the same comment gets classified as bug_report with urgency based on the element's prominence and the user's context.

Frequently Asked Questions
What is bug report enrichment?
Bug report enrichment is the automatic capture and AI classification of technical metadata at the moment feedback is submitted. Instead of a plain text comment, the system captures element identity, computed styles, accessibility data, viewport context, a screenshot, and AI classification — transforming a one-line comment into a structured engineering report.
What is The Context Stack?
The Context Stack is six layers of metadata captured automatically when a user submits feedback: element identity, computed styles, accessibility data, viewport context, screenshot, and AI classification. No manual input is required beyond clicking the element and typing a comment.
Does enrichment work without AI enabled?
Partially. The first five layers (element identity through screenshot) are captured client-side with zero AI. The sixth layer (AI classification and developer triage) requires AI enrichment to be enabled. Even without AI, the structured metadata is far more useful than a plain text report.
How does AI developer triage work?
For bug reports and confusion intents, the AI generates a structured triage: element summary (what it is), what's happening (description using actual metadata values), likely causes (2-3 probable technical issues), and where to look (2-3 investigation starting points in the codebase). This appears in the dashboard and syncs to Linear.
Does enrichment slow down comment submission?
No. Metadata capture is synchronous and takes milliseconds (all via getComputedStyle()). AI enrichment runs asynchronously after the comment is stored — the user sees their comment immediately. If AI enrichment fails, the comment still exists with full metadata.
Summary
Definition: The automatic capture and AI classification of technical metadata at the moment feedback is submitted — element identity, computed styles, accessibility data, viewport context, screenshot, and AI triage — transforming a user's one-line comment into a structured engineering report.
Key Concepts: Why It Matters, The Context Stack, Manual Bug Report vs. Enriched Report, How Enrichment Works, AI Classification by Mode
Framework: The Context Stack