PublicRisk.ai

Query Explorer

AI-powered semantic search with DSPy optimization and domain-specific adapters

Overview

Query Explorer is PublicRisk.ai's AI-powered semantic search tool that provides instant answers from your document library with source citations and context.

DSPy Optimized: Achieving ~92% accuracy with 1000+ expert-labeled training examples


Features

Natural Language Queries

Ask questions in plain English - no special syntax required.

Example Queries:

✓ "What flood mitigation measures are required for Zone AE properties?"
✓ "Compare wind damage requirements between ISO and ASCE 7-22"
✓ "What's the maximum deductible for earthquake insurance in California?"
✓ "Show me all claims related to Hurricane Harvey in Houston"

DSPy ReAct Pipeline

Query Explorer uses ReAct (Reasoning + Acting), a prompting pattern that interleaves reasoning steps with tool calls, and tunes it with DSPy's teacher-student optimization framework:

Benefits:

  • Higher Accuracy: 92% vs 75% baseline
  • Better Citations: More relevant source documents
  • Faster Responses: 2-3s vs 5-8s
  • Multi-turn: Remembers conversation context
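
For readers curious how such a pipeline is wired together, here is a minimal, hypothetical sketch using the DSPy library. The model name, tool, and signature are illustrative assumptions, not the production configuration.

# Hypothetical DSPy ReAct sketch - model, tool, and signature are assumptions
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any DSPy-supported LM

def search_documents(query: str) -> str:
    """Placeholder retrieval tool; the real system queries the RAG store."""
    return "...top-k document chunks..."

# ReAct interleaves reasoning steps with tool calls until it can answer
qa = dspy.ReAct("question -> answer", tools=[search_documents], max_iters=5)
result = qa(question="What flood mitigation measures are required for Zone AE properties?")
print(result.answer)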

Domain-Specific Adapters

Select from 35 specialized LoRA adapters trained for public sector, insurance, enterprise, and environmental domains. Each adapter is fine-tuned on 1,500-2,000 domain-specific examples.
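
For illustration only, an adapter of this kind might be configured with Hugging Face PEFT roughly as follows; the base model, rank, and target modules are assumptions, not PublicRisk.ai's actual training setup.

# Hypothetical LoRA adapter configuration (all values are illustrative assumptions)
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")  # assumed base model

lora_cfg = LoraConfig(
    r=16,                                 # low-rank dimension
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

# Only the small adapter weights are trained on the 1,500-2,000 domain examples
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()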

Sample Adapters by Category:

Public Sector:

  • public_education - K-12 schools and districts
  • municipal_codes - Local ordinances and regulations
  • law_enforcement - Police, sheriff, corrections
  • emergency_management - Disaster response and preparedness
  • utilities - Water, power, gas distribution

Insurance & Risk:

  • insurance - General insurance operations
  • liability - General and professional liability
  • workers-comp - Workplace injuries and claims
  • school-risk - K-12 school safety

Enterprise:

  • cybersecurity - IT security, data protection
  • healthcare - Medical facilities, HIPAA compliance
  • financial - Credit risk, fraud prevention
  • legal - Contracts, litigation
  • hr-employment - Employment law, workplace safety

Environmental:

  • environmental - EPA compliance, hazards
  • climate - Climate change impacts
  • natural_disasters - Earthquake, flood, hurricane
  • nepa - National Environmental Policy Act

All 35 domains: academic_research, CA_education_code, CA_government_code, climate, cybersecurity, education, emergency_management, environmental, financial, geopolitical, healthcare, hr-employment, infrastructure, insurance, insurance_exposures, law_enforcement, legal, liability, municipal, municipal_codes, natural_disasters, nepa, operational, procurement, property, public_education, public_financing, regulatory, reputational, risk-analysis, school-risk, supply_chain, technology, utilities, workers-comp


How to Use

Step 1: Access Query Explorer

Navigate to Query Explorer from the sidebar or press Ctrl+K (Windows) / Cmd+K (Mac).

Step 2: Select Domain Adapter (Optional)

Choose an industry-specific adapter for better results:

// Dropdown selection
Domain: Healthcare ▼

If unsure, leave it set to "General (DSPy Optimized)", which works for all queries.

Step 3: Enter Your Question

Type naturally - no special formatting needed:

Good Examples:

  • ✅ "What are the flood insurance requirements for commercial properties?"
  • ✅ "Compare NFIP and private flood coverage limits"
  • ✅ "Show me claims data for wildfires in California 2023"

Avoid:

  • ❌ "flood insurance" (too vague)
  • ❌ "NFIP Section 42 USC 4001" (too technical - AI will find this for you)

Step 4: Review Results

Results include:

  1. AI-Generated Answer - Direct answer to your question
  2. Source Citations - Documents used (clickable)
  3. Confidence Score - How certain the AI is (0-100%)
  4. Related Questions - Suggested follow-ups

Example Output:

Question: "What flood mitigation measures are required for Zone AE properties?"

Answer:
For properties in FEMA Zone AE (Special Flood Hazard Area), the following 
mitigation measures are required:

1. **Elevation Requirements** (44 CFR §60.3(c)(2))
   - Lowest floor must be at or above Base Flood Elevation (BFE)
   - Includes basements, crawl spaces, and attached garages
   
2. **Foundation Requirements**
   - Must be designed to withstand flood forces
   - Anchored to prevent flotation, collapse, or lateral movement
   
3. **Flood Vents** (if using enclosed area below BFE)
   - Minimum 1 sq. inch per 1 sq. foot of enclosed area
   - Located within 1 foot above adjacent grade

Sources:
  [1] FEMA Technical Bulletin 1-93 (page 12)
  [2] 44 CFR Part 60 - Floodplain Management
  [3] ASCE 24-14 Flood Resistant Design

Confidence: 94%

Related Questions:
  • "What's the cost of elevating a house to meet BFE?"
  • "Are there exceptions to Zone AE elevation requirements?"
  • "How do I calculate Base Flood Elevation?"

Step 5: Follow-Up Questions

Query Explorer remembers conversation context:

Multi-Turn Example:

You: "What flood mitigation measures are required for Zone AE?"
AI: [Answer about elevation, foundation, vents...]

You: "What about the cost?"
AI: [Understands you mean flood mitigation costs for Zone AE]
    "Elevation costs for Zone AE properties typically range from..."

Advanced Features

Search Filters

Refine results by document type, date, or category:

// Filter panel
Document Type: [All ▼]
  □ Policies
  □ Regulations
  □ Claims
  □ Reports

Date Range: [Last Year ▼]
  ○ Last Month
  ○ Last Year
  ● All Time
  ○ Custom Range

Category: [Any ▼]
  □ Flood
  □ Earthquake
  □ Hurricane
  □ Wildfire

Bookmarks

Save important queries for later:

// Click bookmark icon on any result
[⭐ Bookmark]

// Access from sidebar
My Bookmarks (12)
  - Flood Zone AE Requirements
  - NFIP vs Private Flood Comparison
  - CA Wildfire Mitigation Standards
  ...

Export Results

Export answers in multiple formats:

  • PDF - Formatted report with citations
  • DOCX - Editable Word document
  • Markdown - Plain text with links
  • JSON - Raw data for analysis

Example PDF Export:

[Header: PublicRisk.ai Query Export]

Query: "What flood mitigation measures are required for Zone AE properties?"
Date: December 4, 2025
Adapter: General (DSPy Optimized)

Answer:
[Full AI response...]

Sources:
1. FEMA Technical Bulletin 1-93
   URL: https://www.fema.gov/...
   Excerpt: "Lowest floor must be..."
   
2. 44 CFR Part 60
   ...

Search History

View past queries and re-run them:

// History panel
Recent Queries (50)
  [Dec 4, 2025 10:15 AM] "Zone AE flood requirements"
  [Dec 4, 2025 09:42 AM] "NFIP coverage limits"
  [Dec 3, 2025 04:30 PM] "Wildfire VHFHSZ definition"
  ...

[Clear History] [Export History]

Cyber Detection & FAIR Integration (NEW)

Automatic Cyber Risk Detection

Query Explorer now automatically detects cyber security queries and suggests quantitative analysis when appropriate.

8-Tier Detection Algorithm:

  1. CVE Patterns (Confidence: 95%)

    • Format: CVE-YYYY-NNNN
    • Example: "What's the risk of CVE-2024-3094?"
  2. Ransomware Keywords (Confidence: 90%)

    • LockBit, BlackCat, Royal, Conti, REvil
    • Example: "How to protect against LockBit 3.0?"
  3. Phishing & Social Engineering (Confidence: 85%)

    • BEC, spear phishing, credential harvesting
    • Example: "What's the cost of a phishing breach?"
  4. Infrastructure Terms (Confidence: 80%)

    • SCADA, ICS, OT, critical infrastructure
    • Example: "Water utility SCADA vulnerabilities"
  5. Vendor Names (Confidence: 75%)

    • Microsoft, Cisco, VMware, SolarWinds
    • Example: "Microsoft Exchange zero-day risk"
  6. Threat Actor Groups (Confidence: 90%)

    • APT28, Lazarus, FIN7, DarkSide
    • Example: "FIN7 targeting financial sector"
  7. Sector-Specific (Confidence: 70-85%)

    • K-12, healthcare, finance (with cyber context)
    • Example: "K-12 ransomware preparedness"
  8. Financial Keywords (Confidence: 70%)

    • Cost, budget, loss, impact, insurance
    • Example: "Budget for cyber security incidents"

Powered by cyberQueryService: Real-time CISA KEV database integration for vulnerability intelligence
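
A simplified sketch of how the tiers above might be implemented is shown below; the patterns, keyword lists, and confidence values are pared down for illustration and are not the production cyberQueryService code.

# Simplified tiered cyber-query detection (illustrative patterns and confidences)
import re

TIERS = [
    (re.compile(r"\bCVE-\d{4}-\d{4,}\b", re.I), "cve", 0.95),
    (re.compile(r"\b(lockbit|blackcat|conti|revil|ransomware)\b", re.I), "ransomware", 0.90),
    (re.compile(r"\b(phishing|bec|credential harvesting)\b", re.I), "phishing", 0.85),
    (re.compile(r"\b(scada|ics|critical infrastructure)\b", re.I), "infrastructure", 0.80),
]

def detect_cyber_query(query: str):
    """Return (query_type, confidence) for the first matching tier, else (None, 0.0)."""
    for pattern, query_type, confidence in TIERS:
        if pattern.search(query):
            return query_type, confidence
    return None, 0.0

print(detect_cyber_query("What's the cost of ransomware for K-12 schools?"))  # ('ransomware', 0.9)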


Smart FAIR Analysis Hints

When Query Explorer detects a cyber query with financial keywords, a smart hint appears:

┌─────────────────────────────────────────────────┐
│ 💡 Cyber Risk Detected (92% confidence)        │
│                                                 │
│ For quantitative risk analysis with Monte      │
│ Carlo simulation and ALE calculations,         │
│ try FAIR Analysis.                             │
│                                                 │
│          [📊 Run FAIR Analysis]                │
└─────────────────────────────────────────────────┘

Trigger Conditions:

  • Cyber confidence ≥ 80%
  • Financial keywords present: "cost", "budget", "loss", "impact", "insurance", "ALE"
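
As a rough sketch (assuming a detection function like the one earlier in this section), the hint trigger reduces to a simple check:

# Illustrative smart-hint trigger check; threshold and keywords follow the list above
import re

FINANCIAL_KEYWORDS = {"cost", "budget", "loss", "impact", "insurance", "ale"}

def should_suggest_fair(query: str, cyber_confidence: float) -> bool:
    tokens = set(re.findall(r"[a-z]+", query.lower()))
    return cyber_confidence >= 0.80 and bool(tokens & FINANCIAL_KEYWORDS)

print(should_suggest_fair("What's the cost of ransomware for K-12 schools?", 0.92))  # True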

What happens when you click:

  1. FAIR Analysis page opens in new tab
  2. Your query is pre-filled as the risk scenario
  3. Cyber context passed (query type, confidence score, sector)
  4. Optional: Auto-run Monte Carlo simulation

Cross-Tool Workflow Example

Scenario: User wants ransomware financial impact

Step 1: Ask in Query Explorer

Query: "What's the cost of ransomware for K-12 schools?"

Step 2: Cyber Detection

✅ Cyber query detected: ransomware, confidence: 0.92
💡 Smart hint appears with "Run FAIR Analysis" button

Step 3: Click to FAIR

→ Opens FAIR Analysis with pre-filled scenario
→ Shows cyber enhancement: "CISA KEV enrichment active"

Step 4: Quantitative Result

Monte Carlo Simulation (10,000 runs):
- Average Annual Loss: $1,040,000
- 90th Percentile: $2,100,000
- Budget recommendation: $2.1M for insurance

Step 5: Return with Context

→ Click "Explore in Query Explorer"
→ Ask: "What controls reduce this risk by 50%?"
→ AI provides mitigation advice with cost/benefit

Total time: 3-5 minutes for complete qualitative + quantitative analysis
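
To make the quantitative step concrete, here is a minimal Monte Carlo sketch of how an annualized loss estimate of this kind can be produced. The distributions and parameters are assumptions for intuition only; they are not the FAIR Analysis engine and will not reproduce the exact figures above.

# Toy Monte Carlo ALE sketch (assumed distributions, not the FAIR engine)
import numpy as np

rng = np.random.default_rng(42)
runs = 10_000

events_per_year = rng.poisson(lam=0.8, size=runs)                # assumed event frequency
loss_per_event = rng.lognormal(mean=13.5, sigma=0.8, size=runs)  # assumed loss magnitude

# Simplification: scale one loss draw by the event count per simulated year
annual_loss = events_per_year * loss_per_event

print(f"Average Annual Loss: ${annual_loss.mean():,.0f}")
print(f"90th Percentile:     ${np.percentile(annual_loss, 90):,.0f}")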


FAIR Context Badges

When you navigate from FAIR Analysis back to Query Explorer, a context badge appears:

┌─────────────────────────────────────────┐
│ 📊 FAIR Analysis Context                │
│ • Scenario: Ransomware attack...        │
│ • Average Loss: $1,040,000              │
│ • 90th Percentile: $2,100,000           │
│ • Risk Range: $250K - $3.6M             │
└─────────────────────────────────────────┘

This reminds the AI of your quantitative analysis, so mitigation advice is financially calibrated.

Example:

Query: "What controls should we prioritize?"

AI Response (with FAIR context):
"Based on your ALE of $1.04M, prioritize:

1. Immutable Backups ($50K investment)
   → Reduces ALE by $340K (33% reduction)
   → ROI: 580% in year 1
   
2. EDR with Ransomware Rollback ($80K/year)
   → Reduces ALE by $425K (41% reduction)
   → ROI: 431% ongoing
   
Budget-conscious: Start with #1 for fastest payback"
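
The ROI figures in a response like the one above follow directly from the quoted numbers, assuming ROI is computed as net ALE reduction divided by control cost:

# How the quoted ROI is derived: (ALE reduction - control cost) / control cost
investment = 50_000      # immutable backups
ale_reduction = 340_000  # reduction in Average Annual Loss
roi = (ale_reduction - investment) / investment
print(f"ROI: {roi:.0%}")  # 580%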

Dual-Framing Analysis

Production Feature

UI COMPLETE - Framing mode selector available now. Backend integration in progress for loss/gain-framed response generation.

Overview

Query Explorer now supports Dual-Framing based on Prospect Theory (Tversky & Kahneman, 1981) - the same query can be answered with loss-focused, gain-focused, or neutral framing to reveal how presentation affects risk perception.

Why It Matters: The famous "Asian Disease Problem" showed that 72% chose certainty when told "200 out of 600 will be saved" (gain-framed), but 78% chose risk when told "400 out of 600 will die" (loss-framed) - same numbers, opposite choices.

Framing Theory

Tversky & Kahneman (1981) demonstrated that how you frame information dramatically changes decisions:

The Asian Disease Problem (Original Experiment):

A disease outbreak will kill 600 people. Choose between:

GAIN-FRAMED VERSION:

  • Program A: 200 people will be saved (certain)
  • Program B: 1/3 probability that 600 will be saved, 2/3 probability that nobody will be saved

Result: 72% chose Program A (certain gains → risk-averse)

LOSS-FRAMED VERSION:

  • Program A: 400 people will die (certain)
  • Program B: 1/3 probability that nobody will die, 2/3 probability that 600 will die

Result: 78% chose Program B (certain losses → risk-seeking)

Key Insight: Mathematically identical choices produce opposite preferences. This is not irrationality—it's a systematic cognitive pattern called loss aversion (losses hurt ~2.25× more than equivalent gains).

Framing Modes in Query Explorer

1. Neutral Framing (Default)

Balanced, objective presentation without emotional emphasis.

Example:

Query: "What are the risks of ransomware attacks for school districts?"

Neutral Response:
"Ransomware attacks affect approximately 40% of school districts annually, 
with an average recovery cost of $1.2M per incident. Districts with incident 
response plans recover 60% faster than those without. [CISA 2024]"

When to Use:

  • Initial analysis without stakeholder bias
  • Technical documentation
  • Regulatory compliance assessment

2. Loss-Focused Framing

Emphasizes risks, threats, failures, and negative outcomes.

Example:

Query: "What are the risks of ransomware attacks for school districts?"

Loss-Framed Response:
"⚠️ RISK-FOCUSED VIEW:
60% of school districts remain vulnerable to ransomware attacks, with 
potential losses averaging $1.2M per incident. Without proactive incident 
response plans, districts face recovery delays exceeding 3 months, resulting 
in extended operational disruptions and data exposure risks. [CISA 2024]"

When to Use:

  • Justifying security investments (emphasize what you'll avoid losing)
  • Crisis communication (urgency drives action)
  • Regulatory compliance arguments (highlight penalties for non-compliance)

3. Gain-Focused Framing

Emphasizes benefits, opportunities, successes, and positive outcomes.

Example:

Query: "What are the risks of ransomware attacks for school districts?"

Gain-Framed Response:
"✅ OPPORTUNITY-FOCUSED VIEW:
Districts with incident response plans successfully prevent 40% of ransomware 
attacks and recover 60% faster when attacks occur. Proactive cybersecurity 
investments save an average of $1.2M per avoided incident while protecting 
student data and maintaining operational continuity. [CISA 2024]"

When to Use:

  • Securing budget approval (emphasize what you'll gain/save)
  • Positive stakeholder engagement (build confidence)
  • Change management (focus on benefits of new approach)

UI Controls

Framing Mode Selector:

  • 3-button toggle: ⚠️ Loss Frame | ⚖️ Neutral | ✅ Gain Frame
  • Located above query input field
  • Tooltips explain each mode's Prospect Theory foundation

Response Indicators:

  • Active framing mode displayed in Alert banner
  • Color-coded: Warning (orange) for loss, Info (blue) for neutral, Success (green) for gain
  • Explanation of loss aversion coefficient (λ=2.25) when non-neutral

Framing Strategy Guide

Policy Communication Matrix:

| Goal | Recommended Framing | Example Use Case |
| --- | --- | --- |
| Justify Spending | Gain | "This investment will save $1.2M/year in avoided incidents" |
| Create Urgency | Loss | "Without action, we risk $1.2M losses per incident" |
| Technical Analysis | Neutral | "40% incident rate, $1.2M average cost, 60% improvement with plan" |
| Overcome Resistance | Gain | "Early adopters see 40% reduction in successful attacks" |
| Regulatory Compliance | Loss | "Non-compliance penalties reach $500K per violation" |
| Change Management | Gain → Neutral | Start positive, then provide balanced view |

Pro Tip: For politically sensitive decisions, generate all three framings and choose based on your audience:

  • Board of Directors → Gain (focus on ROI, benefits)
  • Auditors/Regulators → Loss (emphasize risks avoided)
  • Technical Staff → Neutral (balanced analysis)

Academic Foundation

Tversky, A., & Kahneman, D. (1981).
"The Framing of Decisions and the Psychology of Choice"
Science, 211(4481), 453-458.
https://doi.org/10.1126/science.7455683

Key Findings:

  • Framing Effect: Same choice, different presentations → opposite preferences
  • Loss Aversion: λ = 2.25 (losses hurt 2.25× more than equivalent gains)
  • Risk Attitudes: Risk-averse for gains, risk-seeking for losses
  • Certainty Effect: Prefer certain outcomes over probabilistically superior alternatives
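
For intuition, the value function behind these findings can be sketched in a few lines; α, β, and λ below are the parameter estimates from Tversky and Kahneman's later (1992) work, used here for illustration only.

# Prospect Theory value function (parameter estimates from Tversky & Kahneman, 1992)
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x: float) -> float:
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

# A $100K loss is felt roughly 2.25x as strongly as a $100K gain
print(value(100_000), value(-100_000))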

Related Nobel Prize:
Daniel Kahneman received the 2002 Nobel Prize in Economics for Prospect Theory, which provides the mathematical foundation for framing effects.

Usage Guide

Select Framing Mode

Click one of the three framing mode buttons:

  • ⚠️ Loss Frame (risk/threat emphasis)
  • ⚖️ Neutral (balanced, default)
  • ✅ Gain Frame (opportunity/benefit emphasis)

Enter Your Query

Type your question as normal. Query content doesn't change—only the response framing.

Review Framed Response

Response will be generated (backend integration pending) with appropriate framing:

  • Loss: Emphasizes risks, vulnerabilities, threats
  • Neutral: Balanced facts and statistics
  • Gain: Emphasizes opportunities, successes, benefits

Compare Framings

For critical decisions:

  1. Generate query in Neutral mode (baseline)
  2. Regenerate in Loss mode (understand worst-case perception)
  3. Regenerate in Gain mode (understand best-case perception)
  4. Use insights to craft communication strategy

Current Status:

  • ✅ UI Controls: Complete (framing mode selector functional)
  • ⏳ Backend Integration: In progress (AI models being trained for dual-framing)
  • 📋 Expected: Q1 2026 (full dual-framing response generation)

Practical Examples

Example 1: Cybersecurity Investment

Neutral: "40% of organizations experience at least one security incident annually, with average remediation costs of $1.2M."

Loss-Framed: "⚠️ 60% of organizations remain vulnerable to security breaches, facing potential losses averaging $1.2M per incident without proactive security investments."

Gain-Framed: "✅ Organizations with robust security programs prevent 40% of incidents, saving an average of $1.2M per avoided breach while protecting critical data."

Example 2: Flood Insurance Policy

Neutral: "NFIP residential policies cover up to 250Kforstructureand250K for structure and 100K for contents in high-risk zones."

Loss-Framed: "⚠️ Without flood insurance, homeowners risk total loss of up to 350Kinstructureandcontentsduringfloodevents.Federaldisasterassistanceaveragesonly350K in structure and contents during flood events. Federal disaster assistance averages only 5K per household."

Gain-Framed: "✅ NFIP policies provide up to $350K in protection, ensuring homeowners can rebuild and replace belongings after flood events while maintaining financial security."


Best Practices

Writing Effective Queries

✅ DO:

Be Specific:

Good: "What's the maximum coverage limit for NFIP residential policies in 2025?"
Bad:  "Tell me about flood insurance"

Ask One Question at a Time:

Good: "What are the elevation requirements for Zone AE?"
      (Then follow up) "What about the cost?"
Bad:  "What are Zone AE requirements and how much does it cost and what about permits?"

Use Follow-Ups:

First:  "What is FEMA Zone AE?"
Second: "What are the insurance requirements?"
Third:  "How does this compare to Zone X?"

❌ DON'T:

Don't Use Jargon (Unless Necessary):

Bad:  "44 CFR §60.3(c)(2) applicability"
Good: "What properties must follow flood elevation rules?"

Don't Ask Multiple Questions:

Bad: "What's Zone AE and what's BFE and what about Zone X and do I need insurance?"

Choosing the Right Adapter

Quick Selection Guide:

| Question Topic | Recommended Adapter | Example |
| --- | --- | --- |
| K-12 schools, districts, student safety | public_education | "School board liability exposure" |
| Local government operations | municipal / municipal_codes | "City council meeting requirements" |
| Police, corrections, public safety | law_enforcement | "Use of force policy audit" |
| Disaster preparedness, response | emergency_management | "Hurricane evacuation timeline" |
| Medical facilities, patient data | healthcare | "HIPAA cloud storage compliance" |
| Banks, credit unions, fraud | financial | "ACH fraud prevention controls" |
| IT security, ransomware | cybersecurity | "Incident response plan template" |
| Environmental hazards, EPA | environmental | "Stormwater permit requirements" |
| Workplace injuries, claims | workers-comp | "Remote worker injury coverage" |
| Property damage, liability | liability / property | "Slip and fall liability exposure" |
| General risk questions | General (DSPy) | Works for all query types |

Tip: When in doubt, use General (DSPy Optimized) - it works for all query types. For specialized questions, domain adapters provide 15-25% accuracy improvement.


Technical Details

RAG (Retrieval Augmented Generation)

Query Explorer uses RAG to provide accurate, cited answers:

Parameters:

  • Embedding Model: nomic-embed-text (768 dim, FREE)
  • Chunk Size: 1000 tokens
  • Overlap: 200 tokens
  • Top-K: 5 most relevant chunks
  • Rerank: Yes (cross-encoder)
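
A rough sketch of how these parameters could map onto a retrieval pipeline is shown below; the function and object names are illustrative, not the production implementation.

# Illustrative RAG retrieval flow matching the parameters above (names are assumptions)
CHUNK_SIZE = 1000    # tokens per chunk
CHUNK_OVERLAP = 200  # tokens shared between adjacent chunks
TOP_K = 5            # chunks returned per query

def retrieve(query: str, embed, vector_store, reranker):
    """Embed the query, fetch candidate chunks, then rerank with a cross-encoder."""
    query_vec = embed(query)                                  # nomic-embed-text, 768-dim
    candidates = vector_store.search(query_vec, k=TOP_K * 4)  # over-fetch for reranking
    scores = reranker.score(query, [c.text for c in candidates])
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [chunk for chunk, _ in ranked[:TOP_K]]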

DSPy Optimization

Training Process:

  1. Data Collection: 1000+ expert Q&A pairs
  2. Labeling: Insurance professionals validate answers
  3. Optimization: DSPy trains on labeled data
  4. Evaluation: Test on held-out set (92% accuracy)
  5. Deployment: Deploy optimized prompts to production
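
As a hypothetical sketch of the optimization step (the module, metric, and data below are illustrative, not the production training script):

# Hypothetical DSPy optimization sketch; metric and examples are illustrative
import dspy
from dspy.teleprompt import BootstrapFewShot

def answer_match(example, prediction, trace=None):
    """Toy metric; the real evaluation compares against expert-validated answers."""
    return example.answer.lower() in prediction.answer.lower()

trainset = [
    dspy.Example(question="What is FEMA Zone AE?",
                 answer="A Special Flood Hazard Area ...").with_inputs("question"),
    # ... ~1000 expert-labeled Q&A pairs ...
]

student = dspy.ChainOfThought("question -> answer")
optimizer = BootstrapFewShot(metric=answer_match)
optimized = optimizer.compile(student, trainset=trainset)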

Metrics:

| Metric | Before DSPy | After DSPy | Improvement |
| --- | --- | --- | --- |
| Accuracy | 75% | 92% | +23% |
| Citation Quality | Good | Excellent | Major |
| Response Time | 5-8s | 2-3s | 40-60% faster |
| Multi-turn Consistency | Fair | Excellent | Significant |

API Integration

Endpoint

https://publicrisk--dspy-optimized-service-fastapi-app.modal.run

Example Request

// POST /query
{
  "query": "What are the flood insurance requirements for commercial properties?",
  "domain_adapter": "general",
  "conversation_id": "conv-uuid-123",
  "max_sources": 5
}

Example Response

{
  "answer": "For commercial properties, flood insurance requirements vary...",
  "sources": [
    {
      "title": "FEMA Technical Bulletin 1-93",
      "url": "https://www.fema.gov/...",
      "excerpt": "Commercial buildings must...",
      "relevance": 0.94
    }
  ],
  "confidence": 0.92,
  "conversation_id": "conv-uuid-123",
  "related_questions": [
    "What's the maximum NFIP coverage for commercial properties?",
    "Are there private flood insurance options?"
  ]
}
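
For example, the endpoint can be called from Python with the requests library, mirroring the request and response above; any authentication headers the service may require are omitted here.

# Example /query call with the requests library (auth headers omitted if required)
import requests

BASE_URL = "https://publicrisk--dspy-optimized-service-fastapi-app.modal.run"

payload = {
    "query": "What are the flood insurance requirements for commercial properties?",
    "domain_adapter": "general",
    "conversation_id": "conv-uuid-123",
    "max_sources": 5,
}

response = requests.post(f"{BASE_URL}/query", json=payload, timeout=60)
response.raise_for_status()
data = response.json()

print(data["answer"])
for source in data["sources"]:
    print(f'- {source["title"]} (relevance {source["relevance"]:.2f})')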

Troubleshooting

Common Issues

Issue: "No relevant documents found"

  • Cause: Query too specific or document library empty
  • Solution:
    1. Broaden your query
    2. Check if documents are uploaded (Admin Portal → Documents)
    3. Try a different domain adapter

Issue: "Service timeout after 30s"

  • Cause: Modal cold start (service sleeping)
  • Solution: Wait 30-60s and retry. Subsequent queries will be fast (less than 3 seconds).
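
    For API clients, a simple retry with backoff covers the cold-start case; the delays below are illustrative.

# Illustrative retry-with-backoff for Modal cold starts
import time
import requests

def query_with_retry(url: str, payload: dict, attempts: int = 3) -> dict:
    for attempt in range(attempts):
        try:
            resp = requests.post(url, json=payload, timeout=60)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == attempts - 1:
                raise
            time.sleep(30 * (attempt + 1))  # wait out the cold start before retrying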

Issue: "Low confidence score (less than 50%)"

  • Cause: Ambiguous query or insufficient document coverage
  • Solution:
    1. Rephrase query to be more specific
    2. Upload relevant documents to RAG store
    3. Contact admin to train domain adapter

Issue: "Citations not clickable"

  • Cause: Document not in RAG store or URL missing
  • Solution: Admin must re-upload document with proper metadata
