SYSTEM GUIDE

How Innovation Microscope Makes Decisions

This product does not make one-shot decisions. It runs a structured, multi-turn decision stress test using AI personas, a deterministic policy engine, and an event-based evidence log. The result is a decision process simulation, not a single opaque answer.

Project Brief

INPUT LAYER

Best for business/product framing. This is the fastest way to give the workshop enough context to produce focused proposals and meaningful objections.

Problem Statement

What decision needs to be made and why it matters now.

Scope

What is in scope, out of scope, and what timeframe the workshop should target.

Background

Business context, prior attempts, constraints already known, and current status.

Dependencies

Teams, systems, vendors, approvals, data inputs, and sequencing dependencies.

Underlying Issues

Root causes, hidden risks, friction points, and assumptions likely driving bad decisions.

Target Group

The user, customer, or internal stakeholder group affected by the outcome.

RFC / Technical Design Doc

ENGINEERING CONTEXT

Best when the decision includes implementation risk. This helps technical roles and governance roles challenge feasibility, architecture quality, and rollout safety.

Architecture options and trade-offs
Integration boundaries and system dependencies
Security, compliance, and data handling constraints
Rollout, rollback, and operational risk plan
Success metrics, acceptance criteria, and observability needs
Tip: You can paste a brief summary instead of the full RFC. What matters most are the decisions, constraints, dependencies, and trade-offs.

Architecture Flow (Decision Pipeline)

VISUAL MAP
STEP 1
User Input
Project Brief / RFC
STEP 2
Facilitator LLM
Question + Speaker Selection
STEP 3
Persona Agents
Ideas / Objections / Assumptions
STEP 4
Policy Engine
PASS / FLAG / BLOCKED
STEP 5
Event Log
Source of Truth
STEP 6
Next Turn Orchestration
Loop / Coverage / Evolution
STEP 7
Synthesis Report
Metrics + Narrative

Decision Mechanism (Hybrid)

LLM + RULES + EVENTS
1

Facilitator Orchestration (LLM-guided)

Chooses the next question and suggested speakers, with strong guardrails for conflict, loop-breaking, and idea evolution.

2

Persona Responses (LLM-guided)

Each selected role proposes ideas, raises objections, and surfaces assumptions from its perspective.
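Because each persona must return ideas, objections, and assumptions, its output is naturally a small structured record. A minimal sketch of what that record and a tolerant parser might look like (the `PersonaResponse` shape and `parse_response` helper are illustrative assumptions, not the product's actual schema):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class PersonaResponse:
    """One persona's contribution for a single turn."""
    role: str
    ideas: List[str] = field(default_factory=list)
    objections: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)

def parse_response(role: str, raw: Dict[str, Any]) -> PersonaResponse:
    # Tolerate missing keys: an LLM may omit a section entirely,
    # so every field defaults to an empty list.
    return PersonaResponse(
        role=role,
        ideas=list(raw.get("ideas", [])),
        objections=list(raw.get("objections", [])),
        assumptions=list(raw.get("assumptions", [])),
    )
```

Keeping persona output in a fixed shape like this is what lets the deterministic stages downstream (policy checks, event logging) process it without further model calls.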

3

Policy Engine (Deterministic)

Checks each proposal against enabled policy rules and marks it PASS, FLAG, or BLOCKED depending on policy mode.
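A minimal sketch of such a deterministic check, including how policy mode (strict / advisory / off) changes the verdict. The `Rule` shape, `evaluate` signature, and mode names are assumptions for illustration, not the product's actual API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List

class Verdict(Enum):
    PASS = "PASS"
    FLAG = "FLAG"
    BLOCKED = "BLOCKED"

@dataclass
class Rule:
    name: str
    violates: Callable[[str], bool]  # predicate over the proposal text

def evaluate(proposal: str, rules: List[Rule], mode: str) -> Verdict:
    # Deterministic: the same proposal, rules, and mode always yield
    # the same verdict -- no model call is involved at this stage.
    hits = [r.name for r in rules if r.violates(proposal)]
    if not hits:
        return Verdict.PASS
    if mode == "strict":
        return Verdict.BLOCKED
    if mode == "advisory":
        return Verdict.FLAG
    return Verdict.PASS  # mode == "off": violations are ignored
```

The key property is that this stage is rule-driven rather than probabilistic, so a proposal that is BLOCKED in strict mode would merely be FLAG-ged in advisory mode and untouched with policy off.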

4

Event Log + Artifacts (Deterministic)

Stores structured events (ideas, objections, blocks, constraints) and becomes the source of truth for later turns and reporting.
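An append-only log of typed events is enough to act as a source of truth for later turns and reporting. A minimal sketch, assuming a simple kind-plus-payload event shape (the class and method names are illustrative, not the product's actual storage layer):

```python
import time
from typing import Any, Dict, List

class EventLog:
    """Append-only log; later turns and the final report read from it."""

    def __init__(self) -> None:
        self._events: List[Dict[str, Any]] = []

    def append(self, kind: str, payload: Dict[str, Any]) -> Dict[str, Any]:
        # Events are only ever added, never edited -- this is what makes
        # the log usable as a source of truth.
        event = {"kind": kind, "payload": payload, "ts": time.time()}
        self._events.append(event)
        return event

    def by_kind(self, kind: str) -> List[Dict[str, Any]]:
        return [e for e in self._events if e["kind"] == kind]

    def all(self) -> List[Dict[str, Any]]:
        return list(self._events)  # copy, so callers cannot mutate history
```

With events for ideas, objections, policy blocks, and constraints all flowing into one ordered log, both the orchestrator and the synthesis step can replay the workshop rather than trusting any single turn's summary.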

5

Synthesis Report (Hybrid)

Narrative is generated by an LLM, but key metrics like consensus and dissent are computed from event data first.
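Because the metrics are computed before any narrative is written, they can be derived purely from the event log. A sketch under assumed definitions (consensus = share of ideas that drew no objection, dissent = objections per idea; the real product's formulas may differ):

```python
from typing import Any, Dict, List

def decision_metrics(events: List[Dict[str, Any]]) -> Dict[str, float]:
    """Compute consensus/dissent from event data alone -- no LLM involved."""
    ideas = [e for e in events if e["kind"] == "idea"]
    objections = [e for e in events if e["kind"] == "objection"]
    if not ideas:
        return {"consensus": 0.0, "dissent": 0.0}
    # An idea is "unchallenged" if no objection targets it.
    targeted = {o["payload"].get("target") for o in objections}
    unchallenged = [i for i in ideas if i["payload"]["id"] not in targeted]
    return {
        "consensus": len(unchallenged) / len(ideas),
        "dissent": len(objections) / len(ideas),
    }
```

The LLM then narrates around these numbers instead of inventing them, which keeps the report's headline figures reproducible from the same event log.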

How a Decision Evolves Across Turns

1) Facilitator asks a targeted question and picks 2-3 speakers
2) Personas generate proposals + objections + assumptions
3) Policy engine checks each proposal (PASS / FLAG / BLOCKED)
4) Event log records what happened (idea, objection, policy block, constraint)
5) Orchestrator detects loops, silent roles, and blocked-idea evolution gaps
6) Next turn is re-framed with stricter constraints and better targeting
7) Final report uses event-derived metrics + evidence-based narrative synthesis
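The loop- and silent-role detection in step 5 can be done deterministically over the event log. A crude sketch, assuming ideas carry a role and a text field (the normalization rule and `window` size are illustrative assumptions, not the product's actual heuristics):

```python
from typing import Any, Dict, List

def silent_roles(events: List[Dict[str, Any]], all_roles: List[str]) -> List[str]:
    """Roles that have not yet contributed an idea or objection."""
    spoken = {e["payload"]["role"] for e in events
              if e["kind"] in ("idea", "objection")}
    return sorted(set(all_roles) - spoken)

def detect_loop(events: List[Dict[str, Any]], window: int = 4) -> bool:
    """Crude loop signal: the same normalized idea text appears more than
    once among the last `window` idea events."""
    texts = [e["payload"]["text"].strip().lower()
             for e in events if e["kind"] == "idea"][-window:]
    return len(texts) != len(set(texts))
```

Signals like these are what let the orchestrator re-frame the next turn: a detected loop tightens constraints, and a silent role becomes a targeted speaker.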

Why It Feels Complex

◆ Persona mix changes the type and intensity of objections.
◆ Policy mode (strict / advisory / off) changes which ideas survive.
◆ Brief quality directly affects ambiguity, loop risk, and proposal specificity.
◆ Dependencies and constraints increase trade-off complexity across turns.
◆ LLM outputs are probabilistic, so exact wording varies even with the same setup.
◆ Prompt guardrails reduce drift, but do not eliminate model behavior variance.

Recommended Input for Best Results

Paste a Project Brief first. Add RFC/technical context when the decision includes architecture, rollout, compliance, or integration risk.