Risk Assessment Matrix

This matrix helps evaluate AI development patterns and frameworks for client projects, balancing innovation with reliability.

Assessment Criteria

Risk Factors:

  • Client Impact: Potential for project delays or quality issues
  • Security: Data privacy and code security implications
  • Maintainability: Long-term support and debugging complexity
  • Transparency: Client understanding and audit trail clarity
  • Skill Dependency: Team expertise requirements

Scoring: Each factor is scored from 1 (lowest risk) to 10 (highest risk), banded as Low (1-3), Medium (4-6), High (7-10).
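
When scoring new patterns, the banding can be made mechanical. The sketch below is a minimal illustration in Python, assuming a simple mean as the roll-up rule; the helper names are hypothetical, and the tables below were banded by judgment rather than computed, which is why intermediate bands such as Low-Medium and Medium-High also appear.

```python
from statistics import mean

def band(score: float) -> str:
    """Map a 1-10 score to the Low/Medium/High bands used in this matrix."""
    if score <= 3:
        return "Low"
    if score <= 6:
        return "Medium"
    return "High"

def overall_risk(factors: dict[str, int]) -> str:
    """One possible roll-up rule: band the mean of the five factor scores."""
    return band(mean(factors.values()))

# Scores for "Markdown Backlogs" from the Task Management table below.
markdown_backlogs = {
    "client_impact": 2,
    "security": 1,
    "maintainability": 2,
    "transparency": 3,
    "skill_dependency": 1,
}
print(overall_risk(markdown_backlogs))  # -> Low
```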

Framework Pattern Assessment

Task Management

| Pattern | Client Impact | Security | Maintainability | Transparency | Skill Dependency | Overall Risk |
| --- | --- | --- | --- | --- | --- | --- |
| Markdown Backlogs | 2 | 1 | 2 | 3 | 1 | Low |
| Structured Text | 4 | 2 | 4 | 5 | 5 | Medium |
| Issue Systems | 2 | 3 | 3 | 2 | 2 | Low |

Recommendation: Use issue systems for client work and markdown backlogs for internal projects.

AI Guidance Patterns

| Pattern | Client Impact | Security | Maintainability | Transparency | Skill Dependency | Overall Risk |
| --- | --- | --- | --- | --- | --- | --- |
| Command Libraries | 3 | 2 | 5 | 4 | 6 | Medium |
| Coding Standards | 2 | 1 | 2 | 3 | 2 | Low |
| Definition of Done | 1 | 1 | 2 | 2 | 2 | Low |
| Validation Hooks | 2 | 2 | 4 | 3 | 5 | Medium |

Recommendation: Start with coding standards and a definition of done. Add validation hooks for quality-critical projects.
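
As a concrete example of a validation hook, the minimal sketch below gates AI-generated changes behind the project's existing checks. The tool choices (pytest, ruff) and the wiring are assumptions; substitute whatever gates your project already runs in CI, and attach the script wherever AI-generated changes land (pre-commit, CI, or an agent's post-edit hook).

```python
import subprocess
import sys

# Checks to run before accepting a change set. pytest and ruff are
# assumptions -- replace with your project's actual quality gates.
CHECKS = [
    ["pytest", "--quiet"],
    ["ruff", "check", "."],
]

def validate() -> int:
    """Run each check in order; fail fast on the first non-zero exit."""
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Validation failed: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(validate())
```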

Multi-Agent Coordination

| Pattern | Client Impact | Security | Maintainability | Transparency | Skill Dependency | Overall Risk |
| --- | --- | --- | --- | --- | --- | --- |
| Role Simulation | 6 | 4 | 7 | 8 | 7 | High |
| Swarm Parallelism | 8 | 5 | 9 | 9 | 9 | High |
| Repo Artifacts | 4 | 3 | 5 | 4 | 4 | Medium |

Recommendation: Avoid multi-agent patterns for client work until the ecosystem matures.

Session Management

| Pattern | Client Impact | Security | Maintainability | Transparency | Skill Dependency | Overall Risk |
| --- | --- | --- | --- | --- | --- | --- |
| Terminal Orchestration | 3 | 2 | 4 | 4 | 5 | Medium |
| Parallel Worktrees | 2 | 1 | 3 | 3 | 6 | Medium |
| Parallel Containers | 4 | 3 | 6 | 5 | 7 | Medium-High |

Recommendation: Use parallel worktrees for development and containers for specific isolation needs.
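
A minimal sketch of the parallel-worktree pattern, using Git's built-in `git worktree` command to give each concurrent AI session an isolated checkout on its own branch; the path and branch naming below are illustrative.

```python
import subprocess

def add_session_worktree(repo_root: str, session: str) -> str:
    """Create a sibling checkout on a fresh branch for one AI session."""
    path = f"{repo_root}-{session}"
    subprocess.run(
        ["git", "-C", repo_root, "worktree", "add", path, "-b", f"ai/{session}"],
        check=True,
    )
    return path

# Three isolated checkouts for three concurrent sessions (illustrative names).
for name in ("auth", "billing", "search"):
    add_session_worktree("./myproject", name)
```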

Tool Integration

| Pattern | Client Impact | Security | Maintainability | Transparency | Skill Dependency | Overall Risk |
| --- | --- | --- | --- | --- | --- | --- |
| MCP Integrations | 5 | 4 | 6 | 5 | 6 | Medium |
| Custom Tools | 3 | 2 | 5 | 4 | 4 | Medium |
| Database Access | 6 | 8 | 4 | 6 | 5 | High |
| Testing Hooks | 2 | 1 | 3 | 2 | 4 | Low-Medium |

Recommendation: Testing hooks are essential. Build custom tools only for specific, well-scoped needs. Evaluate MCP integrations carefully before adoption.

Development Roles

| Pattern | Client Impact | Security | Maintainability | Transparency | Skill Dependency | Overall Risk |
| --- | --- | --- | --- | --- | --- | --- |
| AI as PM | 8 | 3 | 6 | 7 | 5 | High |
| AI as Architect | 7 | 4 | 7 | 6 | 6 | High |
| AI as Implementer | 3 | 2 | 4 | 3 | 3 | Medium |
| AI as QA | 5 | 3 | 4 | 4 | 4 | Medium |

Recommendation: Use AI for implementation with human oversight; keep the PM and architect roles human.

Code Delivery

| Pattern | Client Impact | Security | Maintainability | Transparency | Skill Dependency | Overall Risk |
| --- | --- | --- | --- | --- | --- | --- |
| Small Diffs | 2 | 2 | 2 | 2 | 3 | Low |
| Feature Flags | 3 | 2 | 4 | 4 | 5 | Medium |
| Full Scaffolds | 7 | 4 | 6 | 5 | 4 | Medium-High |

Recommendation: Use small diffs for production, feature flags for experimentation, and full scaffolds for prototyping only.
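
A minimal sketch of the feature-flag pattern applied to AI-generated code, so the new path ships dark and is enabled per environment. The flag store (an environment variable) and all function names are assumptions; most teams would use a flag service or config file instead.

```python
import os

def flag_enabled(name: str) -> bool:
    """Flag store is an assumption: env vars here, a flag service in practice."""
    return os.environ.get(f"FLAG_{name.upper()}", "0") == "1"

def legacy_ranker(query: str) -> list[str]:
    return [query]  # stand-in for the proven implementation

def ai_ranker(query: str) -> list[str]:
    return [query.lower()]  # stand-in for the new AI-generated path

def search(query: str) -> list[str]:
    # The AI-generated path stays off by default and is enabled per environment.
    return ai_ranker(query) if flag_enabled("ai_ranker") else legacy_ranker(query)

print(search("Example"))  # legacy path unless FLAG_AI_RANKER=1 is set
```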

Context Preservation

| Pattern | Client Impact | Security | Maintainability | Transparency | Skill Dependency | Overall Risk |
| --- | --- | --- | --- | --- | --- | --- |
| Documentation | 1 | 1 | 2 | 1 | 2 | Low |
| Persistent Memory | 3 | 3 | 4 | 4 | 5 | Medium |
| Session Continuity | 4 | 3 | 5 | 5 | 6 | Medium |

Recommendation: Documentation is essential. Persistent memory and session continuity provide efficiency gains on top of it.

Client Project Risk Profiles

Conservative (Financial, Healthcare, Government)

  • Use: Issue systems, coding standards, small diffs, documentation
  • Avoid: Multi-agent coordination, AI PM/architect roles, full scaffolds, direct database access
  • Evaluate: Testing hooks, custom tools, feature flags

Moderate (Standard Business Applications)

  • Use: All low-risk patterns, selective medium-risk adoption
  • Avoid: High-risk patterns without explicit client approval
  • Experiment: MCP integrations, validation hooks, parallel workflows

Aggressive (Startups, Internal Tools, Prototyping)

  • Use: All patterns based on technical merit
  • Experiment: Multi-agent coordination, full scaffolds, AI roles
  • Monitor: Performance, quality, and maintainability closely

Decision Framework

  1. Assess Client Risk Tolerance: Conservative, Moderate, or Aggressive
  2. Evaluate Pattern Risk: Use the matrix scores (see the sketch after this list)
  3. Consider Team Capability: Factor in skill dependency scores
  4. Start Conservative: Begin with low-risk patterns
  5. Iterate Carefully: Add complexity only with proven value
  6. Document Decisions: Maintain rationale for pattern choices
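
A minimal sketch of steps 1-2, assuming the tolerance-to-band ceilings implied by the risk profiles above; the names and thresholds are illustrative, not a fixed policy.

```python
# Highest overall band each tolerance accepts, per the risk profiles above
# (assumption: "evaluate" items for conservative clients still need review).
TOLERANCE_CEILING = {
    "conservative": "Low",
    "moderate": "Medium",
    "aggressive": "High",
}

# Bands in ascending risk order, including the intermediate bands the matrix uses.
BAND_ORDER = ["Low", "Low-Medium", "Medium", "Medium-High", "High"]

def pattern_allowed(tolerance: str, overall_band: str) -> bool:
    """Step 2: screen a pattern's overall band against the client's ceiling."""
    ceiling = TOLERANCE_CEILING[tolerance]
    return BAND_ORDER.index(overall_band) <= BAND_ORDER.index(ceiling)

print(pattern_allowed("moderate", "Medium"))  # Feature Flags -> True
print(pattern_allowed("moderate", "High"))    # Swarm Parallelism -> False
```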

Red Flags

Immediate Stop Conditions:

  • AI making architectural decisions without human review
  • Multi-agent systems in production without extensive testing
  • Direct database access without security review
  • Client deliverables generated without human validation
  • Missing audit trails for AI-generated code

Warning Signs:

  • Increasing debugging time for AI-generated code
  • Client confusion about AI involvement in the project
  • Team dependency on complex frameworks
  • Reduced code quality or test coverage
  • Difficulty explaining AI decisions to stakeholders