
Agent Alignment Layer

The Agent Alignment Layer lets AI coding assistants (Claude Code, GitHub Copilot, Pylon, etc.) check whether a proposed action conflicts with your team's documented decisions BEFORE taking it autonomously.

How It Works

  1. Agent proposes an action (create Jira ticket, open PR, post Slack message)
  2. Agent calls align.check_proposed_action or POST /alignment/check
  3. Align searches your decision graph using semantic similarity (fast, ~300ms)
  4. If a potential conflict is found, an LLM confirms and explains it
  5. Agent receives: aligned / conflicting / no-context + specific conflict details

MCP Tool: align.check_proposed_action

For AI assistants connected to the Align MCP server:

Input:

{
  "action_type": "jira_ticket",
  "content": "Migrate user service to MongoDB for flexible schema",
  "context": "Team: Backend, Project: AUTH"
}

Response (conflicting):

{
  "status": "conflicting",
  "confidence": 0.91,
  "relevant_decisions": [
    {
      "id": "d-42",
      "title": "Use PostgreSQL for all services",
      "summary": "Team decided Postgres is our only data store",
      "status": "active",
      "similarity": 0.87
    }
  ],
  "conflicts": [
    {
      "decision_id": "d-42",
      "title": "Use PostgreSQL for all services",
      "reason": "Proposes MongoDB which contradicts the Postgres-only decision",
      "severity": "critical",
      "suggested_resolution": "Review decision d-42 before proceeding"
    }
  ],
  "message": "This action conflicts with 1 existing decision(s). Review the conflicts before proceeding."
}

REST API: POST /alignment/check

For CI pipelines, custom bots, and direct integrations:

curl -X POST https://api.align.tech/alignment/check \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "x-tenant-id: <YOUR_TENANT_ID>" \
  -H "Content-Type: application/json" \
  -d '{
    "action_type": "pull_request",
    "content": "Add MongoDB driver to user service",
    "context": "Relates to JIRA-1234"
  }'
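The same call can be made from application code. Below is a minimal sketch for Node 18+ (which ships a global `fetch`); the endpoint and headers come from the curl example above, while the helper name `buildCheckRequest` and its types are illustrative, not part of the Align API.

```typescript
// Illustrative request builder for POST /alignment/check.
// Header names and endpoint match the curl example; everything else
// (type names, helper name) is an assumption for this sketch.

interface CheckBody {
  action_type: string;
  content: string;
  context?: string;
}

interface HttpRequestInit {
  method: string;
  headers: Record<string, string>;
  body: string;
}

function buildCheckRequest(body: CheckBody, apiKey: string, tenantId: string): HttpRequestInit {
  return {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'x-tenant-id': tenantId,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  };
}

// Usage (Node 18+):
// const res = await fetch(
//   'https://api.align.tech/alignment/check',
//   buildCheckRequest({ action_type: 'pull_request', content: 'Add MongoDB driver to user service' }, apiKey, tenantId),
// );
// const result = await res.json();
```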

Response Shape

| Field | Type | Description |
| --- | --- | --- |
| status | `aligned` / `conflicting` / `no-context` | Alignment result |
| confidence | number (0-1) | Confidence in the result |
| relevant_decisions | array | Related decisions found |
| conflicts | array (optional) | Present only when status is `conflicting` |
| message | string | Human-readable summary for agents |
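For typed agent code, the response shape above can be written out as TypeScript types. This is a sketch: field names mirror the table and the sample response, but the interface names themselves are illustrative and not part of the Align API.

```typescript
// Illustrative TypeScript types for the /alignment/check response.
// Field names come from the documented response; type names are assumptions.

type AlignmentStatus = 'aligned' | 'conflicting' | 'no-context';

interface RelevantDecision {
  id: string;
  title: string;
  summary: string;
  status: string;
  similarity: number; // 0-1 similarity score
}

interface Conflict {
  decision_id: string;
  title: string;
  reason: string;
  severity: string;
  suggested_resolution: string;
}

interface AlignmentCheckResponse {
  status: AlignmentStatus;
  confidence: number; // 0-1
  relevant_decisions: RelevantDecision[];
  conflicts?: Conflict[]; // present only when status is 'conflicting'
  message: string;
}
```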

Status Meanings

| Status | Meaning | Agent should... |
| --- | --- | --- |
| aligned | Action aligns with or is unrelated to existing decisions | Proceed normally |
| conflicting | Action contradicts one or more existing decisions | Surface conflicts to the user, request confirmation |
| no-context | No relevant decisions found | Proceed - consider documenting the decision afterward |
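One way to wire these rules into an agent is a single dispatch on the status. A sketch - the returned instruction strings are illustrative, chosen only to mirror the table above:

```typescript
// Illustrative mapping from check status to the agent behavior
// described in the Status Meanings table. Not part of the Align API.

type AlignmentStatus = 'aligned' | 'conflicting' | 'no-context';

function nextStep(status: AlignmentStatus): string {
  switch (status) {
    case 'aligned':
      return 'proceed';
    case 'conflicting':
      return 'surface-conflicts-and-ask-user';
    case 'no-context':
      return 'proceed-then-consider-documenting';
  }
}
```

Because the switch is exhaustive over the union type, the TypeScript compiler flags any status value left unhandled.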

Action Types

| action_type | Use for |
| --- | --- |
| jira_ticket | Jira issue creation |
| pull_request | GitHub/GitLab PR descriptions |
| slack_message | Slack or Teams messages |
| commit_message | Git commit messages |
| general | Any other action |

Performance

  • Fast path (vector search only): ~300-500ms - returned when no strong match found
  • LLM path (conflict confirmed): ~1.5-2.5s - returns specific conflict details
  • Hard LLM timeout: 1.5s - falls back to aligned with confidence: 0.5 if exceeded
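The timeout fallback above happens server-side, but a caller that wants its own latency budget can apply the same pattern. A sketch of a generic race-against-a-timeout helper - the function name and usage are illustrative, not part of the Align API:

```typescript
// Generic timeout-with-default helper: resolve with `fallback` if `work`
// has not settled within `ms` milliseconds. Mirrors, from the caller's side,
// the server's degrade-to-aligned behavior. Illustrative only.

function withTimeout<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => resolve(fallback), ms);
    work.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// Usage (hypothetical):
// const result = await withTimeout(checkCall, 2000, { status: 'aligned', confidence: 0.5 });
```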

Agent Integration Example

When an agent is about to create a Jira ticket:

const check = await mcp.callTool('align.check_proposed_action', {
  action_type: 'jira_ticket',
  content: `${ticket.title}\n\n${ticket.description}`,
  context: `Project: ${ticket.project}`,
});

if (check.status === 'conflicting') {
  return `This ticket conflicts with existing decisions:\n${check.conflicts
    .map((c) => `- ${c.title}: ${c.reason}`)
    .join('\n')}\n\nProceed anyway?`;
}

Setup

The align.check_proposed_action tool is available on the Align MCP server (port 8089); see the AI Assistants (MCP) guide for instructions on connecting your AI assistant.

For direct REST API access, authenticate with your API key and tenant ID as shown in the REST example above.