# Agent Alignment Layer
The Agent Alignment Layer lets AI coding assistants (Claude Code, GitHub Copilot, Pylon, etc.) check whether a proposed action conflicts with your team's documented decisions *before* taking it autonomously.
## How It Works
1. Agent proposes an action (create a Jira ticket, open a PR, post a Slack message)
2. Agent calls `align.check_proposed_action` or `POST /alignment/check`
3. Align searches your decision graph using semantic similarity (fast, ~300ms)
4. If a potential conflict is found, an LLM confirms and explains it
5. Agent receives `aligned` / `conflicting` / `no-context`, plus specific conflict details
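The result an agent receives can be modeled as a small TypeScript type. This is a sketch based on the response fields documented in this guide; the interface and helper names are illustrative, not part of the API.

```typescript
// Sketch of the alignment check result, mirroring the documented response fields.
type AlignmentStatus = 'aligned' | 'conflicting' | 'no-context';

interface Conflict {
  decision_id: string;
  title: string;
  reason: string;
  severity: string;
  suggested_resolution: string;
}

interface AlignmentCheckResult {
  status: AlignmentStatus;
  confidence: number;           // 0-1
  relevant_decisions: unknown[];
  conflicts?: Conflict[];       // present only when status is 'conflicting'
  message: string;
}

// An agent should only act autonomously when the action does not conflict.
function canProceedAutonomously(result: AlignmentCheckResult): boolean {
  return result.status !== 'conflicting';
}
```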
## MCP Tool: `align.check_proposed_action`

For AI assistants connected to the Align MCP server:
Input:

```json
{
  "action_type": "jira_ticket",
  "content": "Migrate user service to MongoDB for flexible schema",
  "context": "Team: Backend, Project: AUTH"
}
```
Response (conflicting):

```json
{
  "status": "conflicting",
  "confidence": 0.91,
  "relevant_decisions": [
    {
      "id": "d-42",
      "title": "Use PostgreSQL for all services",
      "summary": "Team decided Postgres is our only data store",
      "status": "active",
      "similarity": 0.87
    }
  ],
  "conflicts": [
    {
      "decision_id": "d-42",
      "title": "Use PostgreSQL for all services",
      "reason": "Proposes MongoDB which contradicts the Postgres-only decision",
      "severity": "critical",
      "suggested_resolution": "Review decision d-42 before proceeding"
    }
  ],
  "message": "This action conflicts with 1 existing decision(s). Review the conflicts before proceeding."
}
```
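An agent typically needs to turn the `conflicts` array into something a user can read. A minimal sketch, assuming the field names from the response above (`summarizeConflicts` is a hypothetical helper, not part of the Align API):

```typescript
// Hypothetical helper: turn a conflicting check response into display lines.
function summarizeConflicts(response: {
  status: string;
  conflicts?: { decision_id: string; title: string; reason: string; severity: string }[];
}): string[] {
  if (response.status !== 'conflicting' || !response.conflicts) return [];
  return response.conflicts.map(
    (c) => `[${c.severity}] ${c.title} (${c.decision_id}): ${c.reason}`
  );
}
```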
## REST API: `POST /alignment/check`
For CI pipelines, custom bots, and direct integrations:
```bash
curl -X POST https://api.align.tech/alignment/check \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "x-tenant-id: <YOUR_TENANT_ID>" \
  -H "Content-Type: application/json" \
  -d '{
    "action_type": "pull_request",
    "content": "Add MongoDB driver to user service",
    "context": "Relates to JIRA-1234"
  }'
```
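The same call from Node/TypeScript might look like this. The request builder below is a sketch that mirrors the curl example; `buildCheckRequest` is a hypothetical helper, and the usage comment assumes the global `fetch` available in Node 18+.

```typescript
// Sketch: assemble the POST /alignment/check request, mirroring the curl example.
// `apiKey` and `tenantId` are placeholders for your credentials.
interface CheckRequestBody {
  action_type: string;
  content: string;
  context?: string;
}

function buildCheckRequest(apiKey: string, tenantId: string, body: CheckRequestBody) {
  return {
    url: 'https://api.align.tech/alignment/check',
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'x-tenant-id': tenantId,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    },
  };
}

// Usage (Node 18+ global fetch):
// const { url, init } = buildCheckRequest(apiKey, tenantId, {
//   action_type: 'pull_request',
//   content: 'Add MongoDB driver to user service',
// });
// const result = await fetch(url, init).then((r) => r.json());
```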
## Response Shape
| Field | Type | Description |
|---|---|---|
| `status` | `aligned` \| `conflicting` \| `no-context` | Alignment result |
| `confidence` | number (0-1) | Confidence in the result |
| `relevant_decisions` | array | Related decisions found |
| `conflicts` | array (optional) | Present only when `status` is `conflicting` |
| `message` | string | Human-readable summary for agents |
## Status Meanings
| Status | Meaning | Agent should... |
|---|---|---|
| `aligned` | Action aligns with or is unrelated to existing decisions | Proceed normally |
| `conflicting` | Action contradicts one or more existing decisions | Surface conflicts to the user and request confirmation |
| `no-context` | No relevant decisions found | Proceed; consider documenting the decision afterward |
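The table above can be expressed as a dispatch function in an agent. A minimal sketch; the return strings are illustrative recommendations, not values defined by the API.

```typescript
// Map each alignment status to the agent's recommended next step.
type AlignmentStatus = 'aligned' | 'conflicting' | 'no-context';

function nextStep(status: AlignmentStatus): string {
  switch (status) {
    case 'aligned':
      return 'proceed';
    case 'conflicting':
      return 'surface conflicts to the user and request confirmation';
    case 'no-context':
      return 'proceed, then consider documenting the decision';
  }
}
```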
## Action Types
| `action_type` | Use for |
|---|---|
| `jira_ticket` | Jira issue creation |
| `pull_request` | GitHub/GitLab PR descriptions |
| `slack_message` | Slack or Teams messages |
| `commit_message` | Git commit messages |
| `general` | Any other action |
## Performance
- Fast path (vector search only): ~300-500ms; returned when no strong match is found
- LLM path (conflict confirmed): ~1.5-2.5s; returns specific conflict details
- Hard LLM timeout: 1.5s; falls back to `aligned` with `confidence: 0.5` if exceeded
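The documented timeout fallback (return `aligned` with `confidence: 0.5` when the LLM step exceeds its budget) can be sketched with `Promise.race`. This mirrors the server-side behavior client-side for illustration only; the names and fallback message are assumptions.

```typescript
// Illustrative sketch of the documented fallback behavior.
interface CheckResult {
  status: string;
  confidence: number;
  message: string;
}

const TIMEOUT_FALLBACK: CheckResult = {
  status: 'aligned',
  confidence: 0.5,
  message: 'LLM confirmation timed out; defaulting to aligned',
};

// Resolve with the check result if it arrives in time, else with the fallback.
async function withTimeout(check: Promise<CheckResult>, timeoutMs: number): Promise<CheckResult> {
  const timeout = new Promise<CheckResult>((resolve) =>
    setTimeout(() => resolve(TIMEOUT_FALLBACK), timeoutMs)
  );
  return Promise.race([check, timeout]);
}
```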
## Agent Integration Example
When an agent is about to create a Jira ticket:
```typescript
const check = await mcp.callTool('align.check_proposed_action', {
  action_type: 'jira_ticket',
  content: `${ticket.title}\n\n${ticket.description}`,
  context: `Project: ${ticket.project}`,
});

if (check.status === 'conflicting') {
  return `This ticket conflicts with existing decisions:\n${
    check.conflicts.map((c) => `- ${c.title}: ${c.reason}`).join('\n')
  }\n\nProceed anyway?`;
}
```
## Setup
The `align.check_proposed_action` tool is available on the Align MCP server (port 8089). Connect your AI assistant to the Align MCP server; see the AI Assistants (MCP) guide for setup instructions.
For direct REST API access, authenticate with your API key and tenant ID as shown in the REST example above.