AI Coding Shield
Security auditing tool for AI development workflows, rules, skills, and MCPs that prevents malicious code execution and risky practices.
The Problem
During a discussion with colleagues about AI, we realized the surge in usage of rules, workflows, and MCPs (Model Context Protocol). We are shifting to a model where we blindly download and use these tools—often from unverified sources—without verifying their safety.
These practices enable rapid code creation but also unsupervised code execution. A simple prompt or agentic workflow could inadvertently install malicious packages, exfiltrate secrets, or grant root access. This sparked the idea of a “shield” to monitor and prevent potential risks before they happen.
The Solution
AI Coding Shield is a static analysis tool designed to inspect components of the AI development lifecycle—Skills, Rules, Workflows, and MCPs—to prevent malicious code or dangerous practices.
Flexibility & Extensibility
The core of AI Coding Shield is its highly flexible, rule-based architecture. It is not a black box; it operates on a config/threats.yaml file where you can:
- Define Custom Rules: Create specific patterns to detect internal policy violations.
- Tune Sensitivity: Adjust severity levels or add “context escalators” based on your environment.
- Configure Trust: Whitelist specific authors or domains for MCPs.
This ensures that the tool adapts to your specific security posture, whether you are a solo developer or an enterprise enforcing strict compliance.
Features
- Workflow Scanning: Audits
.github/workflowsand shell scripts for dangerous patterns. - MCP Security: Verifying MCP servers against trusted authors and domains.
- Risk Detection: Finds dangerous capabilities like root exposure, promiscuous tools, and insecure networking.
- Threat Detection: Identifies command injection, data exfiltration, suspicious package installations, obfuscated code, and persistence mechanisms.
- Reporting: Generates HTML/JSON output for CI/CD integration.
Installation
The project is designed to be easily integrated into your workflow.
Option 1: Cargo (Recommended for Rust users)
If you have Rust installed, this is the easiest way to install and keep updated:
cargo install ai-coding-shield
Option 2: Agentic Skill (via npx/skill)
You can install the CLI as a skill for your AI agent. This allows your agent to self-audit its own tools or the ones it creates.
npx skills add https://github.com/ai-coding-shield/ai-coding-shield --skill ai-coding-shield
See the skill/skill.md in the repository for detailed agent integration instructions.
Option 3: CI/CD Integration (GitHub Action)
A dedicated GitHub Action is available to automatically scan your repository on every push or pull request. This ensures that no risky code or configuration enters your main branch.
- uses: AI-Coding-Shield/ai-coding-shield@v1
with:
path: .agent/
fail-on: critical
Advanced Configuration
The config/threats.yaml file gives you full control over the security engine. It’s not just about turning rules on or off—you can define granular logic:
- Custom Regex Rules: Define patterns to catch internal leakage (e.g.,
INTERNAL_API_KEY). - Context Escalators: Increase risk scores if specific conditions are met (e.g., if a script has an
auto-runflag). - Trusted Entities: Explicitly whitelist trusted MCP authors or domains to avoid false positives while blocking unknown sources.
Why It Matters
As AI agents become more autonomous, they will increasingly interact with third-party tools, APIs, and workflows without direct human oversight. Without a proper security layer, this evolution introduces significant risk.
AI Coding Shield acts as that critical layer of defense, ensuring that you can embrace the future of agentic workflows with confidence. By preventing complex security issues before they occur, it allows you to focus on building innovation, knowing that your infrastructure is protected against the unforeseen risks of automated code execution.
Project Gallery

Key Metrics
Security Controls
250+
License
MIT