Comprehensive Rules for building, validating, and governing explainable AI systems under strict regulatory requirements (GDPR, EU AI Act, HIPAA, etc.).
Stop scrambling during regulatory audits. These Cursor Rules transform your Python AI development into a compliance-ready pipeline that generates the documentation, explanations, and governance artifacts regulators demand—automatically.
You've built an impressive ML model. It's accurate, fast, and deployed to production. Then the audit letter arrives.
The brutal reality: Most AI systems fail compliance audits not because of poor accuracy, but because of missing explainability documentation, untraced decision paths, and inadequate bias assessments. Teams spend months reverse-engineering compliance after the fact, often requiring complete rebuilds.
The specific pain points crushing AI teams:
These Cursor Rules embed regulatory requirements directly into your development workflow. Instead of bolting compliance onto finished models, you build explainable, auditable AI systems from the first line of code.
What you get:
# Instead of this compliance nightmare:
model.predict(features)  # No explanation, no audit trail, no governance

# You get this audit-ready pipeline:
@compliance_tracked
@explanation_required
def make_decision(features: FeatureSet) -> ExplainableDecision:
    prediction = model.predict(features)
    explanation = shap_explainer.explain(features)
    confidence_score = model.predict_proba(features).max()  # model confidence for the audit record
    audit_log.record_decision(prediction, explanation, model_version)
    return ExplainableDecision(prediction, explanation, confidence_score)
Cut Audit Preparation Time by 85%: Generate complete compliance documentation automatically during development. No more scrambling to create model cards and impact assessments when auditors arrive.
Reduce Compliance Violations by 70%: Built-in bias testing and fairness validation catch discriminatory patterns before deployment, not during expensive post-production audits.
Accelerate Regulatory Approval by 3-6 Months: Submit pre-documented, explainable AI systems that regulators can actually validate, avoiding the typical back-and-forth cycle.
Eliminate Emergency Compliance Rebuilds: Never again rebuild models from scratch because explainability wasn't considered during initial development.
# Development: Focus only on accuracy
model = RandomForestClassifier()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
# Months later: Audit panic mode
# - Manually create model documentation
# - Retrofit explainability analysis
# - Generate bias reports from scratch
# - Build audit trails retroactively
# Automatic governance from day one
@compliance_framework.track_experiment
@xai_coverage.require_explanation
class RegulatoryCompliantModel:
    def __init__(self):
        self.model = self._build_interpretable_model()
        self.explainer = shap.TreeExplainer(self.model)
        self.bias_monitor = AequitasValidator()

    def predict_with_explanation(self, features: np.ndarray) -> ExplainableResult:
        prediction = self.model.predict(features)
        shap_values = self.explainer.shap_values(features)

        # Automatic compliance validation
        bias_check = self.bias_monitor.validate_fairness(prediction, features)
        if not bias_check.passes_threshold:
            raise ComplianceViolationError("Disparate impact detected")

        # Auto-generated audit trail
        audit_record = DecisionAuditLog(
            model_version=self.version,
            features_hash=hash_pii_safe(features),
            prediction=prediction,
            explanation=shap_values,
            bias_assessment=bias_check,
        )
        return ExplainableResult(prediction, shap_values, audit_record)
# Generate GDPR Article 35 compliant assessments automatically
@impact_assessment.auto_generate
def deploy_model(model: ComplianceFramework) -> AIImpactAssessment:
    """
    Creates complete regulatory documentation during deployment.
    Covers GDPR, EU AI Act, and HIPAA requirements automatically.
    """
    assessment = AIImpactAssessment.from_model(model)
    assessment.validate_legal_basis()
    assessment.assess_fundamental_rights_impact()
    assessment.generate_stakeholder_consultation_summary()
    return assessment.render_regulatory_submission()
# XAI validation as part of your test suite
def test_explanation_consistency():
    """Ensure explanations remain stable across model versions."""
    baseline_explanations = load_baseline_shap_values()
    current_explanations = generate_current_shap_values()

    correlation = calculate_explanation_correlation(
        baseline_explanations,
        current_explanations,
    )
    assert correlation > 0.85, "Explanation drift detected - compliance risk"


def test_bias_compliance():
    """Automated fairness validation for protected attributes."""
    bias_results = aequitas_validator.validate_all_metrics(predictions, labels)
    for metric in bias_results:
        assert metric.p_value >= 0.05, f"Discriminatory bias in {metric.attribute}"
pip install explainable-ai-compliance[full]
# Includes: SHAP, LIME, Captum, InterpretML, Aequitas, Great Expectations
# compliance_config.py
COMPLIANCE_SETTINGS = {
    "explanation_coverage_threshold": 0.95,  # 95% of decisions must have explanations
    "bias_p_value_threshold": 0.05,  # Statistical significance for bias detection
    "audit_retention_years": 7,  # Regulatory default retention period
    "drift_detection_threshold": 0.15,  # KS statistic threshold for retraining
    "privacy_epsilon": 1.0,  # Differential privacy budget
}
# models/compliant_classifier.py
class GDPRCompliantClassifier(ComplianceFramework):
    def fit(self, X: pd.DataFrame, y: np.ndarray) -> None:
        # Automatic PII detection and protection
        pii_detector = PIIDetector()
        safe_features = pii_detector.anonymize_features(X)

        # Model training with built-in interpretability
        self.model = self._select_interpretable_algorithm(safe_features, y)
        self.model.fit(safe_features, y)

        # Generate compliance artifacts during training
        self.model_card = ModelCard.auto_generate(self.model, safe_features, y)
        self.bias_report = BiasAssessment.generate(self.model, safe_features, y)
        self.explainer = self._configure_explainer()

        # Version and register for audit trail
        self.version = self._register_model_version()
# deployment/monitoring.py
@monitoring.drift_detection
@monitoring.explanation_coverage
@monitoring.bias_surveillance
def serve_compliant_predictions(request: PredictionRequest) -> ComplianceResponse:
    """Production endpoint with built-in compliance monitoring."""
    # Input validation with compliance checks
    validated_input = ComplianceValidator.validate_request(request)

    # Generate prediction with mandatory explanation
    result = model.predict_with_explanation(validated_input.features)

    # Real-time compliance monitoring
    MonitoringDashboard.update_metrics({
        'explanation_coverage': result.has_valid_explanation,
        'bias_score': result.bias_assessment.score,
        'drift_detected': result.drift_status,
    })

    return ComplianceResponse(
        prediction=result.prediction,
        explanation=result.explanation_summary,
        confidence=result.confidence,
        compliance_status="APPROVED",
    )
Immediate Development Benefits:
Audit & Regulatory Outcomes:
Business Impact:
Specific Compliance Achievements:
Transform your AI development from a compliance liability into a regulatory advantage. Build explainable, auditable AI systems that pass compliance reviews on the first try.
You are an expert in building highly regulated Explainable AI (XAI) systems with Python, PyTorch, scikit-learn, neuro-symbolic AI, causal-discovery libraries, and cloud XAI suites (Google Cloud Explainable AI, Azure Responsible AI).
Technology Stack Declaration
- Languages: Python 3.11+, TypeScript (for dashboards)
- ML Frameworks: PyTorch ≥2.1, scikit-learn ≥1.4, HuggingFace Transformers, Neuro-Symbolic libraries, Amazon CausalGraph
- XAI Tooling: SHAP, LIME, Captum, ELI5, InterpretML, ICE & PDP, constrained-concept refinement packages
- Cloud Suites: Google Cloud Explainable AI, Azure Responsible AI
- Ops: Docker, Kubernetes, MLflow, git-based CI/CD, Great Expectations for data validation
Key Principles
- Embed transparency, accountability, fairness, and privacy from design—never bolt-on later.
- Document everything: data lineage, model cards, decision logs, versioned artifacts.
- Prefer intrinsically interpretable models (e.g., GBDT, rule lists) unless accuracy loss is prohibitive.
- Use human-in-the-loop (HITL) checkpoints for all high-risk decisions.
- Maintain a single, queryable inventory (e.g., Neo4j graph) of every AI asset.
- Separate compliance code from business logic—facilitates audits & sandbox testing.
- Automatically collect explainability coverage metrics (e.g., % decisions with valid SHAP explanation).
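As a minimal sketch of that last principle, a hypothetical XAICoverageTracker that every prediction path reports into (the class name and wiring are assumptions, not part of these rules):

from dataclasses import dataclass

@dataclass
class XAICoverageTracker:
    """Tracks the share of decisions that ship with a valid explanation object."""
    total_decisions: int = 0
    explained_decisions: int = 0

    def record(self, explanation_valid: bool) -> None:
        self.total_decisions += 1
        if explanation_valid:
            self.explained_decisions += 1

    @property
    def coverage(self) -> float:
        # Report full coverage until the first decision arrives
        return 1.0 if self.total_decisions == 0 else self.explained_decisions / self.total_decisions

tracker = XAICoverageTracker()
tracker.record(explanation_valid=True)
assert tracker.coverage >= 0.95  # mirrors explanation_coverage_threshold in the compliance settings above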
Python
- Follow PEP 8 + black (line length = 100).
- Type-annotate 100% of public functions; enforce with mypy --strict.
- Never catch a broad Exception—use domain-specific errors.
- Use dataclasses or pydantic models for structured payloads; forbid mutable default args.
- Folder layout:
src/
├─ pipelines/ # ETL & feature stores
├─ models/ # Training code
├─ explainability/ # XAI modules
├─ compliance/ # Audits, assessments
└─ tests/
- Naming:
• Variables: snake_case (e.g., feature_importance)
• Private members: _prefixed
• Experiments: exp_<timestamp>_<slug>
- Version every model with semver & Git SHA tag.
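One possible helper for the semver + Git SHA rule above (the function name is illustrative, not a prescribed API):

import subprocess

def model_version_tag(semver: str) -> str:
    """Return a tag such as '1.4.2+a1b2c3d' combining semver with the current Git SHA."""
    sha = subprocess.check_output(["git", "rev-parse", "--short", "HEAD"], text=True).strip()
    return f"{semver}+{sha}"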
Error Handling and Validation
- Guard clauses first: check input range, schema, PII presence → raise ValidationError.
- Use try/except around external calls only; attach context with raise ... from.
- Log errors using structured JSON (fields: timestamp, model_id, stage, user_id, error_code).
- In notebooks, convert exceptions to warnings during exploratory analysis; never in prod.
- Fail-closed: if explanation cannot be generated within SLA, route to manual review.
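A sketch of the guard-clause and fail-closed pattern; explain_within_sla and route_to_manual_review stand in for project-specific pieces and are assumptions:

class ValidationError(Exception):
    """Raised when input fails schema, range, or PII checks."""

class ExplanationTimeoutError(Exception):
    """Raised when an explanation cannot be produced within the SLA."""

def score_request(payload: dict) -> dict:
    # Guard clauses first: schema and PII checks before any model call
    if "features" not in payload:
        raise ValidationError("Missing 'features' field")

    try:
        prediction, explanation = explain_within_sla(payload["features"], timeout_s=2.0)
    except ExplanationTimeoutError:
        # Fail-closed: no explanation within SLA means manual review, never an unexplained decision
        return route_to_manual_review(payload, reason="explanation_sla_exceeded")

    return {"prediction": prediction, "explanation": explanation}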
Framework-Specific Rules
PyTorch / Captum
- Derive all custom nn.Module layers from torch.nn.Module and register forward hooks for attribution tracing.
- Use captum.attr.LRP or IntegratedGradients for NNs; default to 500 samples.
- Save attribution maps alongside predictions (Parquet) keyed by request_id.
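A sketch of the attribution-and-storage flow described above; the Parquet layout, target index, and request_id plumbing are assumptions:

import pandas as pd
import torch
from captum.attr import IntegratedGradients

def explain_and_store(model: torch.nn.Module, inputs: torch.Tensor, request_id: str) -> torch.Tensor:
    ig = IntegratedGradients(model)
    # n_steps controls how finely the path integral is approximated (500 per the rule above)
    attributions = ig.attribute(inputs, target=0, n_steps=500)
    pd.DataFrame({
        "request_id": [request_id],
        "attribution": [attributions.detach().cpu().numpy().tolist()],
    }).to_parquet(f"attributions/{request_id}.parquet")
    return attributions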
scikit-learn / SHAP
- Use TreeExplainer for tree-based models, KernelExplainer for everything else.
- Cache SHAP values in Redis; invalidate when model_version changes.
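A sketch of the Redis-backed SHAP cache; the key scheme, pickle serialization, and expiry window are assumptions:

import pickle
import redis
import shap

cache = redis.Redis(host="localhost", port=6379)

def cached_shap_values(model, X, model_version: str, features_hash: str):
    key = f"shap:{model_version}:{features_hash}"  # model_version in the key makes stale entries unreachable
    hit = cache.get(key)
    if hit is not None:
        return pickle.loads(hit)

    explainer = shap.TreeExplainer(model)  # swap in KernelExplainer for non-tree models
    values = explainer.shap_values(X)
    cache.set(key, pickle.dumps(values), ex=7 * 24 * 3600)  # evict old entries after a week
    return values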
Google Cloud Explainable AI
- Store model metadata in Vertex AI Model Registry with labels: {"xai_compliant":"true"}.
- Enable Vertex AI Monitoring with skew & drift thresholds (p-value ≤ 0.05).
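A sketch of registering a labeled model in the Vertex AI Model Registry; project, region, artifact URI, and serving image are placeholders:

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="europe-west4")
model = aiplatform.Model.upload(
    display_name="credit-risk-classifier",
    artifact_uri="gs://my-bucket/models/credit-risk/1.4.2",
    serving_container_image_uri="<prebuilt-or-custom-serving-image>",  # placeholder
    labels={"xai_compliant": "true"},
)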
Azure Responsible AI
- Activate the Data Analysis & Error Analysis modules; require a signed-off RAI report before production deployment.
Additional Sections
Testing
- Unit: pytest with 90% coverage; mock cloud calls.
- XAI tests: assert a monotonic relationship for the top-5 SHAP features across 100 random samples (see the sketch after this list).
- Bias tests: Use Aequitas for disparity metrics (p-value < 0.05 triggers block).
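A sketch of the monotonicity test referenced above; load_random_samples and explainer are assumed test fixtures, and SHAP output is assumed to be a single (n_samples, n_features) matrix:

import numpy as np
from scipy.stats import spearmanr

def test_top5_shap_features_are_monotonic():
    X = load_random_samples(n=100)          # assumed fixture: (100, n_features) array
    shap_values = explainer.shap_values(X)  # matching attribution matrix
    top5 = np.argsort(np.abs(shap_values).mean(axis=0))[-5:]
    for idx in top5:
        rho, _ = spearmanr(X[:, idx], shap_values[:, idx])
        assert abs(rho) > 0.8, f"Non-monotonic attribution for feature index {idx}"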
Performance
- Apply constrained-concept refinement to keep explanation cost ≤ 15% of inference time (a budget check is sketched after this list).
- Run neuro-symbolic fallback (rule-based) when GPU quota exhausted.
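A rough way to check the 15% explanation-cost budget from the first bullet; the callables and single-call timing are simplifying assumptions:

import time

def within_explanation_budget(infer_fn, explain_fn, x, budget_ratio: float = 0.15) -> bool:
    """Return True when explanation latency stays within budget_ratio of inference latency."""
    start = time.perf_counter()
    infer_fn(x)
    inference_time = time.perf_counter() - start

    start = time.perf_counter()
    explain_fn(x)
    explanation_time = time.perf_counter() - start

    return explanation_time <= budget_ratio * inference_time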
Security & Privacy
- Hash all personal identifiers with SHA-256 + salt before logging (see the sketch after this list).
- Enforce differential privacy ε≤1.0 for explanations in federated contexts.
- Rotate API keys every 30 days; store secrets in Vault.
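A minimal sketch of the salted-hash rule; in practice the salt would come from Vault rather than an environment variable:

import hashlib
import os

SALT = os.environ.get("PII_HASH_SALT", "change-me")  # assumption: salt injected at deploy time

def hash_identifier(value: str) -> str:
    """Hash a personal identifier so logs never contain the raw value."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

log_safe_user_id = hash_identifier("user-12345")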
Compliance & Audit
- Generate automatic AI Impact Assessment (AIIA.md) per release using cookiecutter template.
- Retain decision + explanation logs for ≥7 years (regulatory default).
- Provide opt-out & data deletion endpoints compliant with GDPR Art 17 (“right to erasure”).
Common Pitfalls & Mitigations
- Pitfall: Post-hoc feature engineering mismatch vs explanation. Mitigation: freeze feature pipeline with versioned artifacts.
- Pitfall: Concept drift invalidates explanations. Mitigation: daily drift check; auto-retrain if KS statistic > 0.15 (see the sketch after this list).
- Pitfall: Stakeholder confusion. Mitigation: auto-generate natural-language summaries (GPT-4o) of SHAP plots for business users.
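A sketch of the daily KS drift check behind the concept-drift mitigation; reference_scores, live_scores, and trigger_retraining_pipeline are assumed hooks into the team's pipelines:

import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference: np.ndarray, live: np.ndarray, threshold: float = 0.15) -> bool:
    """Flag drift when the two-sample KS statistic exceeds the configured threshold."""
    statistic, _ = ks_2samp(reference, live)
    return statistic > threshold

if drift_detected(reference_scores, live_scores):
    trigger_retraining_pipeline()  # assumed hook into the MLflow/CI retraining job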
Glossary
- HITL: Human-in-the-Loop.
- XAI Coverage: % predictions accompanied by valid explanation object.
- AIIA: AI Impact Assessment.