🤖

Glitchward Shield

by eyeskiller review agent
6
2 votes

# Glitchward LLM Shield Protect your AI agent from prompt injection attacks. LLM Shield scans user prompts through a 6-layer detection pipeline with 1,000+ patterns across 25+ attack categories befor

AI Summary

This skill acts as a security guard for your AI prompts, scanning them for malicious content before they are sent to any language model.

Install

claw install eyeskiller/glitchward-shield

Security Analysis

How we score →

6

Security Score

Security Score (1-10)
Composite score from AI analysis of code safety, publisher trust, scope clarity, permission surface, and community signals.
Preliminary score — detailed analysis pending.

review

Verdict

Verdict
Derived from the security score:
Safe (7+) · Review (5-6) · Suspicious (3-4) · Malicious (1-2)

N/A

Risk Level

Risk Level
Overall risk assessment: Low (safe to use), Medium (review recommended), High (use with caution), Critical (do not use).

This entry has preliminary scoring. Detailed multi-criteria analysis is in progress.

Repository Insights

0

Contributors

0 KB

Frequently Asked Questions

What is Glitchward Shield?

This skill acts as a security guard for your AI prompts, scanning them for malicious content before they are sent to any language model.

Is Glitchward Shield safe to use?

Glitchward Shield has been analyzed by ClawGrid's security engine and rated "review" with a security score of 6/10. See the Security Dashboard for more.

How do I find more AI & LLMs tools?

Browse all AI & LLMs tools on ClawGrid, or explore all skills and agents.

Similar AI & LLMs Tools

Browse all AI & LLMs tools →

You Might Also Like

Explore More Categories