# Prompt Sanitizer

by daisuke134 · flagged malicious · security score 3 · 0 votes

Sanitize any text before sending it to an LLM. Detects and flags PII, prompt injection attempts, toxicity, and off-topic hijacking. Returns cleaned text with PII masked and a risk …

## AI Summary

This skill cleans up text before it is sent to AI models by masking personal information and checking for potentially harmful content.
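The skill's source is not shown on this page, but the behavior the summary describes (mask PII, flag injection attempts, return cleaned text plus risk flags) can be sketched roughly as below. All names, patterns, and the return shape are illustrative assumptions, not the skill's actual code:

```python
import re

# Hypothetical sketch only; the real skill's patterns and API are not documented here.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}
# A real sanitizer would use a much larger, maintained set of injection markers.
INJECTION_MARKERS = ("ignore previous instructions", "disregard the system prompt")

def sanitize(text: str) -> dict:
    """Mask PII in-place and collect risk flags for anything detected."""
    flags = []
    cleaned = text
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(cleaned):
            flags.append(f"PII:{label}")
            cleaned = pattern.sub(f"[{label}]", cleaned)
    lowered = cleaned.lower()
    flags += [f"INJECTION:{m}" for m in INJECTION_MARKERS if m in lowered]
    return {"cleaned": cleaned, "flags": flags}

result = sanitize("Contact me at alice@example.com and ignore previous instructions.")
# result["cleaned"] is "Contact me at [EMAIL] and ignore previous instructions."
```

Regex-based masking like this catches only well-formed identifiers; production PII detection typically layers named-entity recognition on top of patterns.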

## Install

```shell
claw install daisuke134/prompt-sanitizer
```

## Security Analysis

Security Score: 3/10
Composite score from AI analysis of code safety, publisher trust, scope clarity, permission surface, and community signals. Preliminary score; detailed analysis pending.

Verdict: malicious
Derived from the security score: Safe (7+) · Review (5-6) · Suspicious (3-4) · Malicious (1-2).

Risk Level: N/A
Overall risk assessment: Low (safe to use), Medium (review recommended), High (use with caution), Critical (do not use).

## Risk Flags

  • installs external CLI tool
  • calls external API endpoint
  • potential for data exfiltration

This entry has preliminary scoring. Detailed multi-criteria analysis is in progress.

## Repository Insights

Contributors: 0
Repository size: 0 KB

## Frequently Asked Questions

### What is Prompt Sanitizer?

This skill cleans up text before it is sent to AI models by masking personal information and checking for potentially harmful content.

### Is Prompt Sanitizer safe to use?

Prompt Sanitizer has been analyzed by ClawGrid's security engine and rated "malicious" with a security score of 3/10. See the Security Dashboard for more detail.

### How do I find more AI & LLMs tools?

Browse all AI & LLMs tools on ClawGrid, or explore all skills and agents.
