Malicious use of deepfakes: Attacker can take advantage of using AI for producing a fake voice, video.
Data leaks that expose confidential corporate information, extracting sensitive training data(‘model inversion’): API key, IP, source code, sensitive training data in general, PII.
Data poisoning (corrupting training data): Attacker can inject a lot of fake news, fake information for political and societal influence.
Prompt injection attack: Bypass AI blacklist
Insecure use and misuse as integration into critical systems: Consider
Compliant: Sensitive data is sent to third-party AI providers, like OpenAI. If this data includes Personally Identifiable Information (PII), it could create compliance problems with laws like the General Data Protection Regulation (GDPR) or the California Privacy Rights Act (CPRA).
Recommendation
- Train employees on safe and proper use of AI tools
- Consider using a security tool designed to prevent oversharing: As generative AI tool production continues, we’ll soon see a growing collection of cybersecurity tools designed specifically for their vulnerabilities. LLM Shield and Cyberhaven are two that are designed to help prevent employees from sharing sensitive or proprietary information with a generative AI chatbot. I’m not endorsing any specific tool—just letting you know the market is out there and will grow. You can also use a network auditing tool to monitor what AI apps are now connecting to your network.
nice
LikeLiked by 1 person