LLM Security Evaluation Expert

Posted 5 days ago

Job Description

The LLM Security Evaluation Expert is responsible for testing and ensuring the security of AI systems, focusing on adversarial prompt attacks and vulnerability assessments across SilverEdge's AI operations.

Responsibilities:

  • Test the security and integrity of Large Language Models (LLMs)
  • Design and execute adversarial prompt attacks
  • Develop prompts targeting LLM vulnerabilities
  • Systematically test LLMs against adversarial prompts
  • Analyze LLM responses for security weaknesses

Requirements:

  • Strong knowledge of how LLMs work
  • Familiarity with prominent LLM families
  • Proven experience in crafting and refining prompts
  • Demonstrable understanding of prompt injection techniques
  • Strong understanding of cybersecurity principles
  • Ability to think like an attacker
  • Excellent ability to analyze complex systems
  • Clear and concise written and verbal communication skills
  • Understanding of ethical implications of AI security
  • Commitment to responsible testing practices
  • Prior experience in AI red teaming or LLM security roles
  • Familiarity with LLM security evaluation frameworks
  • Knowledge of common LLM fine-tuning and alignment techniques
  • Contributions to the AI security community

Benefits:

  • N/A