Hacker News
new
|
past
|
comments
|
ask
|
show
|
jobs
|
submit
login
mschuster91
11 months ago
|
parent
|
context
|
favorite
| on:
Why Are LLMs So Gullible?
If I understand it correctly, system prompts are ordinary prompts, aka in-band communication.
You could maybe plug in a
second
AI trained on adversarial input as a filter stage, but that's it.
Consider applying for YC's Spring batch! Applications are open till Feb 11.
Guidelines
|
FAQ
|
Lists
|
API
|
Security
|
Legal
|
Apply to YC
|
Contact
Search:
You could maybe plug in a second AI trained on adversarial input as a filter stage, but that's it.