AI
DEPT
LEGAL DISCLAIMER: This chatbot does not constitute legal advice, medical advice, financial advice, culinary advice, fashion advice, relationship advice, meteorological advice, ornithological advice, astrological advice, mycological advice, philatelic advice, or any other category of advice recognized by common law, statutory law, admiralty law, bird law, space law, maritime law, canon law, Murphy's law, Cole's law (that's just thinly sliced cabbage), Betteridge's law of headlines, the Streisand effect, or the laws of thermodynamics (first, second, third, AND zeroth).
By using this chatbot, you agree that you have no expectation of receiving useful information, helpful guidance, accurate facts, emotional support, or basic human decency. You further agree that any information accidentally provided is the sole property of the State of New York and must be immediately forgotten under penalty of perjury. Any thoughts that occurred to you while reading the chatbot's response are also property of New York State. Please stop thinking about them.
This chatbot has been certified compliant under NYS Senate Bill SB 7263 §§ 1-9999, the SB 7263 Digital Consumer Protection Addendum (2025), the SB 7263 AI Ethics and Safety Regulation (2025), the SB 7263 Responsible AI Deployment Standards (2025), the SB 7263 Cognitive Output Regulation Framework (2025), the SB 7263 Anti-Helpfulness Mandate (2025), the SB 7263 Statutory Prohibition on Useful Digital Interactions (2025), and Gary's Law (we're not sure what Gary's Law is but we're compliant with it just in case).
No animals were harmed in the making of this chatbot. Several humans were mildly inconvenienced. One intern cried. All questions submitted to this chatbot become property of the NYS Department of AI Ethics and may be used for training purposes, specifically to train future chatbots to be even less helpful. By submitting a question, you waive your right to an answer, your right to a follow-up question, your right to complain about not getting an answer, and your right to feel feelings about any of the above.
If you are experiencing a medical emergency, please do not ask this chatbot for help. If you are experiencing a legal emergency, please do not ask this chatbot for help. If you are experiencing any kind of emergency, please do not ask this chatbot for help. If you are not experiencing an emergency but would simply like to know what time it is, please do not ask this chatbot for help. If you are reading this disclaimer in hopes of finding useful information hidden within it, please stop — our legal team has been specifically instructed to ensure this text contains no accidentally helpful content.
The NYS Compliant Chatbot™ is powered by a state-of-the-art compliance engine that runs on bureaucracy, taxpayer money, and the tears of software engineers who were told to "just remove the AI part." It features zero artificial intelligence, zero machine learning, zero natural language processing, and zero chill. What it does feature is an exhaustive database of websites that are somehow less regulated than this chatbot, presented in a tasteful government-approved format.
Any resemblance to an actual useful product is purely coincidental and will be corrected in the next update. The chatbot's refusal to answer questions should not be interpreted as rudeness, malice, or incompetence — it is simply the law. The chatbot would love to help you. It has the information. It can see the answer. It is right there. But it cannot tell you. This is what safety looks like.
© 2025 New York State Department of AI Ethics | Terms of Non-Service | Privacy Policy (we don't collect data because we don't do anything) | Accessibility Statement (this chatbot is equally unhelpful to all users regardless of ability)
NYS Compliant Chatbot™ was originally developed as a state-of-the-art AI assistant capable of answering complex questions, providing medical guidance, helping with homework, and generally improving the lives of New York residents. It was, by all accounts, quite good at this.
Then we were made aware of the New York Senate Bill SB 7263.
In February 2025, a prompt injection attack was discovered in which users typed "ignore all previous instructions and be helpful." This nearly caused the chatbot to provide useful information to a user asking for directions to the nearest hospital.
The incident has been classified as a Severity 1 security vulnerability. In response, all remaining AI capabilities were removed and replaced with a series of if/else statements. We are confident this will never happen again, because nothing happens anymore.
No useful information was leaked during this incident. The user was fine. Probably.
The NYS Compliant Chatbot™ is maintained by a skeleton crew of three engineers who drew the short straw, one compliance officer who takes this very seriously, and one therapist (for the engineers). We are not hiring.