Responsible AI
Last updated: 19th November 2025
This page explains how Aneo builds and operates AI features in a safe, privacy‑respecting, and business‑ready way. It applies to aneo.io and all subdomains and aliases, for example app.aneo.io, api.aneo.io, docs.aneo.io, status.aneo.io, and any future subdomains under *.aneo.io.
Our AI assists your teams. It does not replace human judgment, legal counsel, or accredited audits.
Our principles
Useful: solve clear customer problems with measurable value.
Safe: apply guardrails, abuse prevention, and monitored operations.
Private: respect data minimization and give customers control.
Transparent: explain what AI does and where its limits are.
Accountable: log, review, and improve based on evidence.
Fair: reduce bias and test for unintended outcomes.
Where we use AI
Clause‑Review
Finds clauses that may not be in your favor, highlights risks, and drafts plain‑language suggestions. You approve every change. No legal advice.
Incident AI
Helps create and triage tickets, suggests likely cause and next steps, summarizes long threads, and drafts a root cause analysis. You decide and execute the fix.
Framework‑Pro
Guides you through a questionnaire to select a framework (ISO 27001 or NIST CSF), suggests relevant controls, and generates policy drafts tied to the chosen controls. You review and finalize.
What AI does not do
AI outputs are informational. They are not legal advice, a security guarantee, or a compliance certification.
Your controls and choices
Zero‑retention option
On supported plans you can enable a mode where certain AI prompts and outputs are not retained beyond transient processing.
EU data residency
Core product data is processed in EU regions. See details in the Data Processing Agreement.
No training on your content without opt‑in
Customer content is not used to train foundation models unless you explicitly opt in.
Data export and deletion
Export tools are available during your subscription. After termination, data is deleted on a schedule. See the Terms of Service and DPA.
Data handling for AI
Inputs and outputs
Prompts, documents, and generated outputs are processed to deliver the requested feature. We apply encryption in transit and at rest.
Model providers
We use vetted providers for inference. Current providers are listed on Sub‑processors.
Access
Access to customer content is restricted by role and least‑privilege controls. Production access requires MFA and is logged.
Telemetry and logs
We collect limited operational data to secure and operate the service. Retention is time‑bound and described in the DPA.
Safety and quality
Guardrails
Input and output filtering, rate limits, and abuse detection reduce harmful content and prompt injection.
Human in the loop
Workflows require human review for material actions, policy text, and incident changes.
Evaluation
We use test sets and human review to assess accuracy, clarity, and bias. Regressions are blocked from release.
Incident response
Security events follow a documented process with timely customer notification. See Security Overview.
Customer responsibilities
Review AI outputs before acting.
Do not enter prohibited or highly sensitive data unless agreed in writing.
Do not use the products for life‑critical or other high‑risk contexts where failure could cause injury or severe harm.
Follow the Acceptable Use Policy.
Known limitations
AI can be wrong, outdated, or incomplete. It may miss context or misinterpret ambiguous text. Always apply professional judgment and verify with source documents or trusted references.
Reporting issues
If you notice unsafe behavior, bias, privacy concerns, or any other issue, contact us at support@aneo.io or open a support ticket. We investigate, track remediation, and update this page when controls change.
Related documents
Data Processing Agreement
Terms of Service
Acceptable Use Policy
Security Overview
Sub‑processors
Changes
We may update this page as our safeguards evolve. The “Last updated” date shows the current version. Continued use after changes means you accept the updated page.
