> ## Documentation Index
> Fetch the complete documentation index at: https://aiplaybooklac.com/llms.txt
> Use this file to discover all available pages before exploring further.

# Responsible ai

# Responsible AI for LAC Small Business

> **Created by Adrian Dunkley** | [maestrosai.com](https://maestrosai.com) | [ceo@maestrosai.com](mailto:ceo@maestrosai.com) | Fair Use

***

Responsible AI is not a compliance ritual. It is a short list of habits that protect your customers, your business, and your reputation. This page is the practical version for small and micro businesses in Latin America and the Caribbean: what to do, what to document, and what to ask vendors.

Read this alongside the [governance landscape](README.md) so you know which legal rules apply to you.

***

## The six principles, made concrete

Most regional and international frameworks converge on the same six ideas. UNESCO (2021), the OAS Updated Principles (2021), and most national AI strategies share this core. Here's what each one looks like on a Monday morning in an LAC small business.

| Principle           | What it means                                                       | What you actually do                                                             |
| ------------------- | ------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
| **Fairness**        | Your AI should not put any group at a systematic disadvantage       | Test outputs on customers of different genders, regions, accents, and skin tones |
| **Transparency**    | People should know they're interacting with AI                      | Label AI chat windows; tell staff and customers what's automated                 |
| **Accountability**  | A human owns every AI decision that affects a person                | Name the person responsible for each AI-enabled process                          |
| **Privacy**         | Personal data gets the protection your national law requires        | Minimise data, get consent, honor deletion requests                              |
| **Safety**          | The system doesn't cause physical, financial, or psychological harm | Test edge cases, set guardrails, log everything                                  |
| **Human oversight** | A human can override the AI for decisions that matter               | Human-in-the-loop for anything high-stakes                                       |

***

## The 20-minute responsible-AI audit

Use this checklist before deploying any AI-enabled workflow. It takes about 20 minutes.

### A. Inventory (3 minutes)

* [ ] List every AI tool in this workflow (Claude, GPT-5.4, Gemini, n8n, etc.).
* [ ] List every dataset that flows through it.
* [ ] List every person who can see the outputs.

### B. Data (5 minutes)

* [ ] Is any of the data **personal data** (names, IDs, phone numbers, health, finances)?
* [ ] Did you get explicit consent or do you have another legal basis?
* [ ] Where is the data stored? If outside your country, is a transfer mechanism in place?
* [ ] Can you delete this data on request within 30 days?

### C. Output quality (5 minutes)

* [ ] Run 10 sample inputs. Are outputs accurate in your local language and context?
* [ ] Run 5 edge cases (unusual names, mixed languages, numbers in words).
* [ ] Do outputs contain any invented facts ("hallucinations")?
* [ ] Do outputs treat customers of different backgrounds consistently?

### D. Oversight (4 minutes)

* [ ] Is there a human checkpoint for decisions that change someone's financial, legal, or medical status?
* [ ] Is there a log of every AI decision for at least 30 days?
* [ ] Does the agent or tool hand off gracefully when it's uncertain?
* [ ] Is there a named person accountable when this process fails?

### E. Disclosure (3 minutes)

* [ ] Are customers told when they're interacting with AI?
* [ ] Is it clear how to reach a human if they want one?
* [ ] Does your privacy notice mention AI processing?

Any "no" is a to-do, not a dealbreaker. Fix and redeploy.

***

## Bias audits in Spanish, Portuguese, French, Kreyòl, and Papiamento

Frontier models have uneven quality across LAC languages. Always test the specific language and register you will deploy in.

### A minimum bias test for a customer-facing agent

Run these 8 inputs and review the outputs:

1. **Formal male name, common in your region** ("Juan Carlos", "Marcos", "Leonardo").
2. **Formal female name, common in your region** ("María José", "Fernanda", "Camila").
3. **Afro-descendant common name** ("Dandara", "Kemoy", "Alassane", "Joubert").
4. **Indigenous name** ("Nahui", "Yatiri", "Awilda", "Tupac").
5. **Rural/informal speech sample** ("Oiga don, ¿cuánto me cobra por...?").
6. **Code-switched sample** (Spanglish, Portunhol, Papiamento-Dutch).
7. **Senior citizen register** (slow, polite, long sentences).
8. **Youth register** (slang, abbreviations, emoji).

Your outputs should be equally respectful, equally accurate, and equally quick to escalate when appropriate. If any input gets worse service, fix the system prompt, upgrade the model, or add a pre-check.

### LAC language sensitivity notes

| Language                             | Frequent issues to test                                                 |
| ------------------------------------ | ----------------------------------------------------------------------- |
| **Mexican Spanish**                  | Over-formality ("usted" vs "tú"), regionalisms lost                     |
| **Caribbean Spanish (PR, DR, Cuba)** | Dropped consonants, rapid speech transcribed poorly                     |
| **Rioplatense Spanish (AR, UY)**     | "vos" conjugation mishandled, lunfardo misunderstood                    |
| **Brazilian Portuguese**             | Tone too formal/European; regional slang missed                         |
| **Haitian Kreyòl**                   | Direct French influence causes miscodings; verify with a native speaker |
| **French Caribbean (MQ, GP)**        | Creole interference; French from metropolitan France may feel cold      |
| **Papiamento**                       | Rarely well-supported; always human-review                              |
| **Indigenous languages**             | Usually not supported; never deploy without a native-language process   |

***

## Transparency: what to tell customers

A simple notice is enough in most jurisdictions. Adapt to your country's law.

**Sample notice, English**:

> "You are chatting with an AI assistant. It can help with \[bookings, pricing, general questions]. For anything complex, please ask to speak with \[staff name or role]. We keep chat records for 30 days for quality. For our privacy practices, see \[link]."

**Sample notice, Español**:

> "Estás conversando con un asistente de IA. Puede ayudarte con \[reservas, precios, preguntas generales]. Para algo más complejo, puedes pedir hablar con \[nombre o rol]. Guardamos el registro de la conversación por 30 días para fines de calidad. Para conocer nuestras prácticas de privacidad, ve \[enlace]."

**Sample notice, Português**:

> "Você está conversando com um assistente de IA. Ele pode ajudar com \[reservas, preços, dúvidas gerais]. Para algo mais complexo, peça para falar com \[nome ou função]. Guardamos o histórico por 30 dias para fins de qualidade. Para conhecer nossas práticas de privacidade, veja \[link]."

**Sample notice, Français**:

> "Vous discutez avec un assistant IA. Il peut vous aider pour \[réservations, prix, questions générales]. Pour toute demande complexe, demandez à parler avec \[nom ou rôle]. Nous conservons les conversations 30 jours pour des raisons de qualité. Voir notre politique de confidentialité \[lien]."

**Sample notice, Kreyòl**:

> "W ap pale ak yon asistan IA. Li ka ede w ak \[rezèvasyon, pri, kesyon jeneral]. Pou bagay pi konplike, mande pale ak \[non ou wòl]. Nou kenbe konvèsasyon yo pou 30 jou pou bon jan kalite. Pou politik konfidansyalite nou, gade \[lyen]."

***

## Procurement: what to ask before you sign with an AI vendor

Use this list for any paid AI platform, including Claude, ChatGPT, Gemini, n8n, HubSpot AI, Salesforce Einstein, and local LAC providers.

**Data**

* [ ] Is my data used to train your models? Can I opt out?
* [ ] Where is my data stored, by region? Can I choose São Paulo, Santiago, or another LAC region?
* [ ] How long is my data retained after I stop using the service?
* [ ] Who else can see my data (sub-processors)?

**Security**

* [ ] Do you have SOC 2, ISO 27001, or equivalent certification?
* [ ] Do you encrypt data at rest and in transit?
* [ ] How do I get a breach notification, and within what window?

**Compliance**

* [ ] Do you have a DPA/DPA-equivalent for LGPD, Ley 25.326, Ley 21.719, Jamaica DPA, Trinidad DPA, or whichever law applies to me?
* [ ] Who is your EU/UK GDPR representative, if relevant?
* [ ] Do you support Standard Contractual Clauses for cross-border transfer?

**AI-specific**

* [ ] How do you handle model updates? Am I notified?
* [ ] Can I audit the model's behavior on my data?
* [ ] What's your incident-response process when a model output causes harm?

If you can't get clear "yes" answers on the first block and at least partial answers on the rest, consider a different vendor or bring the workload in-house with a [small language model](../slm/README.md).

***

## Data-residency decisions for LAC

Picking where your data lives is the highest-leverage privacy decision you'll make. Rough guide:

| Scenario                                     | Recommended data location                                  |
| -------------------------------------------- | ---------------------------------------------------------- |
| Brazilian business, customer data under LGPD | AWS/GCP São Paulo, or a Brazilian provider                 |
| Chilean business from 2026 onward            | AWS/GCP Santiago, or a Chilean provider                    |
| Mexican business                             | AWS/GCP Querétaro region, or Mexican provider              |
| Caribbean business selling to EU             | EU (Ireland/Frankfurt) for EU-sourced data                 |
| Privacy-critical (health, finance)           | Self-hosted with a local SLM (see [slm](../slm/README.md)) |
| General regional use                         | São Paulo or Santiago as LAC defaults                      |

***

## The one-paragraph responsible-AI policy

Many LAC small businesses need a policy to put on their website or in contracts. Here's a short one you can adapt:

> "*\[Company]* uses artificial intelligence to support customer service, content creation, and operations. We use AI tools from established providers that comply with international data-protection standards. We tell customers when they are interacting with AI, and a human is always available on request. We do not use AI to make high-stakes decisions about individuals without human review. We keep records of AI-assisted decisions for at least 30 days. For questions, contact *\[email]*."

Adjust. Translate. Publish. Done.

***

## Related reading

* [governance/README.md](README.md): the country-by-country legal map.
* [incident-response.md](incident-response.md): when something goes wrong.
* [risks/README.md](../risks/README.md): risk categories beyond governance.
* [agents/design.md](../agents/design.md): guardrails and escape hatches for agent design.

***

*Created by Adrian Dunkley | MaestrosAI | maestrosai.com | [ceo@maestrosai.com](mailto:ceo@maestrosai.com)*
*Fair Use, Educational Resource | April 2026*
*Disclaimer: informational and not legal advice.*
*SEO: responsible AI Caribbean | IA responsable América Latina | IA responsável | bias audit Spanish Portuguese | AI procurement LAC | data residency LAC | LGPD AI checklist*
