Sintérgica AI
Research · Cultural bias in AI

The bias nobody tells you your AI has.

There is a structural bias in language models that frontier AI labs don't put on their landing pages. It's called WEIRD. It has a name, scientific evidence, and measurable consequences in every decision your company makes with AI.

What is WEIRD bias?

In 2010, three researchers found something uncomfortable.

Joseph Henrich, Steven Heine and Ara Norenzayan published a study in Behavioral and Brain Sciences that changed how behavioral science understands itself: the vast majority of what psychology presented as “universal truths” about human behavior came from a very specific type of society.

Western. Educated. Industrialized. Rich. Democratic. The acronym — WEIRD — also means “strange” in English. And that was precisely the point.

W — Western

E — Educated (formally educated)

I — Industrialized

R — Rich (high income)

D — Democratic (liberal democratic)

WEIRD populations represent about 15% of humanity. They are the exception, not the norm. Yet science treated them as the rule.

Over a decade later, the same pattern appeared in artificial intelligence. This time, the consequences aren't academic. They're operational.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.

The science confirms it

Harvard, 2023. 65 countries. 94,278 people.

Harvard researchers (Atari et al., 2023) compared GPT responses with data from real people in 65 countries. The central finding is a correlation that can't be ignored.

Central finding

r = −0.70

The correlation between a country's cultural distance from the U.S. and GPT's similarity to its inhabitants.

The more different your culture is from the American one, the less the AI you're using represents you.

The U.S., Canada, Australia and the U.K. are closest to the profile models naturally replicate. Mexico, like most of Latin America, falls in a zone of significantly lower representation. The bias is in the training data and shapes every response.

[Scatter plot: GPT-human correlation (y-axis, 0.6 to 0.9) vs. cultural distance from the United States (x-axis, 0.00 to 0.20), one point per country, ranging from the United States, Canada, the U.K. and Australia through Mexico, Brazil and Argentina to Nigeria, Pakistan, Jordan and Tunisia. Annotated trend: r = −0.89, p < 0.001, a very strong negative correlation. More accurate AI for countries similar to the U.S.; less accurate for culturally distant ones.]

Key finding

The greater the cultural distance from the U.S., the less accurately AI reflects local human values and reasoning.

Mexico on the chart

Mexico's GPT-human correlation is 0.72, versus 0.85 for Anglo-Saxon countries: a gap of roughly 15% that affects every response.

Implication

Global models are not culturally neutral; they are calibrated to respond like a typical U.S. citizen.

Cultural distance vs. similarity with GPT responses. Greater distance, lower representation. · Atari et al., Harvard 2023
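The study's core metric can be reproduced in miniature. The sketch below uses illustrative, made-up numbers (not the data from Atari et al., 2023) to show how a Pearson correlation between cultural distance from the U.S. and GPT-human response similarity is computed:

```python
import math

# Illustrative, invented values -- NOT the data from Atari et al. (2023).
# cultural_distance: distance from U.S. survey responses (0 = identical).
# gpt_similarity: correlation between GPT answers and local human answers.
countries = {
    "United States": (0.00, 0.86),
    "Canada":        (0.02, 0.85),
    "Mexico":        (0.10, 0.72),
    "Nigeria":       (0.17, 0.65),
    "Pakistan":      (0.19, 0.62),
}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

dist, sim = zip(*countries.values())
r = pearson(dist, sim)
print(f"r = {r:.2f}")  # strongly negative: distance up, similarity down
```

With any data shaped like this, a value of r near −1 means cultural distance almost fully predicts how poorly the model matches local respondents.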
What this means for your company

Four areas where bias stops being theory and becomes cost.

The legal, fiscal and regulatory reasoning of a global model is Anglo-Saxon by design. Applied to Mexico, the output sounds technical — but starts from the wrong system.

Legal

A model trained predominantly on Anglo-Saxon data reasons from Common Law — where precedent binds. Mexico operates under codified civil law: the Commercial Code, Federal Civil Code and state legislation set the rules. When the model suggests clauses or interprets contracts, it can apply legal logic from the wrong system.

Tax

SAT tax logic — fiscal regimes, payment complements, CFDI and deductibility rules — has little to do with the IRS or HMRC. A globally trained assistant can confuse concepts, apply criteria from another jurisdiction or recommend strategies that aren't valid in Mexico. The outcome may be an incorrect filing or a fine.

Government

An agent handling LGTAIP requests needs exact deadlines, procedures and regulations from the Mexican legal framework. The general transparency principles a global model knows are a starting point; the operational details that determine whether a request is processed correctly are local.

Health

Clinical protocols, Normas Oficiales Mexicanas and COFEPRIS regulations are Mexico-specific. A model trained predominantly on U.S. or European data may recommend procedures, dosages or classifications that don't match the national regulatory framework.

36%

of global companies reported direct negative impacts from AI bias in 2024 — including loss of revenue, customers and employees.

AI Bias Report, AllAboutAI, 2025

The analogy

It's the difference between a dubbed film and the original.

Dubbed film

Global AI (ChatGPT, Claude, Gemini)

You follow the plot. But the jokes lose their timing, idioms feel forced and cultural references disappear or get awkwardly adapted. The experience is functional but distant. It was never designed for you.

Original version

Lattice Na'at

Lattice Na'at is the original version. Built specifically to close the WEIRD gap in Mexico and Latin America: with Mexican legislation and jurisprudence corpora, culturally appropriate benchmarks, processing on national infrastructure under Mexican law, and pioneering NLP work for indigenous languages.

The response

Lattice Na'at isn't a translation patch. It's a different design.

Na'at is a family of specialized models, trained on Mexican legislation and jurisprudence corpora, evaluated with benchmarks that don't assume Western context, and deployed on national infrastructure under Mexican law.

01

Mexican Regulatory Corpus

Federal and state legislation, jurisprudence, administrative regulation and sector-specific rules — integrated as base knowledge, not as web search. When Na'at answers about Mexican law, it reasons from Mexican law.

02

Non-WEIRD benchmarks

Spanish HELM and MMLU-LatAm evaluate performance in Spanish without assuming Anglo-Saxon context. If a model scores well on MMLU but fails on MMLU-LatAm, WEIRD bias is active.
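That diagnostic can be stated as a simple rule. The helper below is a hypothetical sketch: the score format and the 5-point threshold are assumptions for illustration, not values defined by Spanish HELM or MMLU-LatAm.

```python
def weird_gap(mmlu_score: float, mmlu_latam_score: float,
              threshold: float = 0.05) -> bool:
    """Flag a model whose general-MMLU score drops notably on MMLU-LatAm.

    Scores are accuracies in [0, 1]; `threshold` is an assumed cutoff,
    not part of either benchmark's specification.
    """
    return (mmlu_score - mmlu_latam_score) > threshold

# A model that loses 9 points on the LatAm variant gets flagged.
print(weird_gap(0.78, 0.69))  # True: WEIRD bias likely active
print(weird_gap(0.78, 0.76))  # False: performance holds up in context
```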

03

Sovereign processing

Data is processed on infrastructure located in Mexico — AWS Querétaro or the client's own servers. It doesn't cross borders. It isn't subject to the CLOUD Act or foreign jurisdiction.
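Residency guarantees of this kind are typically enforced in code as well as in contracts. A minimal sketch of such a guard follows; the allow-list and function are illustrative assumptions (mx-central-1 is AWS's Mexico region identifier), not Na'at's actual configuration.

```python
# Illustrative data-residency guard. The allow-list is an assumption
# for this sketch, not an actual deployment configuration.
ALLOWED_REGIONS = {"mx-central-1", "on-premises-mx"}

def assert_sovereign(region: str) -> None:
    """Refuse to process data outside Mexican infrastructure."""
    if region not in ALLOWED_REGIONS:
        raise RuntimeError(
            f"Region {region!r} is outside Mexican jurisdiction; refusing."
        )

assert_sovereign("mx-central-1")    # OK: AWS Queretaro
try:
    assert_sovereign("us-east-1")   # rejected: foreign jurisdiction
except RuntimeError as err:
    print(err)
```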

04

Indigenous languages

Pioneering NLP work for Nahuatl, Maya and other indigenous languages. More than 7.3 million speakers per INEGI 2020 Census. A first step toward AI that represents the whole country.

The impact in Mexico

What changes when AI is designed from the right context.

01

Accessible government procedures

An assistant that understands SAT, IMSS and INFONAVIT as part of its training — not as a web search. Guides any citizen through a bureaucratic process in plain language, regardless of education level.

02

Contracts in your legal framework

Na'at explains a contract using the correct Mexican law — not translations of Anglo-Saxon clauses that may be inapplicable under the local Commercial Code.

03

Technological sovereignty

Your data is processed in Mexico, under Mexican law, on infrastructure your organization controls. No dependence on foreign jurisdictions.

04

Inclusion that didn't exist

For the first time, a systematic effort to make AI work in the languages millions of Mexicans speak — not just the language that dominates the internet.

05

Digital inclusion

Training in indigenous languages (Nahuatl, Maya) as a first step toward AI that represents the 7.3M+ speakers of indigenous languages in Mexico.

The research behind it

Sintérgica Labs: the systematic work against WEIRD bias.

Four active research lines make up the mitigation program.

01

Non-WEIRD benchmarks

Spanish HELM and MMLU-LatAm: metrics that don't assume Western context.

02

Cultural bias mitigation

Systematic identification and reduction of WEIRD bias in production models.

03

Mexican Regulatory Corpus V1

Curated dataset of Mexican legislation, jurisprudence and regulation.

04

NLP for indigenous languages

Models and tools for Nahuatl, Maya and other indigenous languages.

Next step

Close the WEIRD gap in your operations.

Book a Smart Diagnosis. In 45 minutes we identify where your current AI's bias is costing you — and how Lattice Na'at solves it with your real data.

Book Smart Diagnosis
45 minutes, no cost · No lock-in · Demo with your real data