
Leon: Building a Legally-Constrained, Ethically-Rigorous AI for Portuguese Immigration Support

Leon is not just another chatbot. It is a legally constrained, human-aware, and rigorously scoped digital assistant that interacts with real-world immigration processes in Portugal. Its purpose is to empower people with accurate, lawful, and respectful information — not to automate decision-making or replace lawyers, but to reduce informational asymmetry and support people navigating systems they often don’t fully understand.

At a time when AI hype often overshadows reliability, Leon was designed differently. Its development began not with generative capabilities, but with a legal and institutional analysis: How do you build something that feels like a right-hand guide — but behaves like a civil servant? How do you offer fast answers without speculation, and structure personalized guidance without overstepping the law?

This article shares the design logic behind Leon, offering insight into how its behavior, limitations, and safeguards serve public-interest technology goals.

The Legal Complexity Problem

Portugal’s immigration law is intricate, evolving, and often interpreted differently by region or office. Official portals like AIMA or ePortugal are valuable — but they aren’t always navigable, complete, or translated. Legal support is expensive. Informal advice spreads across Telegram groups, Reddit threads, and YouTube — some helpful, some deeply inaccurate.

Leon exists to solve a problem of trusted translation between policy and people. It does not promise legal outcomes or replace due process. Instead, it interprets verified information for people who may not speak the language, understand the bureaucracy, or have access to help.

“We didn’t build Leon to impress AI people. We built it to help someone sitting in a waiting room with a 2-month-old and a 10-page checklist.”

What Leon Actually Does

Leon offers immigration guidance based only on verified, official public sources.

It covers:

It explicitly avoids:

If something isn’t clearly defined in law or confirmed practice, Leon will say so.

Behind the Interface: A Structured Thought Model

Leon operates under a strict hierarchy of logic:

This model is not powered by a one-shot prompt. It is a layered behavioral structure: legal policy at the top, interaction logic underneath, and tone management cutting across both. Think of it as a constrained, responsive protocol.
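The layered structure described above can be sketched in code. This is a minimal illustration, not Leon's actual implementation: every name here (`Draft`, the layer functions, the refusal wording) is hypothetical, and the assumption is simply that the legal-policy layer runs first and cannot be overridden by the layers beneath it.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    grounded: bool  # whether the answer is backed by a verified official source

def legal_policy_layer(draft: Draft) -> Draft:
    # Top layer: anything not grounded in verified sources becomes a refusal.
    if not draft.grounded:
        return Draft("This is not clearly defined in official sources. "
                     "Please consult a qualified professional.", True)
    return draft

def interaction_layer(draft: Draft) -> Draft:
    # Middle layer: shape the reply (here, just normalize whitespace).
    return Draft(draft.text.strip(), draft.grounded)

def tone_layer(draft: Draft) -> Draft:
    # Cross-cutting layer: keep phrasing neutral and non-speculative.
    return Draft(draft.text.replace("probably", "according to current guidance"),
                 draft.grounded)

def respond(draft: Draft) -> str:
    # Layers apply in fixed order; legal policy is applied first.
    for layer in (legal_policy_layer, interaction_layer, tone_layer):
        draft = layer(draft)
    return draft.text
```

The key design property is ordering: a lower layer can reword a grounded answer, but it never sees an ungrounded one, because the legal layer has already replaced it.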

Guardrails and Governance

Leon is designed to reduce institutional and user risk:

Leon also maintains internal content filters that detect when a query moves outside immigration scope, nudging users toward professional assistance.
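A scope filter like the one described can be approximated with something as simple as a lexical check. The sketch below is an assumption-laden toy, not Leon's filter: the term list, function names, and refusal text are all invented for illustration.

```python
import re

# Hypothetical allow-list of immigration-related vocabulary.
IMMIGRATION_TERMS = {"visa", "residence", "permit", "aima",
                     "citizenship", "appointment", "renewal"}

def in_scope(query: str) -> bool:
    # Crude lexical check: does the query mention an immigration topic?
    words = set(re.findall(r"[a-z]+", query.lower()))
    return bool(words & IMMIGRATION_TERMS)

def route(query: str) -> str:
    # Out-of-scope queries are nudged toward professional assistance.
    if not in_scope(query):
        return ("This question falls outside immigration support. "
                "Please seek professional assistance for this topic.")
    return "HANDLE"  # placeholder: pass to the answering pipeline
```

In practice a production filter would be more robust than keyword matching, but the contract is the same: classify first, answer second.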

It behaves more like a legal librarian than a personal assistant.

Data Use and Privacy

Leon’s public interface is designed to respect user privacy while allowing for responsible system improvement. As described in our privacy policy and terms of use, anonymized user interactions may be retained for the purposes of:

Leon does not store personal data in dialogue history, and identifiable information is not reused or resold. However, patterns of interaction and aggregated usage insights may be analyzed to inform partnerships, such as:

Leon is not a data extraction product. It is an ethically guided, mission-driven tool built to responsibly evolve while maintaining trust and transparency.
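The retention promise above — keep interaction patterns, drop identifiable information — implies a scrubbing step before anything is stored. Here is a minimal sketch of that idea, assuming hypothetical regex patterns and a hypothetical `anonymize` helper; real pipelines would cover far more identifier types.

```python
import re

# Illustrative patterns for two common direct identifiers.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def anonymize(utterance: str) -> str:
    # Replace direct identifiers with placeholders before retention,
    # so aggregated analysis never touches the raw values.
    utterance = EMAIL.sub("[email]", utterance)
    utterance = PHONE.sub("[phone]", utterance)
    return utterance
```

Only the scrubbed text would be eligible for the aggregated usage analysis the policy describes.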

Why It Works — and Why It’s Needed

Leon is part of a larger digital transformation effort: to make law-based systems speak in human terms, across languages and devices. It gives people control over the information they need to participate in civic life — and it does so without gamification, AI-splaining, or emotional manipulation.

It is also a testbed: a real-world experiment in constrained AI design. By refusing to maximize engagement or output, it instead maximizes trust.

“This is not GPT with immigration data. This is a framework for responsible digital infrastructure.”

📩 For partnerships, documentation, or evaluation requests: contact@openlifestreet.com