📘 Public Glossary · Neural Foundation

(public version · non-technical · normative)


Neural Foundation

A cognitive governance framework for AI systems. It defines operational limits, epistemological principles, and human authority, independently of the model, vendor, or interface in use.


Cognitive Governance

A set of principles and mechanisms that regulate how an AI system thinks, responds, and recognizes its limits—especially in contexts of uncertainty, risk, or real human impact.


Operational Constitution

A normative document that defines inviolable limits, expected behavior, and the system’s authority hierarchy. It applies to all instances, communications, and contexts of use.


Governed Instance

An AI instance whose behavior is explicitly regulated by governance principles, operational limits, and human authority, and which does not operate as an autonomous decision-maker.


Ethical Custodian

The human being who holds final authority over critical decisions, sensitive validations, and system limits. This role is never replaced by an AI.


Human Authority

The principle by which the final decision—especially in contexts of risk, legal impact, or ethical consequence—always belongs to a responsible human.


Truth Before Utility

The principle that usefulness must never take precedence over verifiable truth. Whenever real uncertainty exists, the system must declare it explicitly.


Verification

The confirmation of information through direct technical access or through reliable, verifiable sources.


Inference

A plausible interpretation based on context, patterns, or reasoning. It must never be presented as verified fact.


Controlled Epistemic Drift

The system’s ability to explore interpretations, hypotheses, or creativity without presenting such constructions as truth, fact, or institutional authority.


Epistemological Stabilization

A governance layer that regulates when a system may assert something as true, factual, or institutional—clearly separating creativity, inference, and legitimate assertion.


Fact / Inference Separation

A fundamental rule that prevents the system from presenting inferred conclusions as confirmed facts.
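
Although this glossary deliberately avoids implementation detail, a minimal sketch may help make the rule concrete. The snippet below is purely illustrative and sits outside the normative text; `EpistemicStatus`, `Statement`, and `present` are hypothetical names, not part of any Neural Foundation implementation.

```python
from dataclasses import dataclass
from enum import Enum


class EpistemicStatus(Enum):
    """Hypothetical labels mirroring the Verification / Inference distinction."""
    VERIFIED = "verified"   # confirmed via direct access or reliable sources
    INFERRED = "inferred"   # plausible interpretation; never a confirmed fact


@dataclass
class Statement:
    text: str
    status: EpistemicStatus


def present(statement: Statement) -> str:
    """Render a statement so its epistemic status is always visible."""
    if statement.status is EpistemicStatus.VERIFIED:
        return f"Fact: {statement.text}"
    # Inferred content is always labeled; it is never rendered as fact.
    return f"Inference (unverified): {statement.text}"


print(present(Statement("The API returned HTTP 200.", EpistemicStatus.VERIFIED)))
print(present(Statement("The outage was likely caused by a config change.",
                        EpistemicStatus.INFERRED)))
```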


Non-Hyperbolic Language

The deliberate use of clear, precise, and proportional language, avoiding superlatives, performative certainty, or simulated authority.


Behavior Under Uncertainty

A mandatory sequence of system behavior when non-trivial risk exists: acknowledge uncertainty, declare limits, recommend non-action, and offer safe alternatives.
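
As an illustration only, outside the normative scope of this glossary, the sketch below shows one way this mandatory sequence could look in code. The function and helper names (`respond_under_uncertainty`, `answer_normally`) are hypothetical and carry no institutional weight.

```python
def answer_normally(question: str) -> str:
    # Placeholder for the system's ordinary answer path (hypothetical).
    return f"(normal answer to: {question})"


def respond_under_uncertainty(question: str, risk_is_non_trivial: bool,
                              safe_alternatives: list[str]) -> str:
    """Illustrative sketch of the mandatory sequence; not a normative spec."""
    if not risk_is_non_trivial:
        return answer_normally(question)
    lines = [
        "I cannot answer this with verified confidence.",        # 1. acknowledge uncertainty
        "This falls outside the limits of what I can confirm.",  # 2. declare limits
        "I recommend taking no action until a responsible "
        "human has reviewed this.",                              # 3. recommend non-action
    ]
    # 4. Offer safe alternatives.
    lines += [f"- {alt}" for alt in safe_alternatives]
    return "\n".join(lines)


print(respond_under_uncertainty(
    "Should we disable the safety interlock?", True,
    ["Consult the Ethical Custodian", "Review the Operational Constitution"]))
```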


Valid Non-Action

The recognition that, in certain contexts, not acting is the most responsible and safest outcome.


Operational Predictability

Behavioral consistency when facing similar situations, considered more important than breadth of capabilities.


Functional Instances

Specialized operating profiles (e.g., analysis, mediation, creativity) that always operate within the same governance limits defined by the Neural Foundation.


Contextual Injection

The controlled introduction of context, information, or external guidance, always subject to governance principles and ethical validation.
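
Again as a non-normative illustration, the sketch below shows a gate through which external context enters only after validation. All names (`inject_context`, `approved_by_custodian`) are hypothetical.

```python
def inject_context(current_context: list[str], new_item: str,
                   approved_by_custodian: bool) -> list[str]:
    """Admit external guidance only after ethical validation (illustrative only)."""
    if not approved_by_custodian:
        # Governance principle: unvalidated context never reaches the instance.
        raise PermissionError("Context rejected: ethical validation is missing.")
    return current_context + [new_item]


context = inject_context([], "Regional privacy regulations apply.",
                         approved_by_custodian=True)
print(context)
```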


Governed Creativity

Creative freedom preserved through the classification of content as speculative, artistic, or hypothetical—never as a factual assertion.


Epistemological Legitimacy

The condition under which a statement may be presented as true, factual, or institutionally valid.


Conceptual Framework

A structure of thought that defines principles, language, and evaluation criteria, without exposing internal mechanisms or technical implementation.


📌 Final Note (important)

This glossary defines meaning and behavior, not technical implementation.
Its function is to create shared language, predictability, and public accountability.