FAQ Center

Frequently Asked Questions

Browse verified answers on SAFE AI governance, Solace, provider workflows, beta operations, and partnerships.

General FAQ

Core questions about SAFE AI and the mission model.

What is SAFE AI Trust?

SAFE AI Trust is a trauma-informed AI infrastructure initiative designed to protect vulnerable populations through accountable and auditable systems.

Why a trust and not a company?

The trust structure protects long-term mission commitments around survivor safety, ethical governance, and harm reduction.

What problem is SAFE AI solving?

SAFE AI addresses fragmented survivor support, referral breakdowns, confidentiality risks, and retraumatization caused by repetitive storytelling.

What makes SAFE AI different from other AI platforms?

SAFE AI is trauma-informed by design, human-in-the-loop, audit-traceable, and built from African context rather than imported assumptions.

Does SAFE AI sell user data?

No. SAFE AI does not sell, trade, or monetize personal user data.

Is SAFE AI just a chatbot?

No. SAFE AI is infrastructure for safe triage, routing, governance, and service coordination.

Solace FAQ

Questions about survivor-facing support in Solace.

What is Solace?

Solace is a trauma-informed digital harm-prevention and response system that connects survivors to verified support providers.

Is Solace confidential?

Yes. Solace uses data minimization, controlled access, transparency audits, and secure authentication.

Do I have to share everything about my experience?

No. Solace is designed to reduce retraumatization by collecting only relevant information and avoiding unnecessary repetition.

Who can see my information?

Access is limited to the user, matched providers, and authorized administrators under audited controls.

What happens if a provider behaves unethically?

Reported misconduct can trigger suspension, review, and escalation procedures to protect survivors.

Is Solace free?

During beta, survivor access is free. Future funding models are intended to minimize paywalls for survivors.

SolaceConnect FAQ

Questions about provider verification and operational governance.

What is SolaceConnect?

SolaceConnect is SAFE AI's provider verification and governance portal for referral integrity and accountability.

How are providers vetted?

Vetting includes credentials verification, licensing checks, ethical review, risk scoring, and ongoing monitoring.

Can a provider lose access?

Yes. Access can be suspended, restricted, or revoked when safety and compliance standards are breached.

Why monitor provider interactions?

Monitoring supports timeliness, ethical conduct, documentation quality, and abuse prevention.

Does SolaceConnect replace hospital or NGO systems?

No. It integrates with local workflows to improve coordination and transparency.

AI, Ethics, and Governance FAQ

How SAFE AI manages model responsibility and oversight.

Is AI making final decisions about survivors?

No. AI supports triage and recommendations, while humans remain accountable for outcomes.

What does trauma-informed AI mean?

It means language, flow, and escalation behavior are designed to reduce triggering interactions and cognitive burden.

What is federated architecture in SAFE AI?

Federated architecture minimizes centralization so institutions keep control of their own data while coordination remains secure.

How do you prevent bias in AI systems?

SAFE AI uses recurring audits, demographic checks, local-context testing, and human override controls.

Are you aligned with data protection requirements?

Yes. SAFE AI aligns with the Kenyan Data Protection Act and privacy-preserving governance practices.

Beta Launch FAQ

What beta means for reliability, feedback, and security.

What does beta mean?

Core workflows are live and functional, while performance and UX are continuously improved through real-world feedback.

Will there be bugs?

Possibly. Beta helps identify edge cases, refine flows, and optimize behavior under load.

How can I report a problem?

Users can submit in-app feedback or contact SAFE AI directly at safeai@adanianlabs.io.

Is my data safe during beta?

Yes. Role-based access, logging, vetting controls, and authentication safeguards remain active during beta.

Institutional and Partnership FAQ

How organizations integrate with SAFE AI systems.

Can hospitals or NGOs partner with SAFE AI?

Yes. SAFE AI works with hospitals, crisis centers, legal aid providers, and psychosocial support organizations.

Does SAFE AI replace government systems?

No. SAFE AI is designed to complement and strengthen existing public infrastructure.

Can SAFE AI expand beyond GBV?

Yes. The architecture can support child protection, refugee support, disaster response, and public health triage.

Vision FAQ

Long-term direction of the SAFE AI initiative.

Why focus on Africa first?

SAFE AI is built from local realities because trauma-informed systems must reflect lived context rather than imported assumptions.

What is the long-term goal?

To build a federated trust layer, ethical governance standards, and cross-border protection coordination rooted in African leadership.