Tech Souls, Connected.

The ‘Signal for AI’: CONFSEC Promises Encrypted Prompts with Zero Leakage

Dubbed the “Signal for AI,” CONFSEC promises encrypted AI access without compromising sensitive data


A New Layer of Trust for AI

As AI adoption explodes across sectors, a growing concern looms: data privacy. From governments to banks to startups, many organizations are wary of deploying AI tools that might store or train on sensitive inputs—even when those tools come from trusted names like OpenAI or Google.

  • Sectors like healthcare, finance, and defense have been particularly slow to adopt AI due to regulatory constraints and risk exposure.
  • The blurred lines around how and where data is stored or used by large AI vendors have become a dealbreaker for high-compliance industries.

Enter Confident Security: AI Without Compromise

Confident Security, a San Francisco-based startup, has just launched from stealth with a mission to eliminate the privacy trade-off inherent in today’s AI services.

  • Its product, CONFSEC, acts as a privacy layer around foundational AI models, using end-to-end encryption to ensure no data or metadata is ever visible—not to the provider, not to the cloud, not to hackers.
  • The system makes it technically impossible for AI vendors to store, train on, or leak input data, no matter where it’s processed.

A “Signal for AI” Model

Founder and CEO Jonathan Mortensen describes CONFSEC as doing for AI what Signal, the encrypted messaging app, did for private communication.

  • “The second you give up your data to someone else, you’ve essentially reduced your privacy,” said Mortensen.
  • CONFSEC removes that trade-off, allowing companies to harness AI securely without fearing unintended data exposure or exploitation.

How CONFSEC Works

CONFSEC is modeled on Apple’s Private Cloud Compute (PCC) architecture and combines multiple privacy-by-design mechanisms:

  • Anonymized Routing: Requests are encrypted on the client and routed through intermediaries such as Cloudflare or Fastly, so the servers processing them never see the original inputs or who sent them.
  • Conditional Decryption: Data can only be decrypted under strict, auditable rules—such as no logging, no training, and no third-party access.
  • Transparency by Design: AI inference code is open-source and externally auditable, allowing independent verification of privacy claims.
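The "conditional decryption" idea above can be illustrated with a toy sketch. Confident Security has not published CONFSEC's internals, so everything below is an assumption for illustration: a symmetric key is derived by binding a shared secret to a canonical encoding of the policy ("no logging, no training, no third-party access"), so a node attesting to a different policy derives a different key and simply cannot decrypt. The stream cipher here is deliberately simplistic; a real system would use an authenticated cipher such as AES-GCM inside attested hardware.

```python
import hashlib
import hmac
import json
import os

def policy_key(shared_secret: bytes, policy: dict) -> bytes:
    # Bind the key to a canonical encoding of the policy, so a node
    # running under a different policy derives a different key.
    canon = json.dumps(policy, sort_keys=True).encode()
    return hmac.new(shared_secret, canon, hashlib.sha256).digest()

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream from chained SHA-256 digests (illustrative only).
    stream = hashlib.sha256(key + nonce).digest()
    while len(stream) < length:
        stream += hashlib.sha256(key + nonce + stream[-32:]).digest()
    return stream[:length]

def encrypt(key: bytes, plaintext: bytes) -> dict:
    nonce = os.urandom(16)
    ct = bytes(p ^ s for p, s in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return {"nonce": nonce, "ct": ct, "tag": tag}

def decrypt(key: bytes, msg: dict) -> bytes:
    expected = hmac.new(key, msg["nonce"] + msg["ct"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, msg["tag"]):
        raise PermissionError("policy mismatch: decryption refused")
    return bytes(c ^ s for c, s in zip(msg["ct"], _keystream(key, msg["nonce"], len(msg["ct"]))))

secret = os.urandom(32)
client_policy = {"logging": False, "training": False, "third_party": False}
msg = encrypt(policy_key(secret, client_policy), b"confidential prompt")

# A compliant node (attesting the same policy) decrypts successfully:
assert decrypt(policy_key(secret, client_policy), msg) == b"confidential prompt"

# A node whose policy permits logging derives a different key and is refused:
bad_policy = {"logging": True, "training": False, "third_party": False}
try:
    decrypt(policy_key(secret, bad_policy), msg)
except PermissionError as e:
    print(e)
```

The point of the sketch is the design choice, not the cipher: by making the decryption key a function of the policy itself, "the rules" stop being a promise in a terms-of-service document and become a precondition for reading the data at all.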

Backed by Top Investors

Confident Security secured $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx.

  • Decibel partner Jess Leão said the company is “ahead of the curve,” offering the infrastructure-level trust needed for enterprises to move forward with AI.
  • CONFSEC has already been externally audited and is production-ready, with talks underway with banks, browsers, and search engines for integration.

Unlocking the Enterprise AI Market

Even AI vendors stand to benefit from integrating CONFSEC.

  • By embedding privacy guarantees, they can unlock enterprise clients who otherwise refuse to use AI tools due to compliance or IP concerns.
  • CONFSEC could also be adopted by emerging AI-first browsers like Perplexity’s Comet, giving users assurance their queries aren’t stored or used to “train AI to do your job,” as Mortensen puts it.

Looking Ahead

Confident Security is less than a year old, but it’s entering the market with a fully tested, auditable, and scalable solution.

  • Its privacy-first architecture may prove pivotal in enabling AI adoption in industries where data sensitivity is non-negotiable.
  • As Mortensen sums up: “You bring the AI, we bring the privacy.”