Why Security Teams Gave Up on Insider Risk
Traditional UEBA made insider risk feel expensive, noisy, and brittle. Candor is built on a different belief: context should surface real risk fast.
Insider risk is a problem space that has turned many security teams against it.
The reason is not that the risk is hypothetical. It is that traditional UEBA deployments have asked security teams to spend months on data engineering, tuning, and pipeline maintenance, only to end up with false positives, brittle detections, and very little confidence six months later.
That is unfortunate, because the risk has only become more urgent. Nation-state operators, compromised third parties, contractors with sensitive access, and AI-enabled attackers are already operating inside the systems companies depend on. We have heard the same line from security leaders again and again: an insider threat program is a "luxury" the team does not have bandwidth for.
For CISOs who care about the problem, it should not have to be one.
Over the past year, we embedded as insider risk investigators at medium and large companies. We sat with the teams doing the work, followed the cases, watched the handoffs stall, and saw where traditional tooling made the job harder than it needed to be.
We built Candor on six convictions.
1. Baselining has failed.
ML-based baselines generate noise and very little actionable output. They are usually framed as proactive, but in practice they often become an extension of static detection rules: another way to describe deviation after the fact.
Peer groups are also harder to define than vendors make them sound. Job functions are fluid. Teams reorganize. People move between projects. Contractors, executives, engineers, finance teams, and AI agents do not behave like clean statistical cohorts.
The core issue is deeper: baselines attempt to model humans and agents as statistical processes. In reality, people act on context, circumstance, access, incentives, and intent. Baselining is an ineffective way to model human behavior when the real question is not "is this unusual?" but "does this story make sense?"
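To make the critique concrete, here is a minimal sketch of the kind of per-user statistical baseline the section describes. The function name, threshold, and scenario are illustrative assumptions, not any vendor's actual model: a z-score over daily download counts flags any deviation, with no way to distinguish "joined a new project" from "exfiltration".

```python
# Illustrative sketch (not a real product's model): a per-user z-score
# baseline flags deviation, but deviation is not the same as risk.

from statistics import mean, stdev

def zscore_anomaly(history, today, threshold=3.0):
    """Flag today's count if it deviates sharply from the user's baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) / sigma > threshold

# 30 days of ordinary activity: roughly 10 downloads per day.
baseline = [9, 11, 10, 12, 8, 10, 11, 9, 10, 12] * 3

# The user joins a new project and pulls 40 design docs. The baseline
# fires, but the number alone says nothing about why it happened.
print(zscore_anomaly(baseline, 40))  # True: anomalous, not necessarily risky
```

The alert answers "is this unusual?" with perfect confidence and "does this story make sense?" not at all, which is the gap the rest of this piece is about.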
2. Context is everything.
Traditional security indicators from systems like Okta, CrowdStrike, and Palo Alto, and from DLP tools, are useful. They tell you what happened. But by themselves, they rarely explain why it happened or whether it matters.
The highest-intent signals often come from sources where people express intent in natural language: ServiceNow requests, Slack messages, Glean searches, Jira tickets, AI assistant prompts, support cases, and internal docs.
Those sources have historically been difficult to analyze at scale. But they are exactly where investigators find the context that separates a noisy anomaly from a real concern. Effective insider risk tooling should prioritize high-intent sources, not just the easiest logs to collect.
3. Analysts need to trust alerts.
Security teams are conditioned to assume alerts are false positives. That is especially true for UEBA and human risk detections, because the alerting mechanism is fundamentally flawed.
Insider risk is rarely one isolated anomaly. It is a story.
A file download, a strange login, a Slack message, a sensitive search, and an access request may each look benign in isolation. Together, they can reveal intent, preparation, or compromise. Alerting on individual anomalies misses the point. Analysts need a clear narrative they can trust, not another queue of disconnected events.
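One way to picture the difference between alerting on anomalies and building a narrative is a time-windowed grouping of events by actor. This is a hypothetical sketch under assumed event names and sources, not Candor's schema or detection logic:

```python
# Hypothetical sketch: cluster individually benign events into a
# per-actor, time-windowed story instead of alerting on each one.
# Actors, sources, and thresholds are illustrative assumptions.

from collections import defaultdict
from datetime import datetime, timedelta

events = [
    {"actor": "jdoe", "ts": datetime(2024, 5, 1, 9, 5), "source": "okta", "what": "login from new ASN"},
    {"actor": "jdoe", "ts": datetime(2024, 5, 1, 9, 40), "source": "glean", "what": "searched 'customer list export'"},
    {"actor": "jdoe", "ts": datetime(2024, 5, 1, 10, 2), "source": "drive", "what": "downloaded 300 files"},
    {"actor": "asmith", "ts": datetime(2024, 5, 1, 11, 0), "source": "okta", "what": "login from new ASN"},
]

def build_stories(events, window=timedelta(hours=4), min_events=3):
    """Group events by actor within a time window; surface only actors
    whose combined timeline crosses the narrative threshold."""
    by_actor = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_actor[e["actor"]].append(e)
    stories = {}
    for actor, evs in by_actor.items():
        cluster = [evs[0]]
        for e in evs[1:]:
            if e["ts"] - cluster[-1]["ts"] <= window:
                cluster.append(e)
        if len(cluster) >= min_events:
            stories[actor] = [f"{e['source']}: {e['what']}" for e in cluster]
    return stories

print(build_stories(events))
# jdoe's three events form one story; asmith's lone login does not alert.
```

No single event in jdoe's timeline would justify an alert on its own; the sequence of them within a few hours is what an analyst would recognize as preparation.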
4. Security tools need to be easy to use.
The lack of a dedicated insider risk team should not stop a security organization from caring about insider threats.
A generalist or new hire on the security team should be able to open the product and quickly understand the story: who is involved, what happened, why it matters, what evidence supports the concern, and what to do next.
Organizations should not be punished with an interface that requires its own headcount to interpret. Good tooling should make a difficult investigation clearer, not turn it into another specialized operating burden.
5. Investigators need agency.
When triaging DLP alerts or insider risk escalations, analysts often depend on SOC teams for data and IT teams for approvals. That dependency can turn experienced investigators into ticket filers, and it is one of the biggest reasons cases stall.
We understand that access to certain systems, especially HR systems, can be hard. But many high-intent sources are available through simple, standard integrations. You can enable a lot with limited permissions when the product is designed around appropriate access, auditability, and clear workflows.
With the right audit trail and UX, good tooling puts the maximum appropriate investigative surface directly in the analyst's hands.
6. Value needs to come fast.
Traditional tooling often requires months of tuning before the organization sees any return. Insider risk programs do not have that luxury.
Effective tooling should surface real findings as soon as it has useful data, not after a long baselining period. Data quality matters, but it should not become the bottleneck to value. The goal should be the fewest standardized integrations that work, a fast path to context, and findings the team can act on immediately.
The tooling has to match the threat
Insider risk is much broader than an employee emailing files to a personal account.
It includes compromised contractors, nation-state operators embedded as employees, high-risk third parties, misused privileged access, and a rapidly growing population of AI agents acting on behalf of humans inside the environment.
The attack surface is expanding. Security teams need tooling that can identify and remediate these risks without requiring a large dedicated program, months of tuning, or another brittle data pipeline.
When we built Candor, we put context at the core. We leverage existing and novel data sources to surface the riskiest individuals, explain why they matter, and help teams close the hygiene gaps that made the risk possible in the first place.
We are grateful to our design partners for rolling out the future of insider risk with us.
If this resonates with you, we would love to chat.