Engineering Deep Dive

Why we built each feature the way we did.

Every feature solves a specific problem we saw destroying crypto communities. Here's the thinking behind each one.

Adaptive Agents Live

Not static bots. Learning systems that evolve with your community.
The Problem

Static bots are dead-ends. You upload docs once. Bot answers from those docs forever. New products launch? Bot doesn't know. FAQ changes? Bot gives stale answers. Users ask questions the bot can't answer? They escalate to humans.

The result: founders become human FAQ machines. Every knowledge gap funnels to the same 2-3 people. They become bottlenecks. They burn out.

Our Solution

Adaptive agents that curate communication. The system learns from every interaction:

  • Gap Detection - When the bot can't answer, that gap is logged. Admins see what users are asking that isn't documented.
  • Real-time Updates - Admins add knowledge. The very next user asking that question gets the answer. No redeploy needed.
  • Pattern Recognition - Same question asked 10 times? System surfaces it for FAQ promotion. One-off edge cases stay in RAG.
  • Closed-Loop Learning - User feedback, admin corrections, and engagement signals all feed back into the system.
Why This Changes Everything

The feedback loop that eliminates bottlenecks:

User asks question
         │
         ▼
┌─────────────────────────────────────┐
│ Bot answers OR logs gap             │
│ Either way: human not needed        │
└─────────────────────────────────────┘
         │
         ▼
┌─────────────────────────────────────┐
│ Gap Dashboard                       │
│ Admin sees: "12 users asked about   │
│ OREBIT fees, no answer exists"      │
└─────────────────────────────────────┘
         │
         ▼
┌─────────────────────────────────────┐
│ Admin adds knowledge (30 seconds)   │
│ "OREBIT fees go to OREBIT stakers"  │
└─────────────────────────────────────┘
         │
         ▼
✅ Next user gets instant answer
   No code change. No redeploy.
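The loop above can be sketched in a few lines. This is an illustrative toy (in-memory dicts and naive question normalization), not the production store:

```python
class KnowledgeBase:
    """Toy sketch of the gap-logging loop."""

    def __init__(self):
        self.answers = {}  # normalized question -> answer text
        self.gaps = {}     # normalized question -> times asked with no answer

    @staticmethod
    def _key(question):
        return question.lower().strip("?! ")

    def ask(self, question):
        key = self._key(question)
        if key in self.answers:
            return self.answers[key]
        # Unknown question: log the gap instead of escalating to a human.
        self.gaps[key] = self.gaps.get(key, 0) + 1
        return None

    def add_knowledge(self, question, answer):
        # Live from the very next question on: no code change, no redeploy.
        self.answers[self._key(question)] = answer
        self.gaps.pop(self._key(question), None)
```

The admin's "30 seconds" is one `add_knowledge` call; every subsequent `ask` for the same question is answered without any human in the loop.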

Static vs Adaptive:

Dimension           Static Bot           Adaptive Agent
Knowledge updates   Redeploy required    Real-time, no code
Unknown questions   Escalate to human    Log gap, dead-end cleanly
Learning            Never improves       Every interaction teaches
Founder load        Constant FAQ duty    Add knowledge once, done

Real example from today: Users asked about Foundry token fee mechanics. Bot didn't know. Gap logged. Admin added the answer. Now anyone asking "Do OREBIT fees go to VISTA stakers?" gets: "No. Each Foundry token's fees go only to that token's stakers." Question answered. No human escalation. Loop closed.

This is the core philosophy behind every feature below. We don't build static tools. We build systems that get smarter as your community uses them.

Ask & Earn Live

AI-scored questions. Curiosity beats farming.
The Problem

Traditional point systems reward quantity over quality. Users spam "gm" and copy-paste questions to farm points. Real contributors get drowned out by noise.

Manual moderation doesn't scale. Moderators burn out reviewing the same low-effort content. Communities become ghost towns of farmers waiting for airdrops.

Our Solution

AI scores every question on three dimensions:

  • Depth - Does it require thought to answer? Single-word answers = low score.
  • Novelty - Has this been asked before? Semantic similarity check against history.
  • Cooperation - Does it help others learn? Questions that benefit the community score higher.

Result: 1-10 points per question. Farmers get 1-2 pts for spam. Curious users get 7-10 pts for genuine questions.

Why We Engineered It This Way

Semantic similarity, not keyword matching. Farmers would game keyword rules instantly. By embedding questions and comparing vectors, we detect rephrased duplicates that humans would miss.
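As a toy illustration of the vector idea, here is a bag-of-words cosine check. The real system uses learned embeddings; `vectorize`, the threshold, and the example questions are all illustrative stand-ins:

```python
import math
from collections import Counter

def vectorize(text):
    # Stand-in for a learned embedding: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def is_duplicate(question, history, threshold=0.7):
    # A rephrased duplicate scores high even with different keywords.
    q = vectorize(question)
    return any(cosine(q, vectorize(past)) >= threshold for past in history)
```

With embeddings in place of `vectorize`, "wen staking" and "when does staking go live" land close together even though they share almost no keywords, which is exactly what keyword rules miss.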

Cooperation metrics. We track if a question leads to follow-up discussion. Questions that spark conversation get retroactive score boosts. This incentivizes questions that actually help the community.

# Simplified scoring logic
score = 0
score += depth_score(question)        # 0-4 pts
score += novelty_score(question)      # 0-3 pts (penalize duplicates)
score += cooperation_score(thread)    # 0-3 pts (retroactive)

# Anti-gaming: score decays if user's questions never get answered
# This catches users asking fake "deep" questions to farm
if user.unanswered_ratio > 0.8:
    score *= 0.5
Approach              Traditional     Bonzi
Scoring method        1 msg = 1 pt    AI-scored 1-10 pts
Duplicate detection   None            Semantic similarity
Gaming resistance     Easily farmed   Cooperation metrics

Sybil Detection Live

Flag farmers before the snapshot.
The Problem

Airdrops get destroyed by sybils. One person runs 50 wallets, farms 50x the allocation, dumps on real users. Projects lose credibility. Real contributors get diluted.

Current solutions: manual wallet review (doesn't scale), on-chain analysis (catches some, misses many), trust scores (easily gamed with time).

Our Solution

Behavioral analysis across multiple dimensions:

  • AI Fingerprinting - Detects bot-generated content patterns. ChatGPT summaries, template responses, unnaturally perfect grammar.
  • Temporal Clustering - Accounts that join within seconds, post at same times, exhibit coordinated behavior.
  • Cross-Tenant Intel - Patterns seen in one community are shared (anonymized) across the network. Farmer caught once = flagged everywhere.
  • Extractive Behavior - Accounts that only take (claim rewards) but never give (help others). Cooperation ratio near zero.
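The temporal-clustering signal can be sketched as a simple sliding window over join times. The `joined_at` field, the 30-second window, and the minimum cluster size are illustrative assumptions:

```python
from datetime import datetime, timedelta

def clustered_joins(accounts, window=timedelta(seconds=30), min_size=5):
    """Group accounts whose join times fall within `window` of the previous join.

    Sketch only: the real system combines this with posting-time and
    behavioral correlation, not join times alone.
    """
    if not accounts:
        return []
    ordered = sorted(accounts, key=lambda a: a["joined_at"])
    clusters, current = [], [ordered[0]]
    for acct in ordered[1:]:
        if acct["joined_at"] - current[-1]["joined_at"] <= window:
            current.append(acct)  # still inside the suspicious window
        else:
            if len(current) >= min_size:
                clusters.append(current)
            current = [acct]      # window broken, start a new group
    if len(current) >= min_size:
        clusters.append(current)
    return clusters
```

Fifty wallets created by one script tend to join within seconds of each other; organic joins spread out over hours and days, so they never form a cluster.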
Why We Engineered It This Way

Behavioral over identity. Wallet addresses are free. New accounts are free. The only thing farmers can't fake is genuine engagement over time. We measure cooperation, not credentials.

Cross-tenant sharing. A farmer might be patient in one community but exposed in another. By sharing behavioral patterns (not personal data), we catch sophisticated actors faster.

Pre-snapshot flagging. Most sybil detection happens after the airdrop, when it's too late. We flag suspicious accounts in real-time so projects can act before the snapshot.

User Activity
         │
         ▼
┌─────────────────────────────────────┐
│ AI Fingerprint Check                │
│ Is content bot-generated?           │
└─────────────────────────────────────┘
         │ Pass
         ▼
┌─────────────────────────────────────┐
│ Temporal Analysis                   │
│ Coordinated with other accounts?    │
└─────────────────────────────────────┘
         │ Pass
         ▼
┌─────────────────────────────────────┐
│ Cooperation Score                   │
│ Helps others or just extracts?      │
└─────────────────────────────────────┘
         │ Pass
         ▼
✅ Legitimate User

Auto-Moderation Live

Questions answered. Spam blocked. You're off the hook.
The Problem

Community moderators are human FAQ machines. The same questions get asked daily. Mods burn out. Response times suffer. Unanswered questions pile up.

When mods go offline, spam floods in. When mods are online, they're answering "wen token" for the 500th time instead of building community.

The escalation trap: Bots say "I don't know, ask a human." Users DM founders. Founders become the bottleneck.

Our Solution

AI that answers or dead-ends. Never escalates for answerable questions.

  • Authoritative FAQ - Known questions get definitive answers, not hedged "I'm not sure" responses.
  • RAG Knowledge Base - Upload docs, bot answers from them with citations.
  • Spam Blocking - AI fingerprint detection blocks bot spam before it posts.
  • FUD Rebuttals - Detects complaint patterns, responds with facts and links.
  • Silence Detection - Flags when chat goes dead, surfaces engagement metrics.
Why We Engineered It This Way

Definitive answers kill question loops. When a bot says "I don't have info on that," users escalate to humans. When a bot says "No. Here's why: [facts]" - the question dies. No escalation needed.

Pattern matching before LLM. Known questions bypass the LLM entirely. This is faster, cheaper, and more reliable. LLM is fallback for novel questions.

No "check docs" cop-outs. If the answer is in the docs, the bot should give the answer, not redirect. Users don't want to read docs. They want answers.

User Question
         │
         ▼
┌─────────────────────────────────────┐
│ FAQ Pattern Match                   │
│ Known question? → Definitive answer │
└─────────────────────────────────────┘
         │ No match
         ▼
┌─────────────────────────────────────┐
│ RAG Retrieval                       │
│ In knowledge base? → Cite source    │
└─────────────────────────────────────┘
         │ No match
         ▼
┌─────────────────────────────────────┐
│ LLM Generation                      │
│ Generate with anti-hallucination    │
└─────────────────────────────────────┘
         │ Low confidence
         ▼
"Not documented. [link to official source]"
         │
         ▼
STOP (no escalation to humans)
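The cascade can be sketched as a single function. Here `retrieve` and `llm_answer` are hypothetical stand-ins for the real retrieval and generation calls:

```python
def answer(question, faq, retrieve, llm_answer, min_confidence=0.7):
    """Tiered pipeline sketch: pattern match -> RAG -> LLM -> dead-end."""
    key = question.lower().strip("?! ")
    if key in faq:
        return faq[key]  # 1. Known question: instant, consistent, zero LLM cost.
    doc = retrieve(question)
    if doc is not None:
        # 2. Knowledge base hit: give the answer with its citation.
        return f"{doc['answer']} (source: {doc['source']})"
    text, confidence = llm_answer(question)
    if confidence >= min_confidence:
        return text      # 3. LLM fallback, only for genuinely novel questions.
    # 4. Dead-end cleanly: hard stop, no escalation to humans.
    return "Not documented. [link to official source]"
```

Note the ordering mirrors cost: the cheapest, most reliable tier runs first, and the LLM only sees questions nothing else could handle.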

The key insight: Soft answers ("I'm not sure") create spam. Hard answers ("No, and here's why") end conversations. We optimize for ending conversations correctly, not for appearing helpful while being useless.

Scenario           Traditional Bot                   Bonzi
Unknown question   "Ask a moderator"                 "Not documented. [link]"
FAQ question       LLM generation (slow, variable)   Pattern match (instant, consistent)
Spam/FUD           Manual deletion                   Auto-blocked + rebutted

Raid & Earn Beta

Auto-verified engagement. No manual review hell.
The Problem

Raid campaigns are chaos. Users submit screenshots of their tweets. Mods manually verify each one: Does the tweet exist? Is it original? Is the account real?

Farmers submit fake screenshots, deleted tweets, copy-pasted content. Mods spend hours on verification instead of strategy.

Our Solution
  • Tweet Existence Check - API verification that the tweet URL actually exists and hasn't been deleted.
  • Originality Analysis - Compare tweet content against known templates and other submissions. Flag copy-paste.
  • Account Quality - Check account age, follower count, engagement patterns. Bot accounts get flagged.
  • Automatic Scoring - Good raids get approved instantly. Suspicious ones queued for review.
Why We Engineered It This Way

Verify the source, not the screenshot. Screenshots can be faked. Tweet URLs can't. We always verify against the actual tweet.

Originality over quantity. 100 copy-paste raids are worth less than 10 original ones. We score for creativity and genuine engagement.

Tiered verification. Trusted users get fast-tracked. New users get extra scrutiny. This balances speed with security.
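A minimal sketch of the tiered routing; the thresholds, field names, and outcome labels are illustrative assumptions, not the production rules:

```python
def route_submission(user, tweet_exists, originality_score):
    """Route a raid submission based on source verification and trust tier."""
    if not tweet_exists:
        return "rejected"       # Deleted or fabricated URL: always verify the source.
    if originality_score < 0.3:
        return "flagged"        # Likely copy-paste from a known template.
    if user["trust_score"] >= 0.8:
        return "auto-approved"  # Trusted users get fast-tracked.
    return "manual-review"      # New users get extra scrutiny.
```

The result is that mods only ever see the ambiguous middle: clean raids from trusted users never hit the queue, and obvious fakes never hit the queue either.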

Thanks System Live

Peer recognition that counts.
The Problem

Helpful community members go unrecognized. Someone answers questions all day, gets nothing. Someone posts one meme, goes viral. Recognition is random.

Our Solution

React to any helpful post with a designated emoji. The author gets +1 point. Simple, visible, trackable.

  • Peer-to-peer - Recognition comes from community, not mods.
  • On-chain ready - Points tracked in database, exportable for airdrops.
  • Anti-gaming - Can't thank yourself. Rate-limited per user. Patterns detected.
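A minimal sketch of those anti-gaming rules; the daily limit and in-memory ledger are illustrative:

```python
from collections import defaultdict

class ThanksLedger:
    """Toy sketch of the thanks rules: no self-thanks, capped per giver."""

    def __init__(self, daily_limit=10):
        self.daily_limit = daily_limit
        self.points = defaultdict(int)       # receiver -> total points
        self.given_today = defaultdict(int)  # giver -> thanks given today

    def thank(self, giver, receiver):
        if giver == receiver:
            return False  # can't thank yourself
        if self.given_today[giver] >= self.daily_limit:
            return False  # rate-limited
        self.given_today[giver] += 1
        self.points[receiver] += 1
        return True
```

Because points live in a database keyed by user, exporting a snapshot for an airdrop is a single query over `points`.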

Ready to automate your community?

Self-host free, or let us run it for you.

Try the Demo