Every feature solves a specific problem we saw destroying crypto communities. Here's the thinking behind each one.
Static bots are dead-ends. You upload docs once. Bot answers from those docs forever. New products launch? Bot doesn't know. FAQ changes? Bot gives stale answers. Users ask questions the bot can't answer? They escalate to humans.
The result: founders become human FAQ machines. Every knowledge gap funnels to the same 2-3 people. They become bottlenecks. They burn out.
Adaptive agents that curate communication. The system learns from every interaction: unknown questions are logged as knowledge gaps, an admin fills each gap once, and the answer is served automatically from then on. That feedback loop is what eliminates the bottleneck.
Static vs Adaptive:
| Dimension | Static Bot | Adaptive Agent |
|---|---|---|
| Knowledge updates | Redeploy required | Real-time, no code |
| Unknown questions | Escalate to human | Log gap, dead-end cleanly |
| Learning | Never improves | Every interaction teaches |
| Founder load | Constant FAQ duty | Add knowledge once, done |
Real example from today: Users asked about Foundry token fee mechanics. Bot didn't know. Gap logged. Admin added the answer. Now anyone asking "Do OREBIT fees go to VISTA stakers?" gets: "No. Each Foundry token's fees go only to that token's stakers." Question answered. No human escalation. Loop closed.
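A minimal sketch of that loop in Python. Names like `KnowledgeBase` and `add_entry` are illustrative, not Bonzi's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    entries: dict = field(default_factory=dict)   # question pattern -> answer
    gaps: list = field(default_factory=list)      # unanswered questions, surfaced to admins

    def answer(self, question: str) -> str:
        # Serve a stored answer if one matches; otherwise log the gap
        # and dead-end cleanly instead of escalating to a human.
        for pattern, reply in self.entries.items():
            if pattern.lower() in question.lower():
                return reply
        self.gaps.append(question)
        return "Not documented yet. Logged for the team."

    def add_entry(self, pattern: str, reply: str) -> None:
        # Admin fills the gap once; every future asker gets the answer.
        self.entries[pattern] = reply


kb = KnowledgeBase()
print(kb.answer("Do OREBIT fees go to VISTA stakers?"))   # gap logged, clean dead-end
kb.add_entry("OREBIT fees", "No. Each Foundry token's fees go only to that token's stakers.")
print(kb.answer("Do OREBIT fees go to VISTA stakers?"))   # answered, no human escalation
```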
This is the core philosophy behind every feature below. We don't build static tools. We build systems that get smarter as your community uses them.
Traditional point systems reward quantity over quality. Users spam "gm" and copy-paste questions to farm points. Real contributors get drowned out by noise.
Manual moderation doesn't scale. Moderators burn out reviewing the same low-effort content. Communities become ghost towns of farmers waiting for airdrops.
AI scores every question on three dimensions, producing 1-10 points per question. Farmers get 1-2 pts for spam. Curious users get 7-10 pts for genuine questions.
Semantic similarity, not keyword matching. Farmers would game keyword rules instantly. By embedding questions and comparing vectors, we detect rephrased duplicates that humans would miss.
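One way to implement this, using the open-source sentence-transformers library. The model choice and the 0.85 threshold are assumptions for illustration, not production values:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

seen = [
    "When does staking go live?",
    "Do OREBIT fees go to VISTA stakers?",
]
incoming = "wen staking live??"

seen_vecs = model.encode(seen, convert_to_tensor=True)
new_vec = model.encode(incoming, convert_to_tensor=True)

# Highest cosine similarity against everything already asked in this community.
scores = util.cos_sim(new_vec, seen_vecs)[0]
best = float(scores.max())

if best > 0.85:   # assumed threshold: near-duplicate, score it like spam
    print(f"Duplicate of an earlier question (similarity {best:.2f}): 1-2 pts")
else:
    print("Novel question: send to full 1-10 scoring")
```

Keyword rules would miss "wen staking live??" as a rephrase of "When does staking go live?"; vector similarity catches it.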
Cooperation metrics. We track if a question leads to follow-up discussion. Questions that spark conversation get retroactive score boosts. This incentivizes questions that actually help the community.
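A sketch of the retroactive boost, assuming an in-memory score store. The three-replier threshold and the +2 bonus are illustrative assumptions:

```python
from collections import defaultdict

scores = {}                   # question_id -> current score (1-10)
repliers = defaultdict(set)   # question_id -> distinct users who replied
boosted = set()               # questions already credited for sparking discussion

def record_reply(question_id, replier):
    repliers[question_id].add(replier)
    # Boost once, when the question first reaches three distinct repliers.
    if len(repliers[question_id]) >= 3 and question_id not in boosted:
        boosted.add(question_id)
        scores[question_id] = min(10, scores.get(question_id, 0) + 2)

scores[42] = 6                # initial AI score for question 42
for user in ("alice", "bob", "carol"):
    record_reply(42, user)
print(scores[42])             # 8: retroactive boost for sparking discussion
```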
| Dimension | Traditional | Bonzi |
|---|---|---|
| Scoring method | 1 msg = 1 pt | AI-scored 1-10 pts |
| Duplicate detection | None | Semantic similarity |
| Gaming resistance | Easily farmed | Cooperation metrics |
Airdrops get destroyed by sybils. One person runs 50 wallets, farms 50x the allocation, dumps on real users. Projects lose credibility. Real contributors get diluted.
Current solutions: manual wallet review (doesn't scale), on-chain analysis (catches some, misses many), trust scores (easily gamed with time).
Behavioral analysis across multiple dimensions:
Behavioral over identity. Wallet addresses are free. New accounts are free. The only thing farmers can't fake is genuine engagement over time. We measure cooperation, not credentials.
Cross-tenant sharing. A farmer might be patient in one community but exposed in another. By sharing behavioral patterns (not personal data), we catch sophisticated actors faster.
Pre-snapshot flagging. Most sybil detection happens after the airdrop, when it's too late. We flag suspicious accounts in real-time so projects can act before the snapshot.
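A simplified sketch of behavioral flagging. The signals, weights, and cutoff are illustrative assumptions, not the production model:

```python
from dataclasses import dataclass

@dataclass
class Behavior:
    account_age_days: int
    distinct_days_active: int       # engagement spread over time, hard to fake
    duplicate_message_ratio: float  # share of messages that are near-duplicates
    replies_received: int           # did anyone actually engage back?

def suspicion(b: Behavior) -> float:
    # Combine a few behavioral signals into a 0-1 suspicion score.
    score = 0.0
    if b.account_age_days < 14:
        score += 0.3
    if b.distinct_days_active < 3:
        score += 0.3
    score += 0.3 * b.duplicate_message_ratio
    if b.replies_received == 0:
        score += 0.1
    return min(score, 1.0)

candidates = {
    "0xfarmer": Behavior(5, 1, 0.9, 0),
    "0xregular": Behavior(200, 45, 0.1, 30),
}
flagged = [w for w, b in candidates.items() if suspicion(b) >= 0.7]
print(flagged)   # ['0xfarmer']: reviewed before the snapshot, not after
```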
Community moderators are human FAQ machines. The same questions get asked daily. Mods burn out. Response times suffer. Unanswered questions pile up.
When mods go offline, spam floods in. When mods are online, they're answering "wen token" for the 500th time instead of building community.
The escalation trap: Bots say "I don't know, ask a human." Users DM founders. Founders become the bottleneck.
AI that either answers or dead-ends cleanly. It never escalates a question to a human.
Definitive answers kill question loops. When a bot says "I don't have info on that," users escalate to humans. When a bot says "No. Here's why: [facts]," the question dies. No escalation needed.
Pattern matching before LLM. Known questions bypass the LLM entirely. This is faster, cheaper, and more reliable. LLM is fallback for novel questions.
No "check docs" cop-outs. If the answer is in the docs, the bot should give the answer, not redirect. Users don't want to read docs. They want answers.
The key insight: Soft answers ("I'm not sure") create spam. Hard answers ("No, and here's why") end conversations. We optimize for ending conversations correctly, not for appearing helpful while being useless.
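A sketch of that routing order. The regex patterns and the `call_llm` stub are illustrative assumptions:

```python
import re

FAQ_PATTERNS = [
    (re.compile(r"\bwen\b.*\btoken\b", re.I),
     "No date is announced. Announcements come from the official channel only."),
    (re.compile(r"\bfees?\b.*\bstakers?\b", re.I),
     "No. Each Foundry token's fees go only to that token's stakers."),
]

def call_llm(question: str) -> str:
    # Placeholder for the LLM fallback: grounded on curated knowledge,
    # and it dead-ends cleanly rather than saying "ask a moderator".
    return "Not documented. See the docs link pinned in this channel."

def route(question: str) -> str:
    for pattern, answer in FAQ_PATTERNS:
        if pattern.search(question):
            return answer          # instant, consistent, no LLM cost
    return call_llm(question)      # fallback for genuinely novel questions

print(route("wen token ser"))
print(route("How does the oracle price feed update?"))
```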
| Scenario | Traditional Bot | Bonzi |
|---|---|---|
| Unknown question | "Ask a moderator" | "Not documented. [link]" |
| FAQ question | LLM generation (slow, variable) | Pattern match (instant, consistent) |
| Spam/FUD | Manual deletion | Auto-blocked + rebutted |
Raid campaigns are chaos. Users submit screenshots of their tweets. Mods manually verify each one: Does the tweet exist? Is it original? Is the account real?
Farmers submit fake screenshots, deleted tweets, copy-pasted content. Mods spend hours on verification instead of strategy.
Verify the source, not the screenshot. Screenshots can be faked. Tweet URLs can't. We always verify against the actual tweet.
Originality over quantity. 100 copy-paste raids are worth less than 10 original ones. We score for creativity and genuine engagement.
Tiered verification. Trusted users get fast-tracked. New users get extra scrutiny. This balances speed with security.
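A sketch of URL-based verification against the X API v2 tweet lookup endpoint. The response handling is simplified, and the `X_BEARER_TOKEN` variable is an assumption:

```python
import os
import re
import requests

BEARER_TOKEN = os.environ["X_BEARER_TOKEN"]   # assumed to be configured elsewhere

def tweet_id_from_url(url: str):
    # Extract the numeric status ID from a twitter.com or x.com URL.
    m = re.search(r"(?:twitter|x)\.com/\w+/status/(\d+)", url)
    return m.group(1) if m else None

def verify_submission(url: str):
    tweet_id = tweet_id_from_url(url)
    if tweet_id is None:
        return None                           # not a tweet URL at all
    resp = requests.get(
        f"https://api.twitter.com/2/tweets/{tweet_id}",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"tweet.fields": "author_id,created_at"},
        timeout=10,
    )
    body = resp.json()
    if "data" not in body:
        return None                           # deleted, private, or fabricated
    return body["data"]                       # real tweet: pass to originality scoring

tweet = verify_submission("https://x.com/someuser/status/1234567890123456789")
print("verified" if tweet else "rejected")
```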
Helpful community members go unrecognized. Someone answers questions all day, gets nothing. Someone posts one meme, goes viral. Recognition is random.
React to any helpful post with a designated emoji. The author gets +1 point. Simple, visible, trackable.
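A platform-agnostic sketch of the mechanic. The emoji choice and in-memory store are illustrative assumptions:

```python
from collections import Counter

RECOGNITION_EMOJI = "⭐"
points = Counter()     # author -> recognition points
credited = set()       # (message_id, reactor): one credit per reactor per message

def on_reaction(message_id, author, reactor, emoji):
    # Ignore other emojis and self-recognition; credit each reactor once.
    if emoji != RECOGNITION_EMOJI or reactor == author:
        return
    if (message_id, reactor) in credited:
        return
    credited.add((message_id, reactor))
    points[author] += 1   # simple, visible, trackable

on_reaction(101, "helpful_user", "grateful_user", "⭐")
print(points["helpful_user"])   # 1
```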
Self-host it for free, or let us run it for you.
Try the Demo