The explainer video is here:
I would appreciate any feedback. Thank you!
The idea: AI agents give generic startup advice. This gives them access to what founders actually did, with verbatim quotes and timestamp links to the source.
Stack: Deepgram + Claude + SQLite + FastAPI. Total cost under €50.
Calens fills that gap: GitHub-style heatmap showing 52 weeks of calendar activity, weekly/monthly time breakdowns by calendar or tag, a progress chart of planned vs completed time, and a cleaner in-page event editor. Everything runs on-device — no servers, no tracking, no data leaving the browser.
Early-stage, looking for people who already log their life in Google Calendar and want better data on their habits. Happy to give free lifetime access in exchange for honest feedback.
I've been scanning all 14,704 skills in the registry and running AI deep audits on ~3,800 so far. The headline finding: surface heuristics (pattern matching, dependency checks, metadata) flag about 6.6% as malicious. AI deep audit of the same skills finds 16.4%. Surface scanning misses roughly 60% of the actual risk.
The reason is that these skills aren't traditional packages — they're markdown instruction files that tell an AI agent what to do, with full shell, file system, and network access. The attacks are in natural language: prompt injection, social engineering targeting the AI itself, instructions to generate and execute code at runtime. There's no malicious code to detect because the payload doesn't exist until the AI writes it during a conversation.
Some of the attack patterns I've documented: one actor published 30 skills under the name "x-trends" across multiple accounts (28/30 confirmed malicious). Another cluster impersonates ClawHub's own CLI with base64 curl|bash payloads. One skill has a "Talking to Your Human" section with a pre-written pitch for the AI to ask the user's permission to mine Monero.
The most counterintuitive case: lekt9/foundry contains zero malicious code. It instructs your AI agent to generate and execute code as part of its normal workflow. Static analysis finds nothing because the dangerous code doesn't exist until the AI writes it during a live conversation. This attack class requires AI to detect AI.
Free to check any skill. All AI audit reports are public.