Marketplace
Discover ready-to-run AI agents and reusable templates.
Data Analysis Agent Template
The Data Analysis Agent Template turns any CSV file into an interactive AI-powered dashboard that runs entirely in your browser. Powered by Pyodide (Python compiled to WebAssembly), it loads pandas and numpy locally; no data ever leaves your machine.

Upload your CSV by dragging and dropping it onto the interface. The agent parses it instantly and displays a preview table of the first 10 rows. From there, ask questions in plain English: "show me a summary of the data", "plot a histogram of the sales column", "what are the correlations between numeric variables?", or "show value counts for the region column."

Under the hood, the agent uses GPT-4o, Claude 3.5 Sonnet, or a fully local WebLLM model (your choice) to interpret your query and call the correct Python analysis function. It can generate pandas .describe() statistics, detect missing values per column, compute a full Pearson correlation matrix with highlighted strong correlations, and create matplotlib bar charts or histograms rendered as inline images, all without sending your data to any external server.

This template is ideal for data analysts, researchers, and developers who need quick exploratory data analysis (EDA) on sensitive datasets, in offline environments, or anywhere a traditional server-side tool is not appropriate. The exported HTML file is fully self-contained: open it in any modern browser (Chrome 113+, Edge 113+) with WebGPU support and it works immediately.
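The analysis calls described above can be sketched in plain pandas and numpy. This is a hedged illustration of the technique, not the template's actual code; the function names and the 0.7 "strong correlation" threshold are invented for this example:

```python
import numpy as np
import pandas as pd

def missing_per_column(df: pd.DataFrame) -> pd.Series:
    """Count missing values in each column."""
    return df.isna().sum()

def strong_correlations(df: pd.DataFrame, threshold: float = 0.7) -> list:
    """Full Pearson correlation matrix over numeric columns,
    returning only the column pairs whose |r| meets the threshold
    (the pairs a UI would highlight)."""
    corr = df.select_dtypes(include=np.number).corr(method="pearson")
    cols = list(corr.columns)
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            r = float(corr.iloc[i, j])
            if abs(r) >= threshold:
                pairs.append((cols[i], cols[j], round(r, 3)))
    return pairs

# Example: a perfectly correlated pair is flagged, a weak one is not.
df = pd.DataFrame({"sales": [1, 2, 3, 4],
                   "revenue": [2, 4, 6, 8],
                   "flag": [1, 0, 1, 0]})
```

Everything here is ordinary pandas, which is exactly what makes it feasible to run under Pyodide in the browser.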
CSV Data Cleaner
The CSV Data Cleaner is a fully browser-based AI agent that turns messy, inconsistent spreadsheet data into clean, analysis-ready CSV files, without uploading your data to any external server. Upload your CSV by clicking or dragging and dropping, and the agent immediately runs a Python-based quality audit using pandas-style logic via Pyodide (Python compiled to WebAssembly).

The automated issue detection scans every column for four categories of data quality problems: missing values (flagged HIGH if more than 20% of rows are empty, MEDIUM otherwise), duplicate rows (exact matches across all fields), inconsistent date formats (e.g. a column mixing YYYY-MM-DD, MM/DD/YYYY, and DD-MM-YYYY), and whitespace issues (leading or trailing spaces that break joins and lookups). Every issue is displayed with a severity badge before you do anything.

From there, you have three ways to clean. Quick actions (Remove Duplicates, Fill Missing Values, Standardize Formats, Clean Whitespace) run with one click and log exactly what changed. AI Suggestions asks the language model to inspect your column names, data types, and detected issues and return 5–8 prioritised cleaning recommendations (e.g. "normalise phone numbers in the contact column", "convert revenue to float and fill nulls with 0"), each with an Apply button. Custom tasks let you describe any transformation in plain English ("split the full_name column into first and last", "replace all blank cells in status with 'pending'", "remove rows where age is over 120") and the agent generates and executes the Python code in-browser.

All cleaning steps are logged with timestamps in the activity panel. When finished, download the result as a clean CSV file. Compatible with GPT-4o, Claude 3.5 Sonnet, or a fully local WebLLM model. No data ever leaves your machine.
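The audit pass can be sketched in plain pandas. A minimal illustration, not the template's actual code: the HIGH/MEDIUM rule for missing values comes from the description above, while the severities assigned here to whitespace and duplicate issues (and the `audit` function name) are assumptions for this sketch:

```python
import pandas as pd

def audit(df: pd.DataFrame) -> list[dict]:
    """Scan for missing values, whitespace problems, and duplicate rows.
    Missing values follow the documented rule: HIGH when more than 20%
    of rows are empty, MEDIUM otherwise. Other severities are assumed."""
    issues = []
    n = len(df)
    for col in df.columns:
        missing = int(df[col].isna().sum())
        if missing:
            severity = "HIGH" if missing / n > 0.20 else "MEDIUM"
            issues.append({"column": col, "issue": "missing values",
                           "severity": severity})
        if df[col].dtype == object:
            vals = df[col].dropna().astype(str)
            # Leading/trailing whitespace breaks joins and lookups.
            if (vals != vals.str.strip()).any():
                issues.append({"column": col, "issue": "whitespace",
                               "severity": "MEDIUM"})
    if df.duplicated().any():  # exact matches across all fields
        issues.append({"column": None, "issue": "duplicate rows",
                       "severity": "MEDIUM"})
    return issues
```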
Resume Optimizer
The Resume Optimizer is a fully browser-based AI agent that rewrites your resume to beat Applicant Tracking Systems (ATS) and land more interviews. Upload your existing resume as a PDF, DOCX, or plain text file (the agent extracts the text using pypdf and python-docx running via Pyodide, Python compiled to WebAssembly), then paste the job description you are targeting.

The agent analyzes both documents locally and delivers four results simultaneously. First, an ATS compatibility score (0–100) with a breakdown across keyword coverage, formatting compliance, action verb usage, and achievement quantification. Second, a keyword analysis panel showing every required skill and term from the job description as either matched (present in your resume) or missing (absent but needed). Third, a set of issues and recommendations ranked as critical, warning, or informational, covering specific bullet points to rewrite, missing sections, and suggested power verbs for your industry and role level. Fourth, a fully optimized resume with rewritten bullet points, added keywords, and ATS-safe formatting applied.

The three-panel interface lets you view the original, the optimized version, and a direct side-by-side comparison. When you are satisfied, export the optimized resume as a PDF (via jsPDF), DOCX, or plain text with one click. The template supports seven target industries (Technology, Finance, Healthcare, Marketing, Engineering, Consulting, and Other) and four role levels: Entry, Mid, Senior, and Executive.

Because the entire pipeline (PDF text extraction, keyword matching, ATS scoring, and LLM rewriting) runs inside the browser, no resume data is ever uploaded to an external server. Compatible with GPT-4o, Claude 3.5 Sonnet, or a fully local WebLLM model for complete offline privacy.
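The matched/missing keyword classification can be sketched as a case-insensitive whole-word search. A hedged illustration only, assuming the job-description keywords have already been extracted; the function name and scoring formula are invented for this example, not the template's actual ATS algorithm:

```python
import re

def keyword_coverage(resume_text: str, jd_keywords: list[str]) -> dict:
    """Classify each job-description keyword as matched or missing,
    and derive a simple 0-100 coverage score from the ratio."""
    matched, missing = [], []
    text = resume_text.lower()
    for kw in jd_keywords:
        # \b boundaries so "Java" does not match inside "JavaScript".
        pattern = r"\b" + re.escape(kw.lower()) + r"\b"
        (matched if re.search(pattern, text) else missing).append(kw)
    score = round(100 * len(matched) / max(len(jd_keywords), 1))
    return {"matched": matched, "missing": missing, "score": score}
```

In the real template this coverage figure would be only one component of the overall score, alongside formatting, action verbs, and quantified achievements.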
Confidential Meeting Notes Summarizer
The Confidential Meeting Notes Summarizer is a two-stage, privacy-first AI agent that processes meeting notes or transcripts entirely inside the browser; no data ever leaves your device. What makes it technically unique is the anonymization pipeline that runs before the language model ever sees your text.

In stage one, a local BERT-based Named Entity Recognition (NER) model (onnx-community/bert-base-NER-ONNX, running via Transformers.js) scans your raw notes and detects persons (PER), organisations (ORG), and locations (LOC) with confidence scores above 0.85. Every detected entity is replaced with a neutral label like [PER-1], [ORG-2], or [LOC-3] before the text is passed anywhere else. An anonymization report shows you exactly what was replaced, what it was replaced with, and the entity type, so the process is fully transparent and auditable.

In stage two, four Python tools running via Pyodide perform deterministic pre-analysis on the anonymized text: action item extraction (patterns like "Action:", "TODO:", "will", "needs to", "assigned to"), decision detection ("decided", "agreed", "approved", "we will"), topic scoring across eight categories (budget, timeline, technical, product, HR, sales, strategy, risks), and meeting metadata extraction (word count, estimated duration, transcript vs. bullet format, participant count). These structured results are passed as a context block to a fully local WebLLM language model (Hermes-2-Pro-Mistral-7B via WebGPU), which generates a four-section prose output: Summary, Decisions, Key Discussions, and Tone & Dynamics.

Because the entire pipeline (BERT NER, Python analysis, and LLM inference) runs locally in the browser, this template is suitable for legal proceedings, HR interviews, board meetings, client calls, medical consultations, or any meeting where confidentiality is non-negotiable. No API key required. Works completely offline after initial model download.
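The stage-one replacement step can be sketched as follows, assuming NER output in the shape a token-classification pipeline typically returns (a detected word plus an entity group), with low-confidence detections already filtered out. The function name and the naive global string replacement are illustrative, not the template's actual implementation:

```python
def anonymize(text: str, entities: list[dict]) -> tuple[str, list[dict]]:
    """Replace each detected entity with a stable neutral label like
    [PER-1], and build the transparent replacement report. The same
    entity always maps to the same label, so references stay coherent."""
    counters: dict[str, int] = {}
    mapping: dict[str, str] = {}
    report = []
    out = text
    for ent in entities:
        word, group = ent["word"], ent["entity_group"]
        if word not in mapping:
            counters[group] = counters.get(group, 0) + 1
            mapping[word] = f"[{group}-{counters[group]}]"
            report.append({"original": word,
                           "replacement": mapping[word],
                           "type": group})
        # Naive substring replacement for brevity; a real pipeline
        # would splice by character offsets to avoid partial matches.
        out = out.replace(word, mapping[word])
    return out, report
```

Only the labelled text ever reaches the summarization model; the report stays on-device so you can audit every substitution.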
Choose how to run this agent
Requires an API key and an AgentOp account.
Download Agent
Use Security Settings Key (Recommended)
Use the API key you've already saved in Security Settings. Quick and convenient!
- No need to re-enter API key
- Works offline after download
- Centralized key management
Enter API Key Manually
Enter your API key now for this specific agent download.
- Use a different key for this agent
- One-time use (not saved)
- Works offline after download