| Airweave | A fully open-source context retrieval layer for AI agents. It connects to various apps, databases, and document stores, transforming their contents into a searchable knowledge base for AI consumption via a semantic search interface. | AI/Data Context |
| Atlassian | A secure bridge (like the Rovo MCP Server) that allows AI agents to read and interact with Jira, Confluence, and Compass data in real time. It enables summarization, search, and ticket/page creation using natural language. | Project Management/DevOps |
| CodeMind | Not a specific tool, but a name that often comes up in discussions of using Code Execution with MCP to make agents more efficient. The concept has the agent write code that calls tools as ordinary APIs, filtering data and running complex logic in the execution environment instead of passing every intermediate result through the model's context (see the sketch after this table). | AI Framework/Execution |
| Dart | A server that exposes Dart SDK commands to AI coding assistants. This allows the agent to perform code analysis, fix lints, format code, run tests, and manage packages for Dart and Flutter projects. | Programming Language/CLI |
| Figma Dev Mode MCP | A beta feature that surfaces rich design context from Figma (components, styles, variables, screenshots) into AI coding tools. This helps the AI generate code that is more consistent with the established design system. | Design-to-Code |
| GitHub | (General MCP context) Connects AI agents to GitHub repositories, issues, and pull requests to enable automated actions, such as analyzing code, suggesting fixes, summarizing PRs, and interacting with project boards. | Version Control/DevOps |
| Harness | Provides integration with Harness CI/CD, Chaos Engineering, and other platform tools. It allows agents to discover, run, monitor, and analyze pipelines and chaos experiments through natural language prompts. | DevOps/SRE |
| Heroku | Enables AI agents to interact with and manage apps hosted on the Heroku platform. It provides tools for deploying, managing configurations, scaling dynos, and defining agentic workflows within the Heroku ecosystem. | Cloud Platform/PaaS |
| Linear | Provides comprehensive project and issue management capabilities. The server exposes tools for listing, creating, updating, and summarizing Linear issues and projects, and for coordinating teams, directly through conversation. | Project Management |
| Locofy | Allows AI agents (specifically in AI-first IDEs) to retrieve, integrate, and extend code generated by the Locofy.ai platform. It helps sync Locofy-generated components and dependencies into a local codebase. | Design-to-Code |
| MongoDB | (General MCP context) Connects AI agents to MongoDB databases to perform data queries, structural analysis, and administration tasks using natural language instead of MongoDB's query syntax. | Database |
| Neon | A server that allows AI agents to interact with Neon Postgres databases using natural language commands, enabling project/branch management, SQL execution, database migrations, and query optimization. | Serverless Database |
| Netlify | (General MCP context) Connects AI agents to the Netlify platform to manage deployments, access logs, configure build settings, and perform platform actions via conversational prompts. | Web Hosting/DevOps |
| Notion | (General MCP context) Integrates AI agents with Notion databases and pages to search, summarize, create, and update documents, meeting notes, and knowledge bases using conversational commands. | Productivity/Knowledge Base |
| PayPal | (General MCP context) Typically exposes e-commerce and payment processing capabilities to AI agents, allowing them to integrate payment flows, manage transactions, or query account status. | Payment Processing |
| Perplexity Ask | Integrates the Perplexity search and research engine into AI agents. This allows the agent to perform deep, contextual research and quickly retrieve summarized, cited answers to complex queries. | Search/Research |
| Pinecone | Connects AI agents to a Pinecone vector database. This allows the agent to perform semantic searches, manage vector data, and retrieve highly relevant context for RAG (Retrieval-Augmented Generation) applications. | Vector Database |
| Prisma | (General MCP context) Exposes capabilities for managing database schemas and interacting with the ORM (Object-Relational Mapper), allowing the AI to query databases, perform migrations, and model data. | ORM/Database |
| Redis | (General MCP context) Allows AI agents to interact with Redis data structures (key-value store, cache, message broker) for tasks like accessing real-time data, managing caching, or queuing jobs. | Caching/Database |
| Sequential Thinking | A utility server that gives the AI a structured, dynamic, and reflective problem-solving tool. It forces the agent to break complex problems down into manageable, revisable, and trackable thought steps (see the sketch after this table). | AI Framework/Logic |
| SonarQube | (General MCP context) Integrates with the SonarQube platform for code quality and security. This allows AI agents to analyze code, retrieve vulnerability reports, and suggest code fixes based on SonarQube's findings. | Code Quality/Security |
| Stripe | (General MCP context) Exposes payment processing and subscription management tools to AI agents, allowing them to create customers, manage invoices, or check billing status. | Payment Processing |
| Supabase | (General MCP context) Connects AI agents to the Supabase platform (Postgres, Auth, Storage), allowing them to manage database tables, run queries, and interact with backend services. | Serverless Backend |
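
Most of the servers above follow the same connection pattern: an MCP client launches (or connects to) the server, discovers its tools, and invokes them on the agent's behalf. The sketch below, written against the Python `mcp` SDK, shows that flow for a locally launched server; the npm package name passed to `npx` and the `"search"` tool call are placeholders, since each vendor publishes its server and tool names under its own scheme.

```python
# Minimal sketch of how an agent-side MCP client talks to one of the servers
# above. Assumes the Python `mcp` SDK; the npm package name is a placeholder.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",
    args=["-y", "some-vendor-mcp-server"],  # placeholder package name
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()            # MCP handshake
            tools = await session.list_tools()    # discover the server's tools
            print([t.name for t in tools.tools])
            # Invoke a tool by name; tool names and arguments vary per server.
            result = await session.call_tool(
                "search", arguments={"query": "open bugs assigned to me"}
            )
            print(result.content)

asyncio.run(main())
```

Hosted servers (such as Atlassian's Rovo MCP Server) are typically reached over HTTP rather than stdio, but the list-then-call flow is the same.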
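The Sequential Thinking entry is not a data connector but a tool the model calls repeatedly, once per thought step. A single step might look like the following; the tool and parameter names are taken from the reference server's published schema and should be treated as assumptions here.

```python
# One step of the Sequential Thinking tool. `session` is an initialized
# ClientSession as in the connection sketch above; parameter names are assumed
# from the reference server's tool schema.
from mcp import ClientSession

async def think_step(session: ClientSession, thought: str, n: int, total: int) -> None:
    await session.call_tool(
        "sequentialthinking",
        arguments={
            "thought": thought,
            "thoughtNumber": n,             # which step this is
            "totalThoughts": total,         # current estimate, revisable later
            "nextThoughtNeeded": n < total, # whether the agent plans to continue
        },
    )
```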
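The code-execution idea referenced in the CodeMind row also benefits from a concrete illustration: instead of routing every tool result through the model's context window, the agent writes a short program that calls tools as ordinary functions, filters the data in a sandbox, and returns only the summary. The sketch below is hypothetical; the `call_tool` helper and the tool and field names are illustrative assumptions, not a real API.

```python
# Hypothetical sketch of "code execution with MCP": the agent emits code like
# this, the harness runs it in a sandbox, and only the final summary re-enters
# the model's context. `call_tool` and the tool/field names are illustrative.
from typing import Any

def call_tool(name: str, **arguments: Any) -> Any:
    """Assumed harness-provided bridge that forwards a call to an MCP server."""
    raise NotImplementedError("provided by the agent's execution environment")

def summarize_stale_issues(max_items: int = 5) -> list[dict[str, str]]:
    # Pull a potentially large result set from an issue-tracker MCP server...
    issues = call_tool("list_issues", state="open")
    # ...then filter and trim it in the sandbox instead of in the model's context.
    stale = [i for i in issues if i.get("days_since_update", 0) > 30]
    stale.sort(key=lambda i: i["days_since_update"], reverse=True)
    return [{"id": i["id"], "title": i["title"]} for i in stale[:max_items]]
```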