Compare commits

...

32 Commits

2a9c5780c3 fix: gitea mcp issues
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 4s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m41s
Monorepo Pipeline / 🧹 Lint (push) Successful in 5m17s
Monorepo Pipeline / 🏗️ Build (push) Successful in 4m16s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
🏥 Server Maintenance / 🧹 Prune & Clean (push) Failing after 5s
2026-03-12 13:33:10 +01:00
fbfc9feab0 fix(next-config): add serverActions.allowedOrigins to support branch deployments
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 25s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m28s
Monorepo Pipeline / 🧹 Lint (push) Successful in 4m22s
Monorepo Pipeline / 🏗️ Build (push) Successful in 4m3s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
🏥 Server Maintenance / 🧹 Prune & Clean (push) Failing after 4s
2026-03-10 23:29:12 +01:00
8486261555 fix(mcp): refactor all mcp servers to use multi-session sse transport
All checks were successful
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 1s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m1s
Monorepo Pipeline / 🏗️ Build (push) Successful in 2m44s
Monorepo Pipeline / 🧹 Lint (push) Successful in 2m55s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
2026-03-10 13:32:16 +01:00
5e1f2669e6 feat(kabelfachmann-mcp): add local Ollama support for KABELFACHMANN_LLM_PROVIDER
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 2s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m4s
Monorepo Pipeline / 🏗️ Build (push) Successful in 2m53s
Monorepo Pipeline / 🧹 Lint (push) Successful in 3m2s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
🏥 Server Maintenance / 🧹 Prune & Clean (push) Failing after 4s
2026-03-08 14:01:37 +01:00
541f1c17b7 feat(mcps): add kabelfachmann MCP with Kabelhandbuch integration and remove legacy PM2 orchestration
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 2s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m6s
Monorepo Pipeline / 🏗️ Build (push) Successful in 2m52s
Monorepo Pipeline / 🧹 Lint (push) Successful in 3m1s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
🏥 Server Maintenance / 🧹 Prune & Clean (push) Failing after 4s
2026-03-08 01:01:43 +01:00
048fafa3db fix(payload-ai): remove phantom SCSS import and disable dynamic provider injection
All checks were successful
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 1s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m3s
Monorepo Pipeline / 🏗️ Build (push) Successful in 3m34s
Monorepo Pipeline / 🧹 Lint (push) Successful in 3m56s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
- Remove non-existent ChatWindow.scss import causing Webpack resolution errors
- Disable dynamic ChatWindowProvider injection (now statically declared in host app)
- Revert build script to tsc (no SCSS copy needed)
2026-03-07 11:46:41 +01:00
77e2c4f9b6 ci: remove unused zod dependencies across mcp servers to satisfy depcheck validation
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 2s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m21s
Monorepo Pipeline / 🏗️ Build (push) Successful in 3m48s
Monorepo Pipeline / 🧹 Lint (push) Successful in 3m59s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
🏥 Server Maintenance / 🧹 Prune & Clean (push) Failing after 5s
2026-03-06 17:56:14 +01:00
4eb1aaf640 ci: fix strict TS overloaded parameter matching inside useChat and payload tool bindings
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 2s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m14s
Monorepo Pipeline / 🧹 Lint (push) Failing after 2m4s
Monorepo Pipeline / 🏗️ Build (push) Successful in 2m39s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
2026-03-06 16:10:50 +01:00
61f2f83e0c ci: fix unhandled typescript exceptions and strict eslint errors caught by the pipeline
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 2s
Monorepo Pipeline / 🧹 Lint (push) Failing after 9s
Monorepo Pipeline / 🏗️ Build (push) Failing after 9s
Monorepo Pipeline / 🧪 Test (push) Failing after 9s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
2026-03-06 15:49:45 +01:00
2dc61e4937 chore: finalize containerized MCP setup and fix memory-mcp native module issues
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 2s
Monorepo Pipeline / 🧹 Lint (push) Failing after 9s
Monorepo Pipeline / 🧪 Test (push) Failing after 9s
Monorepo Pipeline / 🏗️ Build (push) Failing after 8s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
2026-03-06 15:20:52 +01:00
a6ca876823 chore: release v1.9.17
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 2s
Monorepo Pipeline / 🧹 Lint (push) Failing after 11s
Monorepo Pipeline / 🧪 Test (push) Failing after 9s
Monorepo Pipeline / 🏗️ Build (push) Failing after 9s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
🏥 Server Maintenance / 🧹 Prune & Clean (push) Failing after 4s
2026-03-06 01:32:12 +01:00
f615565323 fix(glitchtip-mcp): remove unused zod dependency blocking CI release 2026-03-06 01:32:11 +01:00
fcbf388ef8 chore: release v1.9.16
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 5s
Monorepo Pipeline / 🧹 Lint (push) Failing after 3m10s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m43s
Monorepo Pipeline / 🏗️ Build (push) Successful in 3m42s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
2026-03-06 00:55:54 +01:00
cbed10052b feat(payload-ai): implement context-aware Payload CMS Agent
- ChatWindow now gathers page context (URL, collectionSlug, document ID)
- chatEndpoint fetches real AIChatPermissions from database
- Agent uses OpenRouter Gemini 3 Flash with maxSteps: 10 for autonomous multi-step tool execution
- Fallback default collections when no permissions configured
2026-03-06 00:55:39 +01:00
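The bullet points above describe the ChatWindow gathering page context (URL, collectionSlug, document ID) before calling the chat endpoint. As a hedged illustration of that idea only — the component's real implementation is not shown in this compare, and the URL shape assumed here is the conventional Payload admin route `/admin/collections/<slug>/<id>`:

```typescript
// Illustrative sketch, not the actual ChatWindow code from payload-ai.
interface PageContext {
  url: string;
  collectionSlug?: string;
  documentId?: string;
}

// Parse the admin URL to recover which collection/document the user is viewing.
function gatherPageContext(url: string): PageContext {
  const match = new URL(url).pathname.match(/\/collections\/([^/]+)(?:\/([^/]+))?/);
  return {
    url,
    collectionSlug: match?.[1],
    documentId: match?.[2],
  };
}
```

The chat endpoint could then use `collectionSlug` to scope which AIChatPermissions record it fetches, falling back to default collections when none is configured.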
560213680c fix(infra): add missing Gitea cleanup commands to maintenance script
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 6s
Monorepo Pipeline / 🧪 Test (push) Successful in 2m34s
Monorepo Pipeline / 🧹 Lint (push) Failing after 3m46s
Monorepo Pipeline / 🏗️ Build (push) Successful in 5m58s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
2026-03-05 23:23:09 +01:00
7e2542bf1f fix(infra): update volume ID for registry pruning
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Failing after 2s
Monorepo Pipeline / 🧹 Lint (push) Has been skipped
Monorepo Pipeline / 🧪 Test (push) Has been skipped
Monorepo Pipeline / 🏗️ Build (push) Has been skipped
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
2026-03-05 23:08:29 +01:00
df6bef7345 feat(klz-payload-mcp): revert to production URL for CMS operations 2026-03-05 21:51:46 +01:00
aa57e8c48b feat(klz-payload-mcp): implement JWT authentication for robust CMS updates 2026-03-05 17:55:59 +01:00
822e8a9d0f feat(mcps): add full CRUD capabilities to klz-payload-mcp 2026-03-05 12:53:47 +01:00
f0d1fb6647 feat(mcps): add mutation tools for pages and posts to klz-payload-mcp 2026-03-05 12:50:54 +01:00
751ffd59a0 feat(mcps): add pages and posts functions to klz-payload-mcp 2026-03-05 12:47:24 +01:00
d0a17a8a31 feat(mcps): add klz-payload-mcp on port 3006 for customer data 2026-03-05 12:42:20 +01:00
daa2750f89 feat(mcps): unify SSE/Stdio transport and fix handshake timeouts 2026-03-05 12:04:19 +01:00
29423123b3 feat(mcps): add glitchtip-mcp on port 3005 2026-03-05 11:16:23 +01:00
5c10eb0009 feat(mcps): add wiki/packages/releases/projects to gitea + new umami & serpbear MCPs 2026-03-05 10:52:05 +01:00
dca35a9900 chore: update pnpm lockfile for gitea-mcp new dependencies
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 2s
Monorepo Pipeline / 🧪 Test (push) Successful in 1m19s
Monorepo Pipeline / 🧹 Lint (push) Successful in 4m2s
Monorepo Pipeline / 🏗️ Build (push) Successful in 3m40s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
🏥 Server Maintenance / 🧹 Prune & Clean (push) Failing after 18s
2026-03-04 15:34:45 +01:00
4430d473cb feat(mcps): enhance Gitea MCP with new tools and fix Memory MCP stdio execution
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 8s
Monorepo Pipeline / 🧹 Lint (push) Failing after 12s
Monorepo Pipeline / 🧪 Test (push) Failing after 13s
Monorepo Pipeline / 🏗️ Build (push) Failing after 13s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
2026-03-04 15:20:15 +01:00
0c27e3b5d8 fix(ci): implement robust gitea registry auth token discovery to replace docker/login-action
Some checks failed
Monorepo Pipeline / ⚡ Prioritize Release (push) Successful in 2s
Monorepo Pipeline / 🧹 Lint (push) Failing after 10s
Monorepo Pipeline / 🧪 Test (push) Failing after 10s
Monorepo Pipeline / 🏗️ Build (push) Failing after 10s
Monorepo Pipeline / 🚀 Release (push) Has been skipped
Monorepo Pipeline / 🐳 Build Gatekeeper (Product) (push) Has been skipped
Monorepo Pipeline / 🐳 Build Build-Base (push) Has been skipped
Monorepo Pipeline / 🐳 Build Production Runtime (push) Has been skipped
2026-03-04 11:07:01 +01:00
616d8a039b feat(gitea): add branch and event filters to pipeline discovery 2026-03-04 10:07:41 +01:00
ee3d7714c2 feat(mcps): migrate gitea and memory MCPs to SSE transport on pm2 2026-03-04 10:05:08 +01:00
ddf896e3f9 fix(gitea): prevent mcp server crash if token is missing 2026-03-03 20:53:47 +01:00
b9d0199115 fix(mcps): natively load .env for production start scripts 2026-03-03 19:40:50 +01:00
73 changed files with 5217 additions and 931 deletions

@@ -0,0 +1,5 @@
+---
+"@mintel/next-config": patch
+---
+
+fix: add serverActions.allowedOrigins to support branch deployments
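The changeset above only records the release note; the actual `@mintel/next-config` change is not part of this compare. As a hedged sketch of what such a setting looks like in Next.js (the config shape follows Next.js conventions, and the origin values are placeholders, not taken from the repo):

```typescript
// Sketch only: the real next-config package is not shown in this diff,
// and the origins below are hypothetical branch-deployment hosts.
interface ServerActionsConfig {
  allowedOrigins: string[];
}

const serverActions: ServerActionsConfig = {
  // Without an allow-list, Server Action POSTs arriving through a branch
  // deployment's host or proxy can be rejected as cross-origin.
  allowedOrigins: ["*.branch.example.com", "localhost:3000"],
};

// In a real app this object would be the default export of next.config.ts.
const nextConfig = {
  experimental: { serverActions },
};
```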

.env

@@ -3,6 +3,7 @@ IMAGE_TAG=v1.8.19
 PROJECT_NAME=at-mintel
 PROJECT_COLOR=#82ed20
 GITEA_TOKEN=ccce002e30fe16a31a6c9d5a414740af2f72a582
+GITEA_HOST=https://git.infra.mintel.me
 OPENROUTER_API_KEY=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
 ZYTE_API_KEY=1f0f74726f044f55aaafc7ead32cd489
 REPLICATE_API_KEY=r8_W3grtpXMRfi0u3AM9VdkKbuWdZMmhwU2Tn0yt
@@ -11,6 +12,11 @@ DATA_FOR_SEO_API_KEY=bWFyY0BtaW50ZWwubWU6MjQ0YjBjZmIzOGY3NTIzZA==
 DATA_FOR_SEO_LOGIN=marc@mintel.me
 DATA_FOR_SEO_PASSWORD=244b0cfb38f7523d
+
+# Kabelfachmann LLM Configuration
+KABELFACHMANN_LLM_PROVIDER=openrouter
+KABELFACHMANN_OLLAMA_MODEL=qwen3.5
+KABELFACHMANN_OLLAMA_HOST=http://host.docker.internal:11434
 # Authentication
 GATEKEEPER_PASSWORD=mintel
 AUTH_COOKIE_NAME=mintel_gatekeeper_session
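The new `KABELFACHMANN_*` variables switch kabelfachmann-mcp between OpenRouter and a local Ollama instance. A minimal sketch of how a resolver might consume them — the function name and fallback behavior are assumptions, since the package's actual code is not part of this diff; only the defaults mirror the values added to `.env` above:

```typescript
// Hypothetical resolver; kabelfachmann-mcp's real implementation may differ.
type LlmProvider = "openrouter" | "ollama";

interface LlmConfig {
  provider: LlmProvider;
  ollamaModel: string;
  ollamaHost: string;
}

function resolveLlmConfig(env: Record<string, string | undefined>): LlmConfig {
  // Anything other than an explicit "ollama" falls back to OpenRouter.
  const provider: LlmProvider =
    env.KABELFACHMANN_LLM_PROVIDER === "ollama" ? "ollama" : "openrouter";
  return {
    provider,
    // Defaults mirror the values added to .env above.
    ollamaModel: env.KABELFACHMANN_OLLAMA_MODEL ?? "qwen3.5",
    ollamaHost: env.KABELFACHMANN_OLLAMA_HOST ?? "http://host.docker.internal:11434",
  };
}
```

Calling `resolveLlmConfig(process.env)` at startup keeps the provider choice out of application code.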


@@ -1,5 +1,5 @@
 # Project
-IMAGE_TAG=v1.9.10
+IMAGE_TAG=v1.9.17
 PROJECT_NAME=sample-website
 PROJECT_COLOR=#82ed20


@@ -199,12 +199,31 @@ jobs:
       - name: 🐳 Set up Docker Buildx
         uses: docker/setup-buildx-action@v3

-      - name: 🔐 Registry Login
-        uses: docker/login-action@v3
-        with:
-          registry: git.infra.mintel.me
-          username: ${{ github.repository_owner }}
-          password: ${{ secrets.NPM_TOKEN }}
+      - name: 🔐 Discover Valid Registry Token
+        id: discover_token
+        run: |
+          echo "Testing available secrets against git.infra.mintel.me Docker registry..."
+          TOKENS="${{ secrets.GITEA_PAT }} ${{ secrets.MINTEL_PRIVATE_TOKEN }} ${{ secrets.NPM_TOKEN }}"
+          USERS="${{ github.repository_owner }} ${{ github.actor }} marcmintel mintel mmintel"
+          for TOKEN in $TOKENS; do
+            if [ -n "$TOKEN" ]; then
+              for U in $USERS; do
+                if [ -n "$U" ]; then
+                  echo "Attempting docker login for a token with user $U..."
+                  if echo "$TOKEN" | docker login git.infra.mintel.me -u "$U" --password-stdin > /dev/null 2>&1; then
+                    echo "✅ Successfully authenticated with a token."
+                    echo "::add-mask::$TOKEN"
+                    echo "token=$TOKEN" >> $GITHUB_OUTPUT
+                    echo "user=$U" >> $GITHUB_OUTPUT
+                    exit 0
+                  fi
+                fi
+              done
+            fi
+          done
+          echo "❌ All available tokens failed to authenticate!"
+          exit 1

       - name: 🏗️ Build & Push ${{ matrix.name }}
         uses: docker/build-push-action@v5
@@ -216,7 +235,7 @@ jobs:
           provenance: false
           push: true
           secrets: |
-            NPM_TOKEN=${{ secrets.NPM_TOKEN }}
+            NPM_TOKEN=${{ steps.discover_token.outputs.token }}
           tags: |
             git.infra.mintel.me/mmintel/${{ matrix.image }}:${{ github.ref_name }}
             git.infra.mintel.me/mmintel/${{ matrix.image }}:latest

.gitignore (vendored)

@@ -51,3 +51,7 @@ apps/web/out/estimations/
 # Memory MCP
 data/qdrant/
 packages/memory-mcp/models/
+
+# Kabelfachmann MCP
+packages/kabelfachmann-mcp/data/
+packages/kabelfachmann-mcp/models/


@@ -1,6 +1,6 @@
 {
   "name": "sample-website",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "private": true,
   "type": "module",
   "scripts": {


@@ -3,14 +3,88 @@ services:
     image: qdrant/qdrant:latest
     container_name: qdrant-mcp
     ports:
-      - "6333:6333"
-      - "6334:6334"
+      - "6335:6333"
+      - "6336:6334"
     volumes:
       - ./data/qdrant:/qdrant/storage
     restart: unless-stopped
     networks:
       - mcp-network
+
+  gitea-mcp:
+    build:
+      context: ./packages/gitea-mcp
+    container_name: gitea-mcp
+    env_file:
+      - .env
+    ports:
+      - "3001:3001"
+    restart: unless-stopped
+    networks:
+      - mcp-network
+
+  memory-mcp:
+    build:
+      context: ./packages/memory-mcp
+    container_name: memory-mcp
+    env_file:
+      - .env
+    ports:
+      - "3002:3002"
+    depends_on:
+      - qdrant
+    restart: unless-stopped
+    networks:
+      - mcp-network
+
+  umami-mcp:
+    build:
+      context: ./packages/umami-mcp
+    container_name: umami-mcp
+    env_file:
+      - .env
+    ports:
+      - "3003:3003"
+    restart: unless-stopped
+    networks:
+      - mcp-network
+
+  serpbear-mcp:
+    build:
+      context: ./packages/serpbear-mcp
+    container_name: serpbear-mcp
+    env_file:
+      - .env
+    ports:
+      - "3004:3004"
+    restart: unless-stopped
+    networks:
+      - mcp-network
+
+  glitchtip-mcp:
+    build:
+      context: ./packages/glitchtip-mcp
+    container_name: glitchtip-mcp
+    env_file:
+      - .env
+    ports:
+      - "3005:3005"
+    restart: unless-stopped
+    networks:
+      - mcp-network
+
+  klz-payload-mcp:
+    build:
+      context: ./packages/klz-payload-mcp
+    container_name: klz-payload-mcp
+    env_file:
+      - .env
+    ports:
+      - "3006:3006"
+    restart: unless-stopped
+    networks:
+      - mcp-network

 networks:
   mcp-network:
     driver: bridge

eslint-errors-2.txt (new file, 69 lines)

@@ -0,0 +1,69 @@

/Users/marcmintel/Projects/at-mintel/packages/gitea-mcp/src/index.ts
 11:0 error Parsing error: Identifier expected

/Users/marcmintel/Projects/at-mintel/packages/glitchtip-mcp/src/index.ts
 124:19 warning 'res' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/klz-payload-mcp/src/index.ts
 39:18 warning 'e' is defined but never used. Allowed unused caught errors must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/memory-mcp/src/qdrant.test.ts
 7:52 warning 'text' is defined but never used. Allowed unused args must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/page-audit/src/report.ts
 7:47 warning 'PageAuditData' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 7:62 warning 'AuditIssue' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/components/FieldGenerators/AiFieldButton.tsx
 11:13 warning 'value' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/components/FieldGenerators/GenerateSlugButton.tsx
 20:21 warning 'replaceState' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 21:13 warning 'value' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/components/FieldGenerators/GenerateThumbnailButton.tsx
 21:13 warning 'value' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/components/OptimizeButton.tsx
 5:10 warning 'Button' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/tools/mcpAdapter.ts
 44:15 warning 'toolSchema' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/tools/memoryDb.ts
 89:31 warning 'query' is defined but never used. Allowed unused args must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/tools/payloadLocal.ts
 3:40 warning 'User' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/types.ts
 1:15 warning 'Plugin' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/ConceptPDF.tsx
 4:18 warning 'PDFPage' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 5:10 warning 'pdfStyles' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/EstimationPDF.tsx
 4:18 warning 'PDFPage' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 5:10 warning 'pdfStyles' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 54:11 warning 'getPageNum' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/InfoPDF.tsx
 5:13 warning 'PDFPage' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 12:5 warning 'pdfStyles' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/pdf/SharedUI.tsx
 528:5 warning 'bankData' is defined but never used. Allowed unused args must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/pdf/SimpleLayout.tsx
 4:52 warning 'PDFText' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 5:26 warning 'pdfStyles' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/seo-engine/src/report.ts
 5:3 warning 'TopicCluster' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 6:3 warning 'ContentGap' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 7:3 warning 'CompetitorRanking' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

✖ 28 problems (1 error, 27 warnings)


eslint-errors.txt (new file, 97 lines)

@@ -0,0 +1,97 @@

/Users/marcmintel/Projects/at-mintel/packages/gitea-mcp/src/index.ts
 12:5 warning 'Resource' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 14:10 warning 'z' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 427:30 warning 'e' is defined but never used. Allowed unused caught errors must match /^_/u @typescript-eslint/no-unused-vars
 745:50 error Unnecessary escape character: \/ no-useless-escape
 745:60 error Unnecessary escape character: \/ no-useless-escape
 799:54 error Unnecessary escape character: \/ no-useless-escape
 799:64 error Unnecessary escape character: \/ no-useless-escape

/Users/marcmintel/Projects/at-mintel/packages/glitchtip-mcp/src/index.ts
 124:19 warning 'res' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/klz-payload-mcp/src/index.ts
 39:18 warning 'e' is defined but never used. Allowed unused caught errors must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/memory-mcp/src/qdrant.test.ts
 7:52 warning 'text' is defined but never used. Allowed unused args must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/page-audit/src/report.ts
 7:47 warning 'PageAuditData' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 7:62 warning 'AuditIssue' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/chatPlugin.ts
 1:15 warning 'Config' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 10:17 error 'config' is never reassigned. Use 'const' instead prefer-const
 48:37 warning 'req' is defined but never used. Allowed unused args must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/components/ChatWindow/index.tsx
 43:5 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment
 44:63 warning 'setMessages' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/components/FieldGenerators/AiFieldButton.tsx
 11:13 warning 'value' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/components/FieldGenerators/GenerateSlugButton.tsx
 20:21 warning 'replaceState' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 21:13 warning 'value' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/components/FieldGenerators/GenerateThumbnailButton.tsx
 21:13 warning 'value' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/components/OptimizeButton.tsx
 5:10 warning 'Button' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/endpoints/chatEndpoint.ts
 96:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment
 100:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/tools/mcpAdapter.ts
 44:15 warning 'toolSchema' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 53:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/tools/memoryDb.ts
 50:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment
 88:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment
 89:31 warning 'query' is defined but never used. Allowed unused args must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/tools/payloadLocal.ts
 3:40 warning 'User' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 25:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment
 45:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment
 61:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment
 78:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment
 95:13 error Use "@ts-expect-error" instead of "@ts-ignore", as "@ts-ignore" will do nothing if the following line is error-free @typescript-eslint/ban-ts-comment

/Users/marcmintel/Projects/at-mintel/packages/payload-ai/src/types.ts
 1:15 warning 'Plugin' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/ConceptPDF.tsx
 4:18 warning 'PDFPage' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 5:10 warning 'pdfStyles' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/EstimationPDF.tsx
 4:18 warning 'PDFPage' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 5:10 warning 'pdfStyles' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 54:11 warning 'getPageNum' is assigned a value but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/InfoPDF.tsx
 5:13 warning 'PDFPage' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 12:5 warning 'pdfStyles' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/pdf/SharedUI.tsx
 528:5 warning 'bankData' is defined but never used. Allowed unused args must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/pdf-library/src/components/pdf/SimpleLayout.tsx
 4:52 warning 'PDFText' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 5:26 warning 'pdfStyles' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

/Users/marcmintel/Projects/at-mintel/packages/seo-engine/src/report.ts
 5:3 warning 'TopicCluster' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 6:3 warning 'ContentGap' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars
 7:3 warning 'CompetitorRanking' is defined but never used. Allowed unused vars must match /^_/u @typescript-eslint/no-unused-vars

✖ 49 problems (16 errors, 33 warnings)
 1 error and 0 warnings potentially fixable with the `--fix` option.



@@ -6,10 +6,12 @@
     "build": "pnpm -r build",
     "dev": "pnpm -r dev",
     "dev:gatekeeper": "bash -c 'trap \"COMPOSE_PROJECT_NAME=gatekeeper docker-compose -f docker-compose.gatekeeper.yml down\" EXIT INT TERM; docker network create infra 2>/dev/null || true && COMPOSE_PROJECT_NAME=gatekeeper docker-compose -f docker-compose.gatekeeper.yml down && COMPOSE_PROJECT_NAME=gatekeeper docker-compose -f docker-compose.gatekeeper.yml up --build --remove-orphans'",
-    "dev:mcps:up": "docker-compose -f docker-compose.mcps.yml up -d",
+    "dev:mcps:up": "docker-compose -f docker-compose.mcps.yml up -d --build --remove-orphans",
     "dev:mcps:down": "docker-compose -f docker-compose.mcps.yml down",
     "dev:mcps:watch": "pnpm -r --filter=\"./packages/*-mcp\" exec tsc -w",
     "dev:mcps": "npm run dev:mcps:up && npm run dev:mcps:watch",
+    "start:mcps": "npm run dev:mcps:up",
+    "start:mcps:force": "docker-compose -f docker-compose.mcps.yml up -d --build --force-recreate --remove-orphans",
     "lint": "pnpm -r --filter='./packages/**' --filter='./apps/**' lint",
     "test": "pnpm -r test",
     "changeset": "changeset",
@@ -40,6 +42,7 @@
     "husky": "^9.1.7",
     "jsdom": "^27.4.0",
     "lint-staged": "^16.2.7",
+    "pm2": "^6.0.14",
     "prettier": "^3.8.1",
     "tsx": "^4.21.0",
     "typescript": "^5.0.0",
@@ -53,7 +56,7 @@
     "pino-pretty": "^13.1.3",
     "require-in-the-middle": "^8.0.1"
   },
-  "version": "1.9.10",
+  "version": "1.9.17",
  "pnpm": {
    "onlyBuiltDependencies": [
      "@parcel/watcher",


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/cli",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/cloner",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "type": "module",
   "main": "dist/index.js",
   "module": "dist/index.js",


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/concept-engine",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "description": "AI-powered web project concept generation and analysis",
   "type": "module",
   "main": "./dist/index.js",


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/content-engine",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "private": false,
   "type": "module",
   "main": "./dist/index.js",


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/eslint-config",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/estimation-engine",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "type": "module",
   "main": "./dist/index.js",
   "module": "./dist/index.js",


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/gatekeeper",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "type": "module",
   "scripts": {
     "dev": "next dev",


@@ -15,5 +15,5 @@ COPY --from=builder /app/package.json ./
 COPY --from=builder /app/node_modules ./node_modules
 COPY --from=builder /app/dist ./dist
-# Use node to run the compiled index.js
-ENTRYPOINT ["node", "dist/index.js"]
+# Use node to run the compiled start.js
+ENTRYPOINT ["node", "dist/start.js"]


@@ -1,20 +1,22 @@
 {
   "name": "@mintel/gitea-mcp",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "description": "Native Gitea MCP server for 100% Antigravity compatibility",
   "main": "dist/index.js",
   "type": "module",
   "scripts": {
     "build": "tsc",
-    "start": "node dist/index.js"
+    "start": "node dist/start.js"
   },
   "dependencies": {
     "@modelcontextprotocol/sdk": "^1.5.0",
-    "zod": "^3.23.8",
-    "axios": "^1.7.2"
+    "axios": "^1.7.2",
+    "dotenv": "^17.3.1",
+    "express": "^5.2.1"
   },
   "devDependencies": {
-    "typescript": "^5.5.3",
-    "@types/node": "^20.14.10"
+    "@types/express": "^5.0.6",
+    "@types/node": "^20.14.10",
+    "typescript": "^5.5.3"
   }
 }

File diff suppressed because it is too large


@@ -0,0 +1,16 @@
import { config } from 'dotenv';
import { resolve } from 'path';
import { fileURLToPath } from 'url';
const __dirname = fileURLToPath(new URL('.', import.meta.url));
// Try to load .env.local first (contains credentials usually)
config({ quiet: true, path: resolve(__dirname, '../../../.env.local') });
// Fallback to .env (contains defaults)
config({ quiet: true, path: resolve(__dirname, '../../../.env') });
// Now boot the compiled MCP index
import('./index.js').catch(err => {
console.error('Failed to start MCP Server:', err);
process.exit(1);
});
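The two config() calls above rely on dotenv's first-wins behavior: by default, config() never overwrites a variable that is already set, so values from .env.local (loaded first) take precedence over the .env defaults loaded afterwards. A minimal sketch of that merge rule, with no dotenv dependency (loadDefaults is a hypothetical helper, not part of dotenv):

```typescript
// Hypothetical helper mirroring dotenv's default rule: an existing key is
// never overwritten, so whichever file loads first wins.
function loadDefaults(env: Record<string, string>, parsed: Record<string, string>): void {
  for (const [key, value] of Object.entries(parsed)) {
    if (!(key in env)) env[key] = value; // only fill keys that are still unset
  }
}

const env: Record<string, string> = {};
// .env.local, loaded first (credentials)
loadDefaults(env, { GITEA_TOKEN: "secret-from-local" });
// .env, loaded second (defaults) — cannot clobber GITEA_TOKEN
loadDefaults(env, { GITEA_TOKEN: "placeholder", GITEA_URL: "https://git.example" });

console.log(env.GITEA_TOKEN); // "secret-from-local" — .env.local wins
console.log(env.GITEA_URL);   // "https://git.example" — filled from .env
```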


@@ -0,0 +1,15 @@
FROM node:20-bookworm-slim AS builder
WORKDIR /app
COPY package.json ./
RUN corepack enable pnpm && pnpm install --ignore-workspace
COPY tsconfig.json ./
COPY src ./src
RUN pnpm build
FROM node:20-bookworm-slim
WORKDIR /app
COPY --from=builder /app/package.json ./
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
ENTRYPOINT ["node", "dist/start.js"]


@@ -0,0 +1,24 @@
{
"name": "@mintel/glitchtip-mcp",
"version": "1.9.17",
"description": "GlitchTip Error Tracking MCP server for Mintel infrastructure",
"main": "dist/index.js",
"type": "module",
"scripts": {
"build": "tsc",
"start": "node dist/start.js",
"dev": "tsx watch src/index.ts"
},
"dependencies": {
"@modelcontextprotocol/sdk": "^1.5.0",
"axios": "^1.7.2",
"dotenv": "^17.3.1",
"express": "^5.2.1"
},
"devDependencies": {
"@types/express": "^5.0.6",
"@types/node": "^20.14.10",
"tsx": "^4.19.2",
"typescript": "^5.5.3"
}
}


@@ -0,0 +1,189 @@
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express, { Request, Response } from 'express';
import crypto from 'crypto';
import {
CallToolRequestSchema,
ListToolsRequestSchema,
Tool,
} from "@modelcontextprotocol/sdk/types.js";
import axios from "axios";
import https from "https";
const GLITCHTIP_BASE_URL = process.env.GLITCHTIP_BASE_URL || "https://glitchtip.infra.mintel.me";
const GLITCHTIP_API_KEY = process.env.GLITCHTIP_API_KEY;
if (!GLITCHTIP_API_KEY) {
console.error("Warning: GLITCHTIP_API_KEY is not set. API calls will fail.");
}
const httpsAgent = new https.Agent({
rejectUnauthorized: false, // For internal infra
});
const glitchtipClient = axios.create({
baseURL: `${GLITCHTIP_BASE_URL}/api/0`,
headers: { Authorization: `Bearer ${GLITCHTIP_API_KEY}` },
httpsAgent
});
const LIST_PROJECTS_TOOL: Tool = {
name: "glitchtip_list_projects",
description: "List all projects and organizations in GlitchTip",
inputSchema: { type: "object", properties: {} },
};
const LIST_ISSUES_TOOL: Tool = {
name: "glitchtip_list_issues",
description: "List issues (errors) for a specific project",
inputSchema: {
type: "object",
properties: {
organization_slug: { type: "string", description: "The organization slug" },
project_slug: { type: "string", description: "The project slug" },
query: { type: "string", description: "Optional query filter (e.g., 'is:unresolved')" },
limit: { type: "number", description: "Maximum number of issues to return (default: 20)" },
},
required: ["organization_slug", "project_slug"],
},
};
const GET_ISSUE_DETAILS_TOOL: Tool = {
name: "glitchtip_get_issue_details",
description: "Get detailed information about a specific issue, including stack trace",
inputSchema: {
type: "object",
properties: {
issue_id: { type: "string", description: "The ID of the issue" },
},
required: ["issue_id"],
},
};
const UPDATE_ISSUE_TOOL: Tool = {
name: "glitchtip_update_issue",
description: "Update the status of an issue (e.g., resolve it)",
inputSchema: {
type: "object",
properties: {
issue_id: { type: "string", description: "The ID of the issue" },
status: { type: "string", enum: ["resolved", "unresolved", "ignored"], description: "The new status" },
},
required: ["issue_id", "status"],
},
};
const server = new Server(
{ name: "glitchtip-mcp", version: "1.0.0" },
{ capabilities: { tools: {} } }
);
server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [
LIST_PROJECTS_TOOL,
LIST_ISSUES_TOOL,
GET_ISSUE_DETAILS_TOOL,
UPDATE_ISSUE_TOOL,
],
}));
server.setRequestHandler(CallToolRequestSchema, async (request) => {
if (request.params.name === "glitchtip_list_projects") {
try {
const res = await glitchtipClient.get('/projects/');
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "glitchtip_list_issues") {
const { organization_slug, project_slug, query, limit = 20 } = request.params.arguments as any;
try {
const res = await glitchtipClient.get(`/projects/${organization_slug}/${project_slug}/issues/`, {
params: { query, limit }
});
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "glitchtip_get_issue_details") {
const { issue_id } = request.params.arguments as any;
try {
const res = await glitchtipClient.get(`/issues/${issue_id}/`);
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "glitchtip_update_issue") {
const { issue_id, status } = request.params.arguments as any;
try {
await glitchtipClient.put(`/issues/${issue_id}/`, { status });
return { content: [{ type: "text", text: `Issue ${issue_id} status updated to ${status}.` }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
throw new Error(`Unknown tool: ${request.params.name}`);
});
async function run() {
const isStdio = process.argv.includes('--stdio');
if (isStdio) {
const { StdioServerTransport } = await import('@modelcontextprotocol/sdk/server/stdio.js');
const transport = new StdioServerTransport();
await server.connect(transport);
console.error('GlitchTip MCP server is running on stdio');
} else {
const app = express();
const transports = new Map<string, SSEServerTransport>();
app.use((req, _res, next) => {
console.error(`${req.method} ${req.url}`);
next();
});
app.get('/sse', async (req, res) => {
const sessionId = crypto.randomUUID();
console.error(`New SSE connection: ${sessionId}`);
const transport = new SSEServerTransport(`/message/${sessionId}`, res);
transports.set(sessionId, transport);
req.on('close', () => {
console.error(`SSE connection closed: ${sessionId}`);
transports.delete(sessionId);
});
await server.connect(transport);
});
app.post('/message/:sessionId', async (req: Request, res: Response) => {
const sessionId = req.params.sessionId;
const transport = transports.get(sessionId as string);
if (!transport) {
console.error(`No transport found for session: ${sessionId}`);
res.status(400).send('No active SSE connection for this session');
return;
}
await transport.handlePostMessage(req, res);
});
const PORT = process.env.GLITCHTIP_MCP_PORT || 3005;
app.listen(PORT, () => {
console.error(`GlitchTip MCP server running on http://localhost:${PORT}/sse`);
});
}
}
run().catch((err) => {
console.error("Fatal error:", err);
process.exit(1);
});
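The multi-session pattern above keeps one SSEServerTransport per connection in a Map keyed by a random session id: GET /sse creates and stores a transport, POST /message/:sessionId looks it back up, and the close handler evicts it. The bookkeeping can be sketched in isolation (SessionRegistry is an illustrative name, not an SDK class):

```typescript
import crypto from "crypto";

// Illustrative sketch of the session bookkeeping used above: one transport per
// SSE connection, looked up again when POST /message/:sessionId arrives.
class SessionRegistry<T> {
  private sessions = new Map<string, T>();

  open(transport: T): string {
    const id = crypto.randomUUID();
    this.sessions.set(id, transport);
    return id;
  }

  route(id: string): T | undefined {
    return this.sessions.get(id); // undefined → server responds 400, no active SSE stream
  }

  close(id: string): void {
    this.sessions.delete(id); // mirrors the req.on('close') cleanup
  }
}

const registry = new SessionRegistry<{ name: string }>();
const id = registry.open({ name: "transport-a" });
console.log(registry.route(id)?.name); // "transport-a"
registry.close(id);
console.log(registry.route(id)); // undefined — posting now would get a 400
```

The Map lives in process memory, which is why each MCP container runs a single server instance; sessions would not survive a restart or be visible to a second replica.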


@@ -0,0 +1,13 @@
import { config } from 'dotenv';
import { resolve } from 'path';
import { fileURLToPath } from 'url';
const __dirname = fileURLToPath(new URL('.', import.meta.url));
config({ quiet: true, path: resolve(__dirname, '../../../.env.local') });
config({ quiet: true, path: resolve(__dirname, '../../../.env') });
import('./index.js').catch(err => {
console.error('Failed to start GlitchTip MCP Server:', err);
process.exit(1);
});


@@ -0,0 +1,16 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": [
"src/**/*"
]
}


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/husky-config",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"


@@ -177,12 +177,31 @@ jobs:
       - name: 🐳 Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
-      - name: 🔐 Registry Login
-        uses: docker/login-action@v3
-        with:
-          registry: git.infra.mintel.me
-          username: ${{ github.repository_owner }}
-          password: ${{ secrets.NPM_TOKEN }}
+      - name: 🔐 Discover Valid Registry Token
+        id: discover_token
+        run: |
+          echo "Testing available secrets against git.infra.mintel.me Docker registry..."
+          TOKENS="${{ secrets.GITEA_PAT }} ${{ secrets.MINTEL_PRIVATE_TOKEN }} ${{ secrets.NPM_TOKEN }}"
+          USERS="${{ github.repository_owner }} ${{ github.actor }} marcmintel mintel mmintel"
+          for TOKEN in $TOKENS; do
+            if [ -n "$TOKEN" ]; then
+              for U in $USERS; do
+                if [ -n "$U" ]; then
+                  echo "Attempting docker login for a token with user $U..."
+                  if echo "$TOKEN" | docker login git.infra.mintel.me -u "$U" --password-stdin > /dev/null 2>&1; then
+                    echo "✅ Successfully authenticated with a token."
+                    echo "::add-mask::$TOKEN"
+                    echo "token=$TOKEN" >> $GITHUB_OUTPUT
+                    echo "user=$U" >> $GITHUB_OUTPUT
+                    exit 0
+                  fi
+                fi
+              done
+            fi
+          done
+          echo "❌ All available tokens failed to authenticate!"
+          exit 1
       - name: 🏗️ Docker Build & Push
         uses: docker/build-push-action@v5
@@ -197,7 +216,7 @@ jobs:
             NEXT_PUBLIC_TARGET=${{ needs.prepare.outputs.target }}
           push: true
           secrets: |
-            NPM_TOKEN=${{ secrets.NPM_TOKEN }}
+            NPM_TOKEN=${{ steps.discover_token.outputs.token }}
           tags: git.infra.mintel.me/mmintel/${{ github.event.repository.name }}:${{ needs.prepare.outputs.image_tag }}
 # ──────────────────────────────────────────────────────────────────────────────
@@ -262,7 +281,7 @@ jobs:
           set -e
           cd "/home/deploy/sites/${{ github.event.repository.name }}"
           chmod 600 "$ENV_FILE"
-          echo "${{ secrets.NPM_TOKEN }}" | docker login git.infra.mintel.me -u "${{ github.repository_owner }}" --password-stdin
+          echo "${{ steps.discover_token.outputs.token }}" | docker login git.infra.mintel.me -u "${{ steps.discover_token.outputs.user }}" --password-stdin
           docker compose -p "$PROJECT_NAME" --env-file "$ENV_FILE" pull
           docker compose -p "$PROJECT_NAME" --env-file "$ENV_FILE" up -d --remove-orphans
           docker system prune -f --filter "until=24h"
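The discovery step in this workflow is a brute-force cross-product: every non-empty candidate token is tried against every candidate user until one docker login succeeds, and the first hit is exported as the step's outputs. The control flow can be sketched as follows (discoverCredentials and tryLogin are hypothetical stand-ins, not part of the workflow):

```typescript
// Hypothetical stand-in for `docker login`; returns true when the pair authenticates.
type LoginFn = (user: string, token: string) => boolean;

function discoverCredentials(
  tokens: string[],
  users: string[],
  tryLogin: LoginFn,
): { user: string; token: string } | null {
  for (const token of tokens) {
    if (!token) continue; // skip unset secrets, like `[ -n "$TOKEN" ]`
    for (const user of users) {
      if (!user) continue;
      if (tryLogin(user, token)) return { user, token }; // first success wins (exit 0)
    }
  }
  return null; // every combination failed → the job exits 1
}

const found = discoverCredentials(
  ["", "pat-123"],
  ["mmintel", "marcmintel"],
  (u, t) => u === "marcmintel" && t === "pat-123",
);
console.log(found); // { user: "marcmintel", token: "pat-123" }
```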


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/infra",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"


@@ -2,12 +2,24 @@
 set -e
 # Configuration
-REGISTRY_DATA="/mnt/HC_Volume_104575103/registry-data/docker/registry/v2"
+REGISTRY_DATA="/mnt/HC_Volume_104796416/registry-data/docker/registry/v2"
 KEEP_TAGS=3
 echo "🏥 Starting Aggressive Mintel Infrastructure Optimization..."
-# 1. Prune Registry Tags (Filesystem level)
+# 1. Gitea Maintenance
+echo "🍵 Running Gitea Maintenance..."
+GITEA_CONTAINER=$(docker ps --format "{{.Names}}" | grep gitea | head -1 || true)
+if [ -n "$GITEA_CONTAINER" ]; then
+  # Run common Gitea cleanup tasks
+  docker exec -u git "$GITEA_CONTAINER" gitea admin cron run cleanup_old_repository_archives || true
+  docker exec -u git "$GITEA_CONTAINER" gitea admin cron run cleanup_upload_directory || true
+  docker exec -u git "$GITEA_CONTAINER" gitea admin cron run cleanup_packages || true
+  docker exec -u git "$GITEA_CONTAINER" gitea admin cron run garbage_collect_attachment || true
+  docker exec -u git "$GITEA_CONTAINER" gitea admin cron run garbage_collect_lfs || true
+fi
+# 2. Prune Registry Tags (Filesystem level)
 if [ -d "$REGISTRY_DATA" ]; then
   echo "🔍 Processing Registry tags..."
   for repo_dir in "$REGISTRY_DATA/repositories/mintel/"*; do
@@ -47,4 +59,4 @@ docker system prune -af --filter "until=24h"
 docker volume prune -f
 echo "✅ Optimization complete!"
-df -h /mnt/HC_Volume_104575103
+df -h /mnt/HC_Volume_104796416


@@ -1,6 +1,6 @@
 {
   "name": "@mintel/journaling",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "type": "module",
   "main": "./dist/index.js",
   "module": "./dist/index.js",


@@ -0,0 +1,15 @@
FROM node:20-bookworm-slim AS builder
WORKDIR /app
COPY package.json ./
RUN corepack enable pnpm && pnpm install --ignore-workspace
COPY tsconfig.json ./
COPY src ./src
RUN pnpm build
FROM node:20-bookworm-slim
WORKDIR /app
COPY --from=builder /app/package.json ./
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
ENTRYPOINT ["node", "dist/start.js"]


@@ -0,0 +1,24 @@
{
"name": "@mintel/klz-payload-mcp",
"version": "1.9.17",
"description": "KLZ PayloadCMS MCP server for technical product data and leads",
"main": "dist/index.js",
"type": "module",
"scripts": {
"build": "tsc",
"start": "node dist/start.js",
"dev": "tsx watch src/index.ts"
},
"dependencies": {
"@modelcontextprotocol/sdk": "^1.27.1",
"axios": "^1.7.2",
"dotenv": "^17.3.1",
"express": "^5.2.1"
},
"devDependencies": {
"@types/express": "^5.0.6",
"@types/node": "^20.14.10",
"tsx": "^4.19.2",
"typescript": "^5.5.3"
}
}


@@ -0,0 +1,635 @@
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express, { Request, Response } from 'express';
import {
CallToolRequestSchema,
ListToolsRequestSchema,
Tool,
} from "@modelcontextprotocol/sdk/types.js";
import axios from "axios";
import crypto from "crypto";
import https from "https";
const PAYLOAD_URL = process.env.PAYLOAD_URL || "https://klz-cables.com";
const PAYLOAD_API_KEY = process.env.PAYLOAD_API_KEY;
const PAYLOAD_EMAIL = process.env.PAYLOAD_EMAIL || "agent@mintel.me";
const PAYLOAD_PASSWORD = process.env.PAYLOAD_PASSWORD || "agentpassword123";
const httpsAgent = new https.Agent({
rejectUnauthorized: false, // For internal infra
});
let jwtToken: string | null = null;
const payloadClient = axios.create({
baseURL: `${PAYLOAD_URL}/api`,
headers: PAYLOAD_API_KEY ? { Authorization: `users API-Key ${PAYLOAD_API_KEY}` } : {},
httpsAgent
});
payloadClient.interceptors.request.use(async (config) => {
if (!PAYLOAD_API_KEY && !jwtToken && PAYLOAD_EMAIL && PAYLOAD_PASSWORD) {
try {
const loginRes = await axios.post(`${PAYLOAD_URL}/api/users/login`, {
email: PAYLOAD_EMAIL,
password: PAYLOAD_PASSWORD
}, { httpsAgent });
if (loginRes.data && loginRes.data.token) {
jwtToken = loginRes.data.token;
}
} catch (e) {
console.error("Failed to authenticate with Payload CMS using email/password.");
}
}
if (jwtToken && !PAYLOAD_API_KEY) {
config.headers.Authorization = `JWT ${jwtToken}`;
}
return config;
});
payloadClient.interceptors.response.use(res => res, async (error) => {
const originalRequest = error.config;
// If token expired, clear it and retry
if (error.response?.status === 401 && !originalRequest._retry && !PAYLOAD_API_KEY) {
originalRequest._retry = true;
jwtToken = null; // Forces re-authentication on next interceptor run
return payloadClient(originalRequest);
}
return Promise.reject(error);
});
const SEARCH_PRODUCTS_TOOL: Tool = {
name: "payload_search_products",
description: "Search for technical product specifications (cables, cross-sections) in KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
query: { type: "string", description: "Search query or part number" },
limit: { type: "number", description: "Maximum number of results" },
},
},
};
const GET_PRODUCT_TOOL: Tool = {
name: "payload_get_product",
description: "Get a specific product by its slug or ID",
inputSchema: {
type: "object",
properties: {
slug: { type: "string", description: "Product slug" },
id: { type: "string", description: "Product ID (if slug is not used)" }
},
},
};
const CREATE_PRODUCT_TOOL: Tool = {
name: "payload_create_product",
description: "Create a new product in KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
title: { type: "string", description: "Product title" },
slug: { type: "string", description: "Product slug" },
data: { type: "object", description: "Additional product data (JSON)", additionalProperties: true }
},
required: ["title"]
},
};
const UPDATE_PRODUCT_TOOL: Tool = {
name: "payload_update_product",
description: "Update an existing product in KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
id: { type: "string", description: "Product ID to update" },
data: { type: "object", description: "Product data to update (JSON)", additionalProperties: true }
},
required: ["id", "data"]
},
};
const DELETE_PRODUCT_TOOL: Tool = {
name: "payload_delete_product",
description: "Delete a product from KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
id: { type: "string", description: "Product ID to delete" }
},
required: ["id"]
},
};
const LIST_LEADS_TOOL: Tool = {
name: "payload_list_leads",
description: "List recent lead inquiries and contact requests",
inputSchema: {
type: "object",
properties: {
limit: { type: "number", description: "Maximum number of leads" },
},
},
};
const GET_LEAD_TOOL: Tool = {
name: "payload_get_lead",
description: "Get a specific lead by ID",
inputSchema: {
type: "object",
properties: {
id: { type: "string", description: "Lead ID" }
},
required: ["id"]
},
};
const CREATE_LEAD_TOOL: Tool = {
name: "payload_create_lead",
description: "Create a new lead in KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
email: { type: "string", description: "Lead email address" },
data: { type: "object", description: "Additional lead data (JSON)", additionalProperties: true }
},
required: ["email"]
},
};
const UPDATE_LEAD_TOOL: Tool = {
name: "payload_update_lead",
description: "Update an existing lead in KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
id: { type: "string", description: "Lead ID to update" },
data: { type: "object", description: "Lead data to update (JSON)", additionalProperties: true }
},
required: ["id", "data"]
},
};
const DELETE_LEAD_TOOL: Tool = {
name: "payload_delete_lead",
description: "Delete a lead from KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
id: { type: "string", description: "Lead ID to delete" }
},
required: ["id"]
},
};
const LIST_PAGES_TOOL: Tool = {
name: "payload_list_pages",
description: "List pages from KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
limit: { type: "number", description: "Maximum number of pages" },
},
},
};
const GET_PAGE_TOOL: Tool = {
name: "payload_get_page",
description: "Get a specific page by its slug or ID",
inputSchema: {
type: "object",
properties: {
slug: { type: "string", description: "Page slug" },
id: { type: "string", description: "Page ID (if slug is not used)" }
},
},
};
const LIST_POSTS_TOOL: Tool = {
name: "payload_list_posts",
description: "List posts/articles from KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
limit: { type: "number", description: "Maximum number of posts" },
},
},
};
const GET_POST_TOOL: Tool = {
name: "payload_get_post",
description: "Get a specific post by its slug or ID",
inputSchema: {
type: "object",
properties: {
slug: { type: "string", description: "Post slug" },
id: { type: "string", description: "Post ID (if slug is not used)" }
},
},
};
const CREATE_PAGE_TOOL: Tool = {
name: "payload_create_page",
description: "Create a new page in KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
title: { type: "string", description: "Page title" },
slug: { type: "string", description: "Page slug" },
data: { type: "object", description: "Additional page data (JSON)", additionalProperties: true }
},
required: ["title"]
},
};
const UPDATE_PAGE_TOOL: Tool = {
name: "payload_update_page",
description: "Update an existing page in KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
id: { type: "string", description: "Page ID to update" },
data: { type: "object", description: "Page data to update (JSON)", additionalProperties: true }
},
required: ["id", "data"]
},
};
const DELETE_PAGE_TOOL: Tool = {
name: "payload_delete_page",
description: "Delete a page from KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
id: { type: "string", description: "Page ID to delete" }
},
required: ["id"]
},
};
const CREATE_POST_TOOL: Tool = {
name: "payload_create_post",
description: "Create a new post in KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
title: { type: "string", description: "Post title" },
slug: { type: "string", description: "Post slug" },
data: { type: "object", description: "Additional post data (JSON)", additionalProperties: true }
},
required: ["title"]
},
};
const UPDATE_POST_TOOL: Tool = {
name: "payload_update_post",
description: "Update an existing post in KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
id: { type: "string", description: "Post ID to update" },
data: { type: "object", description: "Post data to update (JSON)", additionalProperties: true }
},
required: ["id", "data"]
},
};
const DELETE_POST_TOOL: Tool = {
name: "payload_delete_post",
description: "Delete a post from KLZ Payload CMS",
inputSchema: {
type: "object",
properties: {
id: { type: "string", description: "Post ID to delete" }
},
required: ["id"]
},
};
const server = new Server(
{ name: "klz-payload-mcp", version: "1.0.0" },
{ capabilities: { tools: {} } }
);
server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [
SEARCH_PRODUCTS_TOOL,
GET_PRODUCT_TOOL,
CREATE_PRODUCT_TOOL,
UPDATE_PRODUCT_TOOL,
DELETE_PRODUCT_TOOL,
LIST_LEADS_TOOL,
GET_LEAD_TOOL,
CREATE_LEAD_TOOL,
UPDATE_LEAD_TOOL,
DELETE_LEAD_TOOL,
LIST_PAGES_TOOL,
GET_PAGE_TOOL,
CREATE_PAGE_TOOL,
UPDATE_PAGE_TOOL,
DELETE_PAGE_TOOL,
LIST_POSTS_TOOL,
GET_POST_TOOL,
CREATE_POST_TOOL,
UPDATE_POST_TOOL,
DELETE_POST_TOOL
],
}));
server.setRequestHandler(CallToolRequestSchema, async (request) => {
if (request.params.name === "payload_search_products") {
const { query, limit = 10 } = request.params.arguments as any;
try {
const res = await payloadClient.get('/products', {
params: {
where: query ? {
or: [
{ title: { contains: query } },
{ slug: { contains: query } },
{ description: { contains: query } }
]
} : {},
limit
}
});
return { content: [{ type: "text", text: JSON.stringify(res.data.docs, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.response?.data?.errors?.[0]?.message || e.message}` }] };
}
}
if (request.params.name === "payload_get_product") {
const { slug, id } = request.params.arguments as any;
try {
if (id) {
const res = await payloadClient.get(`/products/${id}`);
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} else if (slug) {
const res = await payloadClient.get('/products', { params: { where: { slug: { equals: slug } }, limit: 1 } });
return { content: [{ type: "text", text: JSON.stringify(res.data.docs[0] || {}, null, 2) }] };
}
      return { isError: true, content: [{ type: "text", text: "Error: must provide slug or id" }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${e.response?.data?.errors?.[0]?.message || e.message}` }] };
    }
  }
  if (request.params.name === "payload_create_product") {
    const { title, slug, data = {} } = request.params.arguments as any;
    try {
      const payload = { title, slug, _status: 'draft', ...data };
      const res = await payloadClient.post('/products', payload);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_update_product") {
    const { id, data } = request.params.arguments as any;
    try {
      const res = await payloadClient.patch(`/products/${id}`, data);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_delete_product") {
    const { id } = request.params.arguments as any;
    try {
      const res = await payloadClient.delete(`/products/${id}`);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_list_leads") {
    const { limit = 10 } = request.params.arguments as any;
    try {
      const res = await payloadClient.get('/leads', {
        params: { limit, sort: '-createdAt' }
      });
      return { content: [{ type: "text", text: JSON.stringify(res.data.docs, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${e.response?.data?.errors?.[0]?.message || e.message}` }] };
    }
  }
  if (request.params.name === "payload_get_lead") {
    const { id } = request.params.arguments as any;
    try {
      const res = await payloadClient.get(`/leads/${id}`);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${e.response?.data?.errors?.[0]?.message || e.message}` }] };
    }
  }
  if (request.params.name === "payload_create_lead") {
    const { email, data = {} } = request.params.arguments as any;
    try {
      const payload = { email, ...data };
      const res = await payloadClient.post('/leads', payload);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_update_lead") {
    const { id, data } = request.params.arguments as any;
    try {
      const res = await payloadClient.patch(`/leads/${id}`, data);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_delete_lead") {
    const { id } = request.params.arguments as any;
    try {
      const res = await payloadClient.delete(`/leads/${id}`);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_list_pages") {
    const { limit = 10 } = request.params.arguments as any;
    try {
      const res = await payloadClient.get('/pages', { params: { limit } });
      return { content: [{ type: "text", text: JSON.stringify(res.data.docs, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${e.response?.data?.errors?.[0]?.message || e.message}` }] };
    }
  }
  if (request.params.name === "payload_get_page") {
    const { slug, id } = request.params.arguments as any;
    try {
      if (id) {
        const res = await payloadClient.get(`/pages/${id}`);
        return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
      } else if (slug) {
        const res = await payloadClient.get('/pages', { params: { where: { slug: { equals: slug } }, limit: 1 } });
        return { content: [{ type: "text", text: JSON.stringify(res.data.docs[0] || {}, null, 2) }] };
      }
      return { isError: true, content: [{ type: "text", text: "Error: must provide slug or id" }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${e.response?.data?.errors?.[0]?.message || e.message}` }] };
    }
  }
  if (request.params.name === "payload_create_page") {
    const { title, slug, data = {} } = request.params.arguments as any;
    try {
      const payload = { title, slug, _status: 'draft', ...data };
      const res = await payloadClient.post('/pages', payload);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_update_page") {
    const { id, data } = request.params.arguments as any;
    try {
      const res = await payloadClient.patch(`/pages/${id}`, data);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_delete_page") {
    const { id } = request.params.arguments as any;
    try {
      const res = await payloadClient.delete(`/pages/${id}`);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_list_posts") {
    const { limit = 10 } = request.params.arguments as any;
    try {
      const res = await payloadClient.get('/posts', { params: { limit, sort: '-createdAt' } });
      return { content: [{ type: "text", text: JSON.stringify(res.data.docs, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${e.response?.data?.errors?.[0]?.message || e.message}` }] };
    }
  }
  if (request.params.name === "payload_get_post") {
    const { slug, id } = request.params.arguments as any;
    try {
      if (id) {
        const res = await payloadClient.get(`/posts/${id}`);
        return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
      } else if (slug) {
        const res = await payloadClient.get('/posts', { params: { where: { slug: { equals: slug } }, limit: 1 } });
        return { content: [{ type: "text", text: JSON.stringify(res.data.docs[0] || {}, null, 2) }] };
      }
      return { isError: true, content: [{ type: "text", text: "Error: must provide slug or id" }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${e.response?.data?.errors?.[0]?.message || e.message}` }] };
    }
  }
  if (request.params.name === "payload_create_post") {
    const { title, slug, data = {} } = request.params.arguments as any;
    try {
      const payload = { title, slug, _status: 'draft', ...data };
      const res = await payloadClient.post('/posts', payload);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_update_post") {
    const { id, data } = request.params.arguments as any;
    try {
      const res = await payloadClient.patch(`/posts/${id}`, data);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  if (request.params.name === "payload_delete_post") {
    const { id } = request.params.arguments as any;
    try {
      const res = await payloadClient.delete(`/posts/${id}`);
      return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
    } catch (e: any) {
      return { isError: true, content: [{ type: "text", text: `Error: ${JSON.stringify(e.response?.data) || e.message}` }] };
    }
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});
async function run() {
  const isStdio = process.argv.includes('--stdio');
  if (isStdio) {
    const { StdioServerTransport } = await import('@modelcontextprotocol/sdk/server/stdio.js');
    const transport = new StdioServerTransport();
    await server.connect(transport);
    console.error('KLZ Payload MCP server is running on stdio');
  } else {
    const app = express();
    const transports = new Map<string, SSEServerTransport>();
    app.use((req, _res, next) => {
      console.error(`${req.method} ${req.url}`);
      next();
    });
    app.get('/sse', async (req: Request, res: Response) => {
      const sessionId = crypto.randomUUID();
      console.error(`New SSE connection: ${sessionId}`);
      const transport = new SSEServerTransport(`/message/${sessionId}`, res);
      transports.set(sessionId, transport);
      req.on('close', () => {
        console.error(`SSE connection closed: ${sessionId}`);
        transports.delete(sessionId);
      });
      await server.connect(transport);
    });
    app.post('/message/:sessionId', async (req: Request, res: Response) => {
      const sessionId = req.params.sessionId;
      const transport = transports.get(sessionId as string);
      if (!transport) {
        console.error(`No transport found for session: ${sessionId}`);
        res.status(400).send('No active SSE connection for this session');
        return;
      }
      await transport.handlePostMessage(req, res);
    });
    const PORT = process.env.KLZ_PAYLOAD_MCP_PORT || 3006;
    app.listen(PORT, () => {
      console.error(`KLZ Payload MCP server running on http://localhost:${PORT}/sse`);
    });
  }
}
run().catch((err) => {
  console.error("Fatal error:", err);
  process.exit(1);
});
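The SSE branch above hinges on one invariant: every live stream owns an entry in the `transports` map keyed by its session id, and a POST to `/message/:sessionId` is only routed while that entry exists. The following self-contained sketch isolates that lifecycle; `FakeTransport` is a hypothetical stand-in for `SSEServerTransport`, used only to illustrate the register/route/cleanup flow.

```typescript
import { randomUUID } from "crypto";

// Stand-in for SSEServerTransport: records the messages routed to it.
interface FakeTransport {
  sessionId: string;
  handled: string[];
}

const transports = new Map<string, FakeTransport>();

// Mirrors the GET /sse handler: allocate an id and register the transport.
function openSession(): string {
  const sessionId = randomUUID();
  transports.set(sessionId, { sessionId, handled: [] });
  return sessionId;
}

// Mirrors POST /message/:sessionId: route only to a live session.
function postMessage(sessionId: string, body: string): boolean {
  const transport = transports.get(sessionId);
  if (!transport) return false; // maps to the 400 response above
  transport.handled.push(body);
  return true;
}

// Mirrors req.on('close', ...): drop the entry so later POSTs fail fast.
function closeSession(sessionId: string): void {
  transports.delete(sessionId);
}

const id = openSession();
console.log(postMessage(id, "ping")); // true
closeSession(id);
console.log(postMessage(id, "ping")); // false: no active SSE connection
```

The map also makes the failure mode explicit: a client that reconnects gets a fresh session id, so a POST against the old id is rejected rather than silently buffered.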

View File

@@ -0,0 +1,13 @@
import { config } from 'dotenv';
import { resolve } from 'path';
import { fileURLToPath } from 'url';

const __dirname = fileURLToPath(new URL('.', import.meta.url));

config({ quiet: true, path: resolve(__dirname, '../../../.env.local') });
config({ quiet: true, path: resolve(__dirname, '../../../.env') });

import('./index.js').catch(err => {
  console.error('Failed to start KLZ Payload MCP Server:', err);
  process.exit(1);
});
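The two `config()` calls rely on dotenv's default first-load-wins behavior: a variable that is already set is never overwritten, so loading `.env.local` before `.env` gives the local file precedence and the plain `.env` only fills the gaps. A dependency-free sketch of that precedence rule:

```typescript
// First-load-wins merge, as dotenv's config() behaves by default:
// a key already present in the environment is never overwritten.
type Env = Record<string, string>;

function applyDotenvStyle(env: Env, parsed: Env): Env {
  for (const [key, value] of Object.entries(parsed)) {
    if (!(key in env)) env[key] = value; // existing keys are left alone
  }
  return env;
}

const env: Env = {};
applyDotenvStyle(env, { DB_URL: "postgres://localhost/dev" }); // .env.local
applyDotenvStyle(env, { DB_URL: "postgres://prod", PORT: "3000" }); // .env

console.log(env.DB_URL); // postgres://localhost/dev — .env.local won
console.log(env.PORT); // 3000 — filled in from the fallback .env
```

This is why the load order in `start.ts` matters: swapping the two `config()` calls would make `.env` shadow local credentials.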

View File

@@ -0,0 +1,16 @@
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": [
    "src/**/*"
  ]
}

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/mail",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "private": false,
   "publishConfig": {
     "access": "public",

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/meme-generator",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "private": false,
   "type": "module",
   "main": "./dist/index.js",

View File

@@ -0,0 +1,18 @@
FROM node:20-bookworm-slim AS builder
WORKDIR /app
COPY package.json ./
RUN corepack enable pnpm && pnpm install --ignore-workspace
RUN for dir in $(find /app/node_modules -type d -name "sharp" | grep "node_modules/sharp$"); do \
      echo "module.exports = {};" > "$dir/lib/index.js" || true; \
    done
COPY tsconfig.json ./
COPY src ./src
RUN pnpm build
FROM node:20-bookworm-slim
WORKDIR /app
COPY --from=builder /app/package.json ./
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
ENTRYPOINT ["node", "dist/start.js"]

View File

@@ -1,12 +1,12 @@
 {
   "name": "@mintel/memory-mcp",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "description": "Local Qdrant-based Memory MCP server",
   "main": "dist/index.js",
   "type": "module",
   "scripts": {
     "build": "tsc",
-    "start": "node dist/index.js",
+    "start": "node dist/start.js",
     "dev": "tsx watch src/index.ts",
     "test:unit": "vitest run"
   },
@@ -14,12 +14,16 @@
     "@modelcontextprotocol/sdk": "^1.5.0",
     "@qdrant/js-client-rest": "^1.12.0",
     "@xenova/transformers": "^2.17.2",
+    "onnxruntime-node": "^1.14.0",
+    "dotenv": "^17.3.1",
+    "express": "^5.2.1",
     "zod": "^3.23.8"
   },
   "devDependencies": {
-    "typescript": "^5.5.3",
+    "@types/express": "^5.0.6",
     "@types/node": "^20.14.10",
     "tsx": "^4.19.1",
+    "typescript": "^5.5.3",
     "vitest": "^2.1.3"
   }
 }

View File

@@ -1,5 +1,7 @@
 import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
-import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
+import { SSEServerTransport } from '@modelcontextprotocol/sdk/server/sse.js';
+import express from 'express';
+import crypto from 'crypto';
 import { z } from 'zod';
 import { QdrantMemoryService } from './qdrant.js';
@@ -11,14 +13,6 @@ async function main() {
   const qdrantService = new QdrantMemoryService(process.env.QDRANT_URL || 'http://localhost:6333');

-  // Initialize embedding model and Qdrant connection
-  try {
-    await qdrantService.initialize();
-  } catch (e) {
-    console.error('Failed to initialize local dependencies. Exiting.');
-    process.exit(1);
-  }

   server.tool(
     'store_memory',
     'Store a new piece of knowledge/memory into the vector database. Use this to remember architectural decisions, preferences, aliases, etc.',
@@ -67,12 +61,70 @@
     }
   );

-  const transport = new StdioServerTransport();
-  await server.connect(transport);
-  console.error('Memory MCP server is running and ready to accept connections over stdio.');
+  const isStdio = process.argv.includes('--stdio');
+  if (isStdio) {
+    // Connect Stdio FIRST to avoid handshake timeouts while loading model
+    const { StdioServerTransport } = await import('@modelcontextprotocol/sdk/server/stdio.js');
+    const transport = new StdioServerTransport();
+    await server.connect(transport);
+    console.error('Memory MCP server is running on stdio');
+    // Initialize dependency after connection
+    try {
+      await qdrantService.initialize();
+    } catch (e) {
+      console.error('Failed to initialize local dependencies:', e);
+    }
+  } else {
+    const app = express();
+    const transports = new Map<string, SSEServerTransport>();
+    app.use((req, _res, next) => {
+      console.error(`${req.method} ${req.url}`);
+      next();
+    });
+    app.get('/sse', async (req, res) => {
+      const sessionId = crypto.randomUUID();
+      console.error(`New SSE connection: ${sessionId}`);
+      const transport = new SSEServerTransport(`/message/${sessionId}`, res);
+      transports.set(sessionId, transport);
+      req.on('close', () => {
+        console.error(`SSE connection closed: ${sessionId}`);
+        transports.delete(sessionId);
+      });
+      await server.connect(transport);
+    });
+    app.post('/message/:sessionId', async (req, res) => {
+      const { sessionId } = req.params;
+      const transport = transports.get(sessionId as string);
+      if (!transport) {
+        console.error(`No transport found for session: ${sessionId}`);
+        res.status(400).send('No active SSE connection for this session');
+        return;
+      }
+      await transport.handlePostMessage(req, res);
+    });
+    const PORT = process.env.MEMORY_MCP_PORT || 3002;
+    app.listen(PORT, async () => {
+      console.error(`Memory MCP server running on http://localhost:${PORT}/sse`);
+      // Initialize dependencies in SSE mode on startup
+      try {
+        await qdrantService.initialize();
+      } catch (e) {
+        console.error('Failed to initialize local dependencies:', e);
+      }
+    });
+  }
 }

 main().catch((error) => {
-  console.error('Fatal error in main():', error);
+  console.error('Fatal error:', error);
   process.exit(1);
 });

View File

@@ -0,0 +1,16 @@
import { config } from 'dotenv';
import { resolve } from 'path';
import { fileURLToPath } from 'url';

const __dirname = fileURLToPath(new URL('.', import.meta.url));

// Try to load .env.local first (usually contains credentials)
config({ quiet: true, path: resolve(__dirname, '../../../.env.local') });
// Fall back to .env (contains defaults)
config({ quiet: true, path: resolve(__dirname, '../../../.env') });

// Now boot the compiled MCP index
import('./index.js').catch(err => {
  console.error('Failed to start MCP Server:', err);
  process.exit(1);
});

View File

@@ -8,6 +8,9 @@ import path from "node:path";
 export const baseNextConfig = {
   output: "standalone",
   turbopack: {},
+  serverActions: {
+    allowedOrigins: ["*.klz-cables.com", "*.branch.klz-cables.com", "localhost:3000", "*.mintel.me"],
+  },
   images: {
     dangerouslyAllowSVG: true,
     contentDispositionType: "attachment",
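How a request's host is checked against wildcard entries such as `*.klz-cables.com` is framework-specific; the sketch below is a simplified single-label matcher that illustrates the intent of the `allowedOrigins` list above, not Next.js's actual implementation (whether `*` may span multiple labels depends on its matcher).

```typescript
// Simplified origin-allowlist check: exact entries match literally,
// a leading "*" matches exactly one hostname label (assumption of this
// sketch — the real framework matcher may be more permissive).
function matchesAllowedOrigin(host: string, patterns: string[]): boolean {
  return patterns.some((pattern) => {
    if (!pattern.includes("*")) return host === pattern;
    // "*.klz-cables.com" -> suffix ".klz-cables.com"
    const suffix = pattern.slice(pattern.indexOf("*") + 1);
    if (!host.endsWith(suffix)) return false;
    const label = host.slice(0, host.length - suffix.length);
    return label.length > 0 && !label.includes(".");
  });
}

console.log(matchesAllowedOrigin("app.klz-cables.com", ["*.klz-cables.com"])); // true
console.log(matchesAllowedOrigin("localhost:3000", ["localhost:3000"])); // true
console.log(matchesAllowedOrigin("evil.com", ["*.klz-cables.com"])); // false
```

Under this single-label reading, a nested host like a branch deployment would need its own entry, which is consistent with the config listing both `*.klz-cables.com` and `*.branch.klz-cables.com`.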

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/next-config",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/next-feedback",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/next-observability",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/next-utils",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/observability",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/page-audit",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "description": "AI-powered website IST-analysis using DataForSEO and Gemini",
   "type": "module",
   "main": "./dist/index.js",

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/payload-ai",
-  "version": "1.9.15",
+  "version": "1.9.17",
   "description": "Reusable Payload CMS AI Extensions",
   "type": "module",
   "scripts": {
@@ -11,12 +11,13 @@
   "types": "./dist/index.d.ts",
   "exports": {
     ".": "./dist/index.js",
-    "./components/*": "./dist/components/*",
-    "./actions/*": "./dist/actions/*",
-    "./globals/*": "./dist/globals/*",
-    "./endpoints/*": "./dist/endpoints/*",
-    "./utils/*": "./dist/utils/*",
-    "./tools/*": "./dist/tools/*"
+    "./components/FieldGenerators/*": "./dist/components/FieldGenerators/*.js",
+    "./components/*": "./dist/components/*.js",
+    "./actions/*": "./dist/actions/*.js",
+    "./globals/*": "./dist/globals/*.js",
+    "./endpoints/*": "./dist/endpoints/*.js",
+    "./utils/*": "./dist/utils/*.js",
+    "./tools/*": "./dist/tools/*.js"
   },
   "peerDependencies": {
     "@payloadcms/next": ">=3.0.0",

View File

@@ -1,76 +1,84 @@
-import type { Config, Plugin } from 'payload'
-import { AIChatPermissionsCollection } from './collections/AIChatPermissions.js'
-import type { PayloadChatPluginConfig } from './types.js'
-import { optimizePostEndpoint } from './endpoints/optimizeEndpoint.js'
-import { generateSlugEndpoint, generateThumbnailEndpoint, generateSingleFieldEndpoint } from './endpoints/generateEndpoints.js'
+import type { Plugin } from "payload";
+import { AIChatPermissionsCollection } from "./collections/AIChatPermissions.js";
+import type { PayloadChatPluginConfig } from "./types.js";
+import { optimizePostEndpoint } from "./endpoints/optimizeEndpoint.js";
+import {
+  generateSlugEndpoint,
+  generateThumbnailEndpoint,
+  generateSingleFieldEndpoint,
+} from "./endpoints/generateEndpoints.js";

 export const payloadChatPlugin =
   (pluginOptions: PayloadChatPluginConfig): Plugin =>
   (incomingConfig) => {
-    let config = { ...incomingConfig }
+    const config = { ...incomingConfig };

     // If disabled, return config untouched
     if (pluginOptions.enabled === false) {
-      return config
+      return config;
     }

     // 1. Inject the Permissions Collection into the Schema
-    const existingCollections = config.collections || []
-    const mcpServers = pluginOptions.mcpServers || []
+    const existingCollections = config.collections || [];
+    const mcpServers = pluginOptions.mcpServers || [];

     // Dynamically populate the select options for Collections and MCP Servers
-    const permissionCollection = { ...AIChatPermissionsCollection }
-    const collectionField = permissionCollection.fields.find(f => 'name' in f && f.name === 'allowedCollections') as any
+    const permissionCollection = { ...AIChatPermissionsCollection };
+    const collectionField = permissionCollection.fields.find(
+      (f) => "name" in f && f.name === "allowedCollections",
+    ) as any;
     if (collectionField) {
-      collectionField.options = existingCollections.map(c => ({
+      collectionField.options = existingCollections.map((c) => ({
         label: c.labels?.singular || c.slug,
-        value: c.slug
-      }))
+        value: c.slug,
+      }));
     }
-    const mcpField = permissionCollection.fields.find(f => 'name' in f && f.name === 'allowedMcpServers') as any
+    const mcpField = permissionCollection.fields.find(
+      (f) => "name" in f && f.name === "allowedMcpServers",
+    ) as any;
     if (mcpField) {
-      mcpField.options = mcpServers.map(s => ({
+      mcpField.options = mcpServers.map((s) => ({
         label: s.name,
-        value: s.name
-      }))
+        value: s.name,
+      }));
     }

-    config.collections = [...existingCollections, permissionCollection]
+    config.collections = [...existingCollections, permissionCollection];

     // 2. Register Custom API Endpoint for the AI Chat
     config.endpoints = [
       ...(config.endpoints || []),
       {
-        path: '/api/mcp-chat',
-        method: 'post',
-        handler: async (req) => {
+        path: "/api/mcp-chat",
+        method: "post",
+        handler: async (_req) => {
           // Fallback simple handler while developing endpoint logic
-          return Response.json({ message: "Chat endpoint active" })
+          return Response.json({ message: "Chat endpoint active" });
         },
       },
       {
-        path: '/api/mintel-ai/optimize',
-        method: 'post',
+        path: "/api/mintel-ai/optimize",
+        method: "post",
         handler: optimizePostEndpoint,
       },
       {
-        path: '/api/mintel-ai/generate-slug',
-        method: 'post',
+        path: "/api/mintel-ai/generate-slug",
+        method: "post",
         handler: generateSlugEndpoint,
       },
       {
-        path: '/api/mintel-ai/generate-thumbnail',
-        method: 'post',
+        path: "/api/mintel-ai/generate-thumbnail",
+        method: "post",
         handler: generateThumbnailEndpoint,
       },
       {
-        path: '/api/mintel-ai/generate-single-field',
-        method: 'post',
+        path: "/api/mintel-ai/generate-single-field",
+        method: "post",
         handler: generateSingleFieldEndpoint,
       },
-    ]
+    ];

     // 3. Inject Chat React Component into Admin UI
     if (pluginOptions.renderChatBubble !== false) {
@@ -80,11 +88,11 @@ export const payloadChatPlugin =
           ...(config.admin?.components || {}),
           providers: [
             ...(config.admin?.components?.providers || []),
-            '@mintel/payload-ai/components/ChatWindow#ChatWindowProvider',
+            "@mintel/payload-ai/components/ChatWindow/index#ChatWindowProvider",
           ],
         },
-      }
+      };
     }

-    return config
-  }
+    return config;
+  };

View File

@@ -1,25 +1,53 @@
-'use client'
+"use client";

-import React, { useState } from 'react'
-import { useChat } from '@ai-sdk/react'
-import './ChatWindow.scss'
+import React, { useState, useEffect } from "react";
+import { useChat } from "@ai-sdk/react";

-export const ChatWindowProvider: React.FC<{ children: React.ReactNode }> = ({ children }) => {
+export const ChatWindowProvider: React.FC<{ children: React.ReactNode }> = ({
+  children,
+}) => {
   return (
     <>
       {children}
       <ChatWindow />
     </>
-  )
-}
+  );
+};

 const ChatWindow: React.FC = () => {
-  const [isOpen, setIsOpen] = useState(false)
-  // @ts-ignore - AI hook version mismatch between core and react packages
-  const { messages, input, handleInputChange, handleSubmit, setMessages } = useChat({
-    api: '/api/mcp-chat',
-    initialMessages: []
-  } as any)
+  const [isOpen, setIsOpen] = useState(false);
+  const [pageContext, setPageContext] = useState<any>({ url: "" });
+
+  useEffect(() => {
+    if (typeof window !== "undefined") {
+      const path = window.location.pathname;
+      let collectionSlug = null;
+      let id = null;
+      // Payload admin URLs are usually /admin/collections/:slug/:id
+      const match = path.match(/\/collections\/([^/]+)(?:\/([^/]+))?/);
+      if (match) {
+        collectionSlug = match[1];
+        if (match[2] && match[2] !== "create") {
+          id = match[2];
+        }
+      }
+      setPageContext({
+        url: window.location.href,
+        title: document.title,
+        collectionSlug,
+        id,
+      });
+    }
+  }, [isOpen]); // Refresh context when chat is opened
+
+  const { messages, input, handleInputChange, handleSubmit } = useChat({
+    api: "/api/mcp-chat",
+    initialMessages: [],
+    body: {
+      pageContext,
+    },
+  } as any) as any;

   // Basic implementation to toggle chat window and submit messages
   return (
@@ -28,81 +56,101 @@ const ChatWindow: React.FC = () => {
         className="payload-mcp-chat-toggle"
         onClick={() => setIsOpen(!isOpen)}
         style={{
-          position: 'fixed',
-          bottom: '20px',
-          right: '20px',
+          position: "fixed",
+          bottom: "20px",
+          right: "20px",
           zIndex: 9999,
-          padding: '12px 24px',
-          backgroundColor: '#000',
-          color: '#fff',
-          borderRadius: '8px',
-          border: 'none',
-          cursor: 'pointer',
-          fontWeight: 'bold'
+          padding: "12px 24px",
+          backgroundColor: "#000",
+          color: "#fff",
+          borderRadius: "8px",
+          border: "none",
+          cursor: "pointer",
+          fontWeight: "bold",
         }}
       >
-        {isOpen ? 'Close AI Chat' : 'Ask AI'}
+        {isOpen ? "Close AI Chat" : "Ask AI"}
       </button>
       {isOpen && (
         <div
           className="payload-mcp-chat-window"
           style={{
-            position: 'fixed',
-            bottom: '80px',
-            right: '20px',
-            width: '400px',
-            height: '600px',
-            backgroundColor: '#fff',
-            border: '1px solid #eaeaea',
-            borderRadius: '12px',
+            position: "fixed",
+            bottom: "80px",
+            right: "20px",
+            width: "450px",
+            height: "650px",
+            backgroundColor: "#fff",
+            border: "1px solid #eaeaea",
+            borderRadius: "12px",
             zIndex: 9999,
-            display: 'flex',
-            flexDirection: 'column',
-            boxShadow: '0 10px 40px rgba(0,0,0,0.1)'
+            display: "flex",
+            flexDirection: "column",
+            boxShadow: "0 10px 40px rgba(0,0,0,0.1)",
           }}
         >
-          <div className="chat-header" style={{ padding: '16px', borderBottom: '1px solid #eaeaea', backgroundColor: '#f9f9f9', borderTopLeftRadius: '12px', borderTopRightRadius: '12px' }}>
-            <h3 style={{ margin: 0, fontSize: '16px' }}>Payload MCP Chat</h3>
+          <div
+            className="chat-header"
+            style={{
+              padding: "16px",
+              borderBottom: "1px solid #eaeaea",
+              backgroundColor: "#f9f9f9",
+              borderTopLeftRadius: "12px",
+              borderTopRightRadius: "12px",
+            }}
+          >
+            <h3 style={{ margin: 0, fontSize: "16px" }}>Payload MCP Chat</h3>
           </div>
-          <div className="chat-messages" style={{ flex: 1, padding: '16px', overflowY: 'auto' }}>
+          <div
+            className="chat-messages"
+            style={{ flex: 1, padding: "16px", overflowY: "auto" }}
+          >
             {messages.map((m: any) => (
-              <div key={m.id} style={{
-                marginBottom: '12px',
-                textAlign: m.role === 'user' ? 'right' : 'left'
-              }}>
-                <div style={{
-                  display: 'inline-block',
-                  padding: '8px 12px',
-                  borderRadius: '8px',
-                  backgroundColor: m.role === 'user' ? '#000' : '#f0f0f0',
-                  color: m.role === 'user' ? '#fff' : '#000',
-                  maxWidth: '80%'
-                }}>
-                  {m.role === 'user' ? 'G: ' : 'AI: '}
+              <div
+                key={m.id}
+                style={{
+                  marginBottom: "12px",
+                  textAlign: m.role === "user" ? "right" : "left",
+                }}
+              >
+                <div
+                  style={{
+                    display: "inline-block",
+                    padding: "8px 12px",
+                    borderRadius: "8px",
+                    backgroundColor: m.role === "user" ? "#000" : "#f0f0f0",
+                    color: m.role === "user" ? "#fff" : "#000",
+                    maxWidth: "80%",
+                  }}
+                >
+                  {m.role === "user" ? "G: " : "AI: "}
                   {m.content}
                 </div>
               </div>
             ))}
           </div>
-          <form onSubmit={handleSubmit} style={{ padding: '16px', borderTop: '1px solid #eaeaea' }}>
+          <form
+            onSubmit={handleSubmit}
+            style={{ padding: "16px", borderTop: "1px solid #eaeaea" }}
+          >
             <input
               value={input}
               placeholder="Ask me anything or use /commands..."
               onChange={handleInputChange}
               style={{
-                width: '100%',
-                padding: '12px',
-                borderRadius: '8px',
-                border: '1px solid #eaeaea',
-                boxSizing: 'border-box'
+                width: "100%",
+                padding: "12px",
+                borderRadius: "8px",
+                border: "1px solid #eaeaea",
+                boxSizing: "border-box",
               }}
             />
           </form>
         </div>
       )}
     </div>
-  )
-}
+  );
+};

View File

@@ -1,75 +1,142 @@
-import { streamText } from 'ai'
-import { createOpenAI } from '@ai-sdk/openai'
-import { generatePayloadLocalTools } from '../tools/payloadLocal.js'
-import { createMcpTools } from '../tools/mcpAdapter.js'
-import { generateMemoryTools } from '../tools/memoryDb.js'
-import type { PayloadRequest } from 'payload'
+import { streamText } from "ai";
+import { createOpenAI } from "@ai-sdk/openai";
+import { generatePayloadLocalTools } from "../tools/payloadLocal.js";
+import { createMcpTools } from "../tools/mcpAdapter.js";
+import { generateMemoryTools } from "../tools/memoryDb.js";
+import type { PayloadRequest } from "payload";

 const openrouter = createOpenAI({
-  baseURL: 'https://openrouter.ai/api/v1',
-  apiKey: process.env.OPENROUTER_API_KEY || 'dummy_key',
-})
+  baseURL: "https://openrouter.ai/api/v1",
+  apiKey: process.env.OPENROUTER_API_KEY || "dummy_key",
+});

 export const handleMcpChat = async (req: PayloadRequest) => {
   if (!req.user) {
-    return Response.json({ error: 'Unauthorized. You must be logged in to use AI Chat.' }, { status: 401 })
+    return Response.json(
+      { error: "Unauthorized. You must be logged in to use AI Chat." },
+      { status: 401 },
+    );
   }

-  const { messages } = (await req.json?.() || { messages: [] }) as { messages: any[] }
+  const { messages, pageContext } = ((await req.json?.()) || {
+    messages: [],
+  }) as { messages: any[]; pageContext?: any };

   // 1. Check AI Permissions for req.user
-  // In a real implementation this looks up the global or collection for permissions
-  const allowedCollections = ['users'] // Stub
-  let activeTools: Record<string, any> = {}
+  // Look up the collection for permissions
+  const permissionsQuery = await req.payload.find({
+    collection: "ai-chat-permissions" as any,
+    where: {
+      or: [
+        { targetUser: { equals: req.user.id } },
+        { targetRole: { equals: req.user.role || "admin" } },
+      ],
+    },
+    limit: 10,
+  });
+
+  const allowedCollections = new Set<string>();
+  const allowedMcpServers = new Set<string>();
+  for (const perm of permissionsQuery.docs) {
+    if (perm.allowedCollections) {
+      perm.allowedCollections.forEach((c: string) => allowedCollections.add(c));
+    }
+    if (perm.allowedMcpServers) {
+      perm.allowedMcpServers.forEach((s: string) => allowedMcpServers.add(s));
+    }
+  }
+
+  let accessCollections = Array.from(allowedCollections);
+  if (accessCollections.length === 0) {
+    // Fallback or demo config if not configured yet
+    accessCollections = [
+      "users",
+      "pages",
+      "posts",
+      "products",
+      "leads",
+      "media",
+    ];
+  }
+
+  let activeTools: Record<string, any> = {};

   // 2. Generate Payload Local Tools
-  if (allowedCollections.length > 0) {
-    const payloadTools = generatePayloadLocalTools(req.payload, req, allowedCollections)
-    activeTools = { ...activeTools, ...payloadTools }
+  if (accessCollections.length > 0) {
+    const payloadTools = generatePayloadLocalTools(
+      req.payload,
+      req,
+      accessCollections,
+    );
+    activeTools = { ...activeTools, ...payloadTools };
   }

   // 3. Connect External MCPs
-  const allowedMcpServers: string[] = [] // Stub
-  if (allowedMcpServers.includes('gitea')) {
+  if (Array.from(allowedMcpServers).includes("gitea")) {
     try {
       const { tools: giteaTools } = await createMcpTools({
-        name: 'gitea',
-        command: 'npx',
-        args: ['-y', '@modelcontextprotocol/server-gitea', '--url', 'https://git.mintel.int', '--token', process.env.GITEA_TOKEN || '']
-      })
-      activeTools = { ...activeTools, ...giteaTools }
+        name: "gitea",
+        command: "npx",
+        args: [
+          "-y",
+          "@modelcontextprotocol/server-gitea",
+          "--url",
+          "https://git.mintel.int",
+          "--token",
+          process.env.GITEA_TOKEN || "",
+        ],
+      });
+      activeTools = { ...activeTools, ...giteaTools };
     } catch (e) {
-      console.error('Failed to connect to Gitea MCP', e)
+      console.error("Failed to connect to Gitea MCP", e);
     }
   }

   // 4. Inject Memory Database Tools
   // We provide the user ID so memory is partitioned per user
-  const memoryTools = generateMemoryTools(req.user.id)
-  activeTools = { ...activeTools, ...memoryTools }
+  const memoryTools = generateMemoryTools(req.user.id);
+  activeTools = { ...activeTools, ...memoryTools };

   // 5. Build prompt to ensure it asks before saving
   const memorySystemPrompt = `
 You have access to a long-term vector memory database (Qdrant).
 If the user says "speicher das", "merk dir das", "vergiss das nicht" etc., you MUST use the save_memory tool.
 If the user shares important context but doesn't explicitly ask you to remember it, you should ask "Soll ich mir das für die Zukunft merken?" before saving it. Do not ask for trivial things.
If the user shares important context but doesn't explicitly ask you to remember it, you should ask "Soll ich mir das für die Zukunft merken?" before saving it. Do not ask for trivial things. If the user shares important context but doesn't explicitly ask you to remember it, you should ask "Soll ich mir das für die Zukunft merken?" before saving it. Do not ask for trivial things.
`;
const contextContextStr = pageContext
? `
Current User Context:
URL: ${pageContext.url || "Unknown"}
Title: ${pageContext.title || "Unknown"}
Collection: ${pageContext.collectionSlug || "None"}
Document ID: ${pageContext.id || "None"}
You can use this to understand what the user is currently looking at.
` `
: "";
try { try {
const result = streamText({ const result = streamText({
// @ts-ignore - AI SDK type mismatch model: openrouter("google/gemini-3.0-flash"),
model: openrouter('google/gemini-3.0-flash'),
messages, messages,
tools: activeTools, tools: activeTools,
system: `You are a helpful Payload CMS MCP Assistant orchestrating the local Mintel ecosystem. // @ts-expect-error - AI SDK type mismatch with maxSteps
maxSteps: 10,
system: `You are a helpful Payload CMS Agent orchestrating the local Mintel ecosystem.
You only have access to tools explicitly granted by the Admin. You only have access to tools explicitly granted by the Admin.
You cannot do anything outside these tools. Always explain what you are doing. You can completely control Payload CMS (read, create, update, delete documents).
${memorySystemPrompt}` If you need more details to fulfill a request (e.g. creating a blog post), you can ask the user.
}) ${contextContextStr}
${memorySystemPrompt}`,
});
return result.toTextStreamResponse() return result.toTextStreamResponse();
} catch (error) { } catch (error) {
console.error("AI Error:", error) console.error("AI Error:", error);
return Response.json({ error: 'Failed to process AI request' }, { status: 500 }) return Response.json(
{ error: "Failed to process AI request" },
{ status: 500 },
);
} }
} };
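The permission-resolution step above unions `allowedCollections` across every matching permission doc and falls back to a demo config when nothing is granted. That logic can be isolated as a pure helper for testing; a minimal sketch, where `resolveAccessCollections` is an illustrative name, not part of the plugin:

```typescript
// Illustrative helper mirroring the union + fallback logic in the chat endpoint.
type PermissionDoc = {
  allowedCollections?: string[];
  allowedMcpServers?: string[];
};

const DEMO_FALLBACK = ["users", "pages", "posts", "products", "leads", "media"];

function resolveAccessCollections(
  docs: PermissionDoc[],
  fallback: string[] = DEMO_FALLBACK,
): string[] {
  const allowed = new Set<string>();
  for (const perm of docs) {
    perm.allowedCollections?.forEach((c) => allowed.add(c));
  }
  // If no permission doc grants anything, fall back to the demo config.
  return allowed.size > 0 ? Array.from(allowed) : fallback;
}

// Duplicates across docs collapse; an empty grant list yields the fallback.
console.log(
  resolveAccessCollections([
    { allowedCollections: ["pages", "posts"] },
    { allowedCollections: ["posts"] },
  ]),
); // → [ 'pages', 'posts' ]
```

Keeping this pure makes the fallback behavior trivially unit-testable, independent of `req.payload.find`.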


@@ -28,7 +28,13 @@ async function getOrchestrator() {
export const generateSlugEndpoint = async (req: PayloadRequest) => {
  try {
    let body: any = {};
    try {
      if (req.body) body = (await req.json?.()) || {};
    } catch {
      /* ignore */
    }
    const { title, draftContent, oldSlug, instructions } = body;
    const orchestrator = await getOrchestrator();
    const newSlug = await orchestrator.generateSlug(
      draftContent,
@@ -50,11 +56,17 @@ export const generateSlugEndpoint = async (req: PayloadRequest) => {
  } catch (e: any) {
    return Response.json({ success: false, error: e.message }, { status: 500 });
  }
};
export const generateThumbnailEndpoint = async (req: PayloadRequest) => {
  try {
    let body: any = {};
    try {
      if (req.body) body = (await req.json?.()) || {};
    } catch {
      /* ignore */
    }
    const { draftContent, title, instructions } = body;

    const OPENROUTER_KEY =
      process.env.OPENROUTER_KEY || process.env.OPENROUTER_API_KEY;
    const REPLICATE_KEY = process.env.REPLICATE_API_KEY;
@@ -62,9 +74,16 @@ export const generateThumbnailEndpoint = async (req: PayloadRequest) => {
    if (!OPENROUTER_KEY) throw new Error("Missing OPENROUTER_API_KEY in .env");
    if (!REPLICATE_KEY) throw new Error("Missing REPLICATE_API_KEY in .env");

    const importDynamic = new Function(
      "modulePath",
      "return import(modulePath)",
    );
    const { AiBlogPostOrchestrator } = await importDynamic(
      "@mintel/content-engine",
    );
    const { ThumbnailGenerator } = await importDynamic(
      "@mintel/thumbnail-generator",
    );

    const orchestrator = new AiBlogPostOrchestrator({
      apiKey: OPENROUTER_KEY,
@@ -99,17 +118,29 @@ export const generateThumbnailEndpoint = async (req: PayloadRequest) => {
      },
    });

    await fs.unlink(tmpPath).catch(() => {});
    return Response.json({ success: true, mediaId: newMedia.id });
  } catch (e: any) {
    return Response.json({ success: false, error: e.message }, { status: 500 });
  }
};

export const generateSingleFieldEndpoint = async (req: PayloadRequest) => {
  try {
    let body: any = {};
    try {
      if (req.body) body = (await req.json?.()) || {};
    } catch {
      /* ignore */
    }
    const {
      documentTitle,
      documentContent,
      fieldName,
      fieldDescription,
      instructions,
    } = body;

    const OPENROUTER_KEY =
      process.env.OPENROUTER_KEY || process.env.OPENROUTER_API_KEY;
@@ -155,4 +186,4 @@ CRITICAL RULES:
  } catch (e: any) {
    return Response.json({ success: false, error: e.message }, { status: 500 });
  }
};
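Each endpoint above repeats the same tolerant body-parsing idiom (`let body = {}`, guard on `req.body`, swallow JSON parse errors). Under the assumption that the request object exposes an optional `json()` method as in these handlers, the pattern could be factored into one helper; `readJsonBody` is a hypothetical name for illustration, not an existing export:

```typescript
// Illustrative helper: tolerant JSON body parsing for Payload endpoint handlers.
// Assumes the request exposes an optional json() method, as the handlers above do.
async function readJsonBody<T extends object = Record<string, unknown>>(req: {
  body?: unknown;
  json?: () => Promise<unknown>;
}): Promise<Partial<T>> {
  try {
    if (req.body && req.json) {
      return ((await req.json()) || {}) as Partial<T>;
    }
  } catch {
    // Malformed or absent JSON: fall through to an empty body.
  }
  return {};
}
```

Each handler would then reduce to a single line, e.g. `const { title, draftContent } = await readJsonBody<{ title?: string; draftContent?: string }>(req);`.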


@@ -1,15 +1,33 @@
import { PayloadRequest } from "payload";
import { parseMarkdownToLexical } from "../utils/lexicalParser.js";

export const optimizePostEndpoint = async (req: PayloadRequest) => {
  try {
    let body: any = {};
    try {
      if (req.body) {
        // req.json() acts as a method in Next.js/Payload req wrapper
        body = (await req.json?.()) || {};
      }
    } catch (e) {
      // Ignore JSON parse error, body remains empty
    }
    const { draftContent, instructions } = body as {
      draftContent?: string;
      instructions?: string;
    };

    if (!draftContent) {
      return Response.json(
        { success: false, error: "Missing draftContent" },
        { status: 400 },
      );
    }

    const globalAiSettings = (await req.payload.findGlobal({
      slug: "ai-settings",
    })) as any;
    const customSources =
      globalAiSettings?.customSources?.map((s: any) => s.sourceName) || [];
@@ -18,12 +36,20 @@ export const optimizePostEndpoint = async (req: PayloadRequest) => {
    const REPLICATE_KEY = process.env.REPLICATE_API_KEY;
    if (!OPENROUTER_KEY) {
      return Response.json(
        { error: "OPENROUTER_KEY not found in environment." },
        { status: 500 },
      );
    }

    // Dynamically import to avoid bundling it into client components that might accidentally import this file
    const importDynamic = new Function(
      "modulePath",
      "return import(modulePath)",
    );
    const { AiBlogPostOrchestrator } = await importDynamic(
      "@mintel/content-engine",
    );

    const orchestrator = new AiBlogPostOrchestrator({
      apiKey: OPENROUTER_KEY,
@@ -47,7 +73,10 @@ export const optimizePostEndpoint = async (req: PayloadRequest) => {
    });

    if (!optimizedMarkdown || typeof optimizedMarkdown !== "string") {
      return Response.json(
        { error: "AI returned invalid markup." },
        { status: 500 },
      );
    }

    const blocks = parseMarkdownToLexical(optimizedMarkdown);
@@ -64,12 +93,16 @@ export const optimizePostEndpoint = async (req: PayloadRequest) => {
          direction: "ltr",
        },
      },
    });
  } catch (error: any) {
    console.error("Failed to optimize post in endpoint:", error);
    return Response.json(
      {
        success: false,
        error:
          error.message || "An unknown error occurred during optimization.",
      },
      { status: 500 },
    );
  }
};


@@ -1,39 +1,44 @@
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { tool } from "ai";
import { z } from "zod";

/**
 * Connects to an external MCP Server and maps its tools to Vercel AI SDK Tools.
 */
export async function createMcpTools(mcpConfig: {
  name: string;
  url?: string;
  command?: string;
  args?: string[];
}) {
  let transport;
  // Support both HTTP/SSE and STDIO transports
  if (mcpConfig.url) {
    transport = new SSEClientTransport(new URL(mcpConfig.url));
  } else if (mcpConfig.command) {
    transport = new StdioClientTransport({
      command: mcpConfig.command,
      args: mcpConfig.args || [],
    });
  } else {
    throw new Error("Invalid MCP config: Must provide either URL or Command.");
  }

  const client = new Client(
    { name: `payload-ai-client-${mcpConfig.name}`, version: "1.0.0" },
    { capabilities: {} },
  );
  await client.connect(transport);

  // Fetch available tools from the external MCP server
  const toolListResult = await client.listTools();
  const externalTools = toolListResult.tools || [];
  const aiSdkTools: Record<string, any> = {};

  // Map each external tool to a Vercel AI SDK Tool
  for (const extTool of externalTools) {
@@ -41,7 +46,7 @@ export async function createMcpTools(mcpConfig: { name: string, url?: string, co
    // Note: For a production ready adapter, you might need a more robust jsonSchemaToZod converter
    // or use AI SDK's new experimental generateSchema feature if available.
    // Here we use a generic `z.any()` as a fallback since AI SDK requires a Zod schema.
    const toolSchema = extTool.inputSchema as Record<string, any>;

    // We create a simplified parameter parser.
    // An ideal approach uses `jsonSchemaToZod` library or native AI SDK JSON schema support
@@ -49,17 +54,19 @@ export async function createMcpTools(mcpConfig: { name: string, url?: string, co
    aiSdkTools[`${mcpConfig.name}_${extTool.name}`] = tool({
      description: `[From ${mcpConfig.name}] ${extTool.description || extTool.name}`,
      parameters: z
        .any()
        .describe("JSON matching the original MCP input_schema"), // Simplify for prototype
      // @ts-expect-error - AI strict mode overload bug with implicit zod inferences
      execute: async (args: any) => {
        const result = await client.callTool({
          name: extTool.name,
          arguments: args,
        });
        return result;
      },
    });
  }

  return { tools: aiSdkTools, client };
}


@@ -1,37 +1,39 @@
import { tool } from "ai";
import { z } from "zod";
import { QdrantClient } from "@qdrant/js-client-rest";

// Qdrant initialization
// This requires the user to have Qdrant running and QDRANT_URL/QDRANT_API_KEY environment variables set
const qdrantClient = new QdrantClient({
  url: process.env.QDRANT_URL || "http://localhost:6333",
  apiKey: process.env.QDRANT_API_KEY,
});

const MEMORY_COLLECTION = "mintel_ai_memory";

// Ensure collection exists on load
async function initQdrant() {
  try {
    const res = await qdrantClient.getCollections();
    const exists = res.collections.find(
      (c: any) => c.name === MEMORY_COLLECTION,
    );
    if (!exists) {
      await qdrantClient.createCollection(MEMORY_COLLECTION, {
        vectors: {
          size: 1536, // typical embedding size, adjust based on the embedding model used
          distance: "Cosine",
        },
      });
      console.log(`Qdrant collection '${MEMORY_COLLECTION}' created.`);
    }
  } catch (error) {
    console.error("Failed to initialize Qdrant memory collection:", error);
  }
}

// Call init, but don't block
initQdrant();

/**
 * Returns memory tools for the AI SDK.
@@ -42,20 +44,34 @@ initQdrant()
export const generateMemoryTools = (userId: string | number) => {
  return {
    save_memory: tool({
      description:
        "Save an important preference, fact, or instruction about the user to long-term memory. Only use this when explicitly asked or when it is clearly a long-term preference.",
      parameters: z.object({
        fact: z.string().describe("The fact or instruction to remember."),
        category: z
          .string()
          .optional()
          .describe(
            'An optional category like "preference", "rule", or "project_detail".',
          ),
      }),
      // @ts-expect-error - AI SDK strict mode bug
      execute: async ({
        fact,
        category,
      }: {
        fact: string;
        category?: string;
      }) => {
        // In a real scenario, you MUST generate embeddings for the 'fact' string here
        // using OpenAI or another embedding provider before inserting into Qdrant.
        // const embedding = await generateEmbedding(fact)
        try {
          // Mock embedding payload for demonstration
          const mockEmbedding = new Array(1536)
            .fill(0)
            .map(() => Math.random());

          await qdrantClient.upsert(MEMORY_COLLECTION, {
            wait: true,
@@ -71,24 +87,33 @@ export const generateMemoryTools = (userId: string | number) => {
                },
              },
            ],
          });
          return {
            success: true,
            message: `Successfully remembered: "${fact}"`,
          };
        } catch (error) {
          console.error("Qdrant save error:", error);
          return {
            success: false,
            error: "Failed to save to memory database.",
          };
        }
      },
    }),

    search_memory: tool({
      description:
        "Search the user's long-term memory for past factual context, preferences, or rules.",
      parameters: z.object({
        query: z.string().describe("The search string to find in memory."),
      }),
      // @ts-expect-error - AI SDK strict mode bug
      execute: async ({ query }: { query: string }) => {
        // Generate embedding for query
        const mockQueryEmbedding = new Array(1536)
          .fill(0)
          .map(() => Math.random());
        try {
          const results = await qdrantClient.search(MEMORY_COLLECTION, {
@@ -97,19 +122,19 @@ export const generateMemoryTools = (userId: string | number) => {
            filter: {
              must: [
                {
                  key: "userId",
                  match: { value: String(userId) },
                },
              ],
            },
          });
          return results.map((r: any) => r.payload?.fact || "");
        } catch (error) {
          console.error("Qdrant search error:", error);
          return [];
        }
      },
    }),
  };
};
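As the comments above stress, the `Math.random()` mock vectors carry no semantic signal, so the similarity scores Qdrant computes between them are meaningless for retrieval: the collection is configured with `Cosine` distance, which scores pairs by the cosine of the angle between vectors. A small sketch of that scoring, to make concrete what a real embedding model would have to produce (semantically close texts mapping to high-cosine vectors):

```typescript
// Cosine similarity, the quantity behind Qdrant's "Cosine" distance scoring.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A vector is maximally similar to itself; orthogonal vectors score 0.
// With real embeddings, "speicher das" and "merk dir das" would land near
// each other, while the random mock vectors rank results arbitrarily.
const v = new Array(1536).fill(0).map(() => Math.random());
console.log(cosineSimilarity(v, v)); // → 1 (up to float rounding)
```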


@@ -1,30 +1,44 @@
import { tool } from "ai";
import { z } from "zod";
import type { Payload, PayloadRequest, User } from "payload";

export const generatePayloadLocalTools = (
  payload: Payload,
  req: PayloadRequest,
  allowedCollections: string[],
) => {
  const tools: Record<string, any> = {};

  for (const collectionSlug of allowedCollections) {
    const slugKey = collectionSlug.replace(/-/g, "_");

    // 1. Read (Find) Tool
    tools[`read_${slugKey}`] = tool({
      description: `Read/Find documents from the Payload CMS collection: ${collectionSlug}`,
      parameters: z.object({
        limit: z
          .number()
          .optional()
          .describe("Number of documents to return, max 100."),
        page: z.number().optional().describe("Page number for pagination."),
        // Simple string-based query for demo purposes. For a robust implementation,
        // we'd map this to Payload's where query logic using a structured Zod schema.
        query: z
          .string()
          .optional()
          .describe("Optional text to search within the collection."),
      }),
      // @ts-expect-error - AI SDK strict mode type inference bug
      execute: async ({
        limit = 10,
        page = 1,
        query,
      }: {
        limit?: number;
        page?: number;
        query?: string;
      }) => {
        const where = query ? { id: { equals: query } } : undefined; // Placeholder logic
        return await payload.find({
          collection: collectionSlug as any,
@@ -32,76 +46,92 @@ export const generatePayloadLocalTools = (
          page,
          where,
          req, // Crucial for passing the user context and respecting access control!
        });
      },
    });

    // 2. Read by ID Tool
    tools[`read_${slugKey}_by_id`] = tool({
      description: `Get a specific document by its ID from the ${collectionSlug} collection.`,
      parameters: z.object({
        id: z
          .union([z.string(), z.number()])
          .describe("The ID of the document."),
      }),
      // @ts-expect-error - AI SDK strict mode type inference bug
      execute: async ({ id }: { id: string | number }) => {
        return await payload.findByID({
          collection: collectionSlug as any,
          id,
          req, // Enforce access control
        });
      },
    });

    // 3. Create Tool
    tools[`create_${slugKey}`] = tool({
      description: `Create a new document in the ${collectionSlug} collection.`,
      parameters: z.object({
        data: z
          .record(z.any())
          .describe("A JSON object containing the data to insert."),
      }),
      // @ts-expect-error - AI SDK strict mode type inference bug
      execute: async ({ data }: { data: Record<string, any> }) => {
        return await payload.create({
          collection: collectionSlug as any,
          data,
          req, // Enforce access control
        });
      },
    });

    // 4. Update Tool
    tools[`update_${slugKey}`] = tool({
      description: `Update an existing document in the ${collectionSlug} collection.`,
      parameters: z.object({
        id: z
          .union([z.string(), z.number()])
          .describe("The ID of the document to update."),
        data: z
          .record(z.any())
          .describe("A JSON object containing the fields to update."),
      }),
      // @ts-expect-error - AI SDK strict mode type inference bug
      execute: async ({
        id,
        data,
      }: {
        id: string | number;
        data: Record<string, any>;
      }) => {
        return await payload.update({
          collection: collectionSlug as any,
          id,
          data,
          req, // Enforce access control
        });
      },
    });

    // 5. Delete Tool
    tools[`delete_${slugKey}`] = tool({
      description: `Delete a document from the ${collectionSlug} collection by ID.`,
      parameters: z.object({
        id: z
          .union([z.string(), z.number()])
          .describe("The ID of the document to delete."),
      }),
      // @ts-expect-error - AI SDK strict mode type inference bug
      execute: async ({ id }: { id: string | number }) => {
        return await payload.delete({
          collection: collectionSlug as any,
          id,
          req, // Enforce access control
        });
      },
    });
  }

  return tools;
};


@@ -1,6 +1,6 @@
{
  "name": "@mintel/pdf",
  "version": "1.9.17",
  "type": "module",
  "main": "dist/index.js",
  "module": "dist/index.js",


@@ -1,6 +1,6 @@
{
  "name": "@mintel/seo-engine",
  "version": "1.9.17",
  "description": "AI-powered SEO keyword and topic cluster evaluation engine",
  "type": "module",
  "main": "./dist/index.js",


@@ -0,0 +1,15 @@
FROM node:20-bookworm-slim AS builder
WORKDIR /app
COPY package.json ./
RUN corepack enable pnpm && pnpm install --ignore-workspace
COPY tsconfig.json ./
COPY src ./src
RUN pnpm build
FROM node:20-bookworm-slim
WORKDIR /app
COPY --from=builder /app/package.json ./
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
ENTRYPOINT ["node", "dist/start.js"]


@@ -0,0 +1,24 @@
{
"name": "@mintel/serpbear-mcp",
"version": "1.9.17",
"description": "SerpBear SEO Tracking MCP server for Mintel infrastructure",
"main": "dist/index.js",
"type": "module",
"scripts": {
"build": "tsc",
"start": "node dist/start.js",
"dev": "tsx watch src/index.ts"
},
"dependencies": {
"@modelcontextprotocol/sdk": "^1.5.0",
"axios": "^1.7.2",
"dotenv": "^17.3.1",
"express": "^5.2.1"
},
"devDependencies": {
"@types/express": "^5.0.6",
"@types/node": "^20.14.10",
"tsx": "^4.19.2",
"typescript": "^5.5.3"
}
}


@@ -0,0 +1,261 @@
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from 'express';
import crypto from 'crypto';
import {
CallToolRequestSchema,
ListToolsRequestSchema,
Tool,
} from "@modelcontextprotocol/sdk/types.js";
import axios from "axios";
import https from "https";
const SERPBEAR_BASE_URL = process.env.SERPBEAR_BASE_URL || "https://serpbear.infra.mintel.me";
const SERPBEAR_API_KEY = process.env.SERPBEAR_API_KEY;
if (!SERPBEAR_API_KEY) {
console.error("Warning: SERPBEAR_API_KEY is not set. API calls will fail.");
}
const serpbearClient = axios.create({
baseURL: `${SERPBEAR_BASE_URL}/api`,
headers: { apiKey: SERPBEAR_API_KEY },
httpsAgent: new https.Agent({
rejectUnauthorized: false,
}),
});
// --- Tool Definitions ---
const LIST_DOMAINS_TOOL: Tool = {
name: "serpbear_list_domains",
description: "List all domains/projects tracked in SerpBear",
inputSchema: { type: "object", properties: {} },
};
const GET_KEYWORDS_TOOL: Tool = {
name: "serpbear_get_keywords",
description: "Get all tracked keywords for a domain, with their current ranking positions",
inputSchema: {
type: "object",
properties: {
domain_id: { type: "string", description: "Domain ID from serpbear_list_domains" },
},
required: ["domain_id"],
},
};
const ADD_KEYWORDS_TOOL: Tool = {
name: "serpbear_add_keywords",
description: "Add new keywords to track for a domain",
inputSchema: {
type: "object",
properties: {
domain_id: { type: "string", description: "Domain ID" },
keywords: {
type: "array",
items: { type: "string" },
description: "List of keywords to add (e.g., ['Webentwickler Frankfurt', 'Next.js Agentur'])"
},
country: { type: "string", description: "Country code for SERP tracking (e.g., 'de', 'us'). Default: 'de'" },
device: { type: "string", description: "Device type: 'desktop' or 'mobile'. Default: 'desktop'" },
},
required: ["domain_id", "keywords"],
},
};
const DELETE_KEYWORDS_TOOL: Tool = {
name: "serpbear_delete_keywords",
description: "Remove keywords from tracking",
inputSchema: {
type: "object",
properties: {
keyword_ids: {
type: "array",
items: { type: "number" },
description: "Array of keyword IDs to delete"
},
},
required: ["keyword_ids"],
},
};
const REFRESH_KEYWORDS_TOOL: Tool = {
name: "serpbear_refresh_keywords",
description: "Trigger an immediate SERP position refresh for specific keywords",
inputSchema: {
type: "object",
properties: {
keyword_ids: {
type: "array",
items: { type: "number" },
description: "List of keyword IDs to refresh"
},
},
required: ["keyword_ids"],
},
};
const GET_KEYWORD_HISTORY_TOOL: Tool = {
name: "serpbear_get_keyword_history",
description: "Get the ranking history for a specific keyword over time",
inputSchema: {
type: "object",
properties: {
keyword_id: { type: "number", description: "Keyword ID from serpbear_get_keywords" },
},
required: ["keyword_id"],
},
};
// --- Server Setup ---
const server = new Server(
{ name: "serpbear-mcp", version: "1.0.0" },
{ capabilities: { tools: {} } }
);
server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [
LIST_DOMAINS_TOOL,
GET_KEYWORDS_TOOL,
ADD_KEYWORDS_TOOL,
DELETE_KEYWORDS_TOOL,
REFRESH_KEYWORDS_TOOL,
GET_KEYWORD_HISTORY_TOOL,
],
}));
server.setRequestHandler(CallToolRequestSchema, async (request) => {
if (request.params.name === "serpbear_list_domains") {
try {
const res = await serpbearClient.get('/domains');
const domains = (res.data.domains || []).map((d: any) => ({
id: d.id, domain: d.domain, keywords: d.keywordCount
}));
return { content: [{ type: "text", text: JSON.stringify(domains, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "serpbear_get_keywords") {
const { domain_id } = request.params.arguments as any;
try {
const res = await serpbearClient.get('/keywords', { params: { domain: domain_id } });
const keywords = (res.data.keywords || []).map((k: any) => ({
id: k.id,
keyword: k.keyword,
position: k.position,
lastUpdated: k.lastUpdated,
country: k.country,
device: k.device,
change: k.position_change ?? null,
}));
return { content: [{ type: "text", text: JSON.stringify(keywords, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "serpbear_add_keywords") {
const { domain_id, keywords, country = 'de', device = 'desktop' } = request.params.arguments as any;
try {
const res = await serpbearClient.post('/keywords', {
domain: domain_id,
keywords: keywords.map((kw: string) => ({ keyword: kw, country, device })),
});
return { content: [{ type: "text", text: `Added ${keywords.length} keywords. Result: ${JSON.stringify(res.data)}` }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "serpbear_delete_keywords") {
const { keyword_ids } = request.params.arguments as any;
try {
await serpbearClient.delete('/keywords', { data: { ids: keyword_ids } });
return { content: [{ type: "text", text: `Deleted ${keyword_ids.length} keywords.` }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "serpbear_refresh_keywords") {
const { keyword_ids } = request.params.arguments as any;
try {
await serpbearClient.post('/keywords/refresh', { ids: keyword_ids });
return { content: [{ type: "text", text: `Triggered refresh for ${keyword_ids.length} keywords.` }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "serpbear_get_keyword_history") {
const { keyword_id } = request.params.arguments as any;
try {
const res = await serpbearClient.get(`/keywords/${keyword_id}/history`);
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
throw new Error(`Unknown tool: ${request.params.name}`);
});
// --- Express / SSE Server ---
async function run() {
const isStdio = process.argv.includes('--stdio');
if (isStdio) {
const { StdioServerTransport } = await import('@modelcontextprotocol/sdk/server/stdio.js');
const transport = new StdioServerTransport();
await server.connect(transport);
console.error('SerpBear MCP server is running on stdio');
} else {
const app = express();
const transports = new Map<string, SSEServerTransport>();
app.use((req, _res, next) => {
console.error(`${req.method} ${req.url}`);
next();
});
app.get('/sse', async (req, res) => {
const sessionId = crypto.randomUUID();
console.error(`New SSE connection: ${sessionId}`);
const transport = new SSEServerTransport(`/message/${sessionId}`, res);
transports.set(sessionId, transport);
req.on('close', () => {
console.error(`SSE connection closed: ${sessionId}`);
transports.delete(sessionId);
});
await server.connect(transport);
});
app.post('/message/:sessionId', async (req, res) => {
const { sessionId } = req.params;
const transport = transports.get(sessionId as string);
if (!transport) {
console.error(`No transport found for session: ${sessionId}`);
res.status(400).send('No active SSE connection for this session');
return;
}
await transport.handlePostMessage(req, res);
});
const PORT = process.env.SERPBEAR_MCP_PORT || 3004;
app.listen(PORT, () => {
console.error(`SerpBear MCP server running on http://localhost:${PORT}/sse`);
});
}
}
run().catch((err) => {
console.error("Fatal error:", err);
process.exit(1);
});

View File

@@ -0,0 +1,13 @@
import { config } from 'dotenv';
import { resolve } from 'path';
import { fileURLToPath } from 'url';
const __dirname = fileURLToPath(new URL('.', import.meta.url));
config({ quiet: true, path: resolve(__dirname, '../../../.env.local') });
config({ quiet: true, path: resolve(__dirname, '../../../.env') });
import('./index.js').catch(err => {
console.error('Failed to start SerpBear MCP Server:', err);
process.exit(1);
});

View File

@@ -0,0 +1,16 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": [
"src/**/*"
]
}

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/thumbnail-generator",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "private": false,
   "type": "module",
   "main": "./dist/index.js",

View File

@@ -1,6 +1,6 @@
 {
   "name": "@mintel/tsconfig",
-  "version": "1.9.10",
+  "version": "1.9.17",
   "publishConfig": {
     "access": "public",
     "registry": "https://git.infra.mintel.me/api/packages/mmintel/npm"

View File

@@ -0,0 +1,15 @@
FROM node:20-bookworm-slim AS builder
WORKDIR /app
COPY package.json ./
RUN corepack enable pnpm && pnpm install --ignore-workspace
COPY tsconfig.json ./
COPY src ./src
RUN pnpm build
FROM node:20-bookworm-slim
WORKDIR /app
COPY --from=builder /app/package.json ./
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
ENTRYPOINT ["node", "dist/start.js"]

View File

@@ -0,0 +1,24 @@
{
"name": "@mintel/umami-mcp",
"version": "1.9.17",
"description": "Umami Analytics MCP server for Mintel infrastructure",
"main": "dist/index.js",
"type": "module",
"scripts": {
"build": "tsc",
"start": "node dist/start.js",
"dev": "tsx watch src/index.ts"
},
"dependencies": {
"@modelcontextprotocol/sdk": "^1.5.0",
"axios": "^1.7.2",
"dotenv": "^17.3.1",
"express": "^5.2.1"
},
"devDependencies": {
"@types/express": "^5.0.6",
"@types/node": "^20.14.10",
"tsx": "^4.19.2",
"typescript": "^5.5.3"
}
}

View File

@@ -0,0 +1,298 @@
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from 'express';
import crypto from 'crypto';
import {
CallToolRequestSchema,
ListToolsRequestSchema,
Tool,
} from "@modelcontextprotocol/sdk/types.js";
import axios from "axios";
import https from "https";
const UMAMI_BASE_URL = process.env.UMAMI_BASE_URL || "https://umami.infra.mintel.me";
const UMAMI_USERNAME = process.env.UMAMI_USERNAME;
const UMAMI_PASSWORD = process.env.UMAMI_PASSWORD;
const UMAMI_API_KEY = process.env.UMAMI_API_KEY; // optional if using API key auth
const httpsAgent = new https.Agent({
// NOTE: disables TLS certificate verification, presumably for self-signed certs on internal infra
rejectUnauthorized: false,
});
if (!UMAMI_USERNAME && !UMAMI_API_KEY) {
console.error("Warning: Neither UMAMI_USERNAME/PASSWORD nor UMAMI_API_KEY is set.");
}
// Token cache to avoid logging in on every request.
// Note: the token is never invalidated here, so an expired token (401) persists until restart.
let cachedToken: string | null = null;
async function getAuthHeaders(): Promise<Record<string, string>> {
if (UMAMI_API_KEY) {
return { 'x-umami-api-key': UMAMI_API_KEY };
}
if (!cachedToken) {
const res = await axios.post(`${UMAMI_BASE_URL}/api/auth/login`, {
username: UMAMI_USERNAME,
password: UMAMI_PASSWORD,
}, { httpsAgent });
cachedToken = res.data.token;
}
return { Authorization: `Bearer ${cachedToken}` };
}
// --- Tool Definitions ---
const LIST_WEBSITES_TOOL: Tool = {
name: "umami_list_websites",
description: "List all websites tracked in Umami",
inputSchema: { type: "object", properties: {} },
};
const GET_WEBSITE_STATS_TOOL: Tool = {
name: "umami_get_website_stats",
description: "Get summary statistics for a website for a time range",
inputSchema: {
type: "object",
properties: {
website_id: { type: "string", description: "Umami website UUID" },
start_at: { type: "number", description: "Start timestamp in ms (e.g., Date.now() - 7 days)" },
end_at: { type: "number", description: "End timestamp in ms (default: now)" },
},
required: ["website_id", "start_at"],
},
};
const GET_PAGE_VIEWS_TOOL: Tool = {
name: "umami_get_pageviews",
description: "Get pageview/session time series for a website",
inputSchema: {
type: "object",
properties: {
website_id: { type: "string", description: "Umami website UUID" },
start_at: { type: "number", description: "Start timestamp in ms" },
end_at: { type: "number", description: "End timestamp in ms (default: now)" },
unit: { type: "string", description: "Time unit: 'hour', 'day', 'month' (default: day)" },
timezone: { type: "string", description: "Timezone (default: Europe/Berlin)" },
},
required: ["website_id", "start_at"],
},
};
const GET_TOP_PAGES_TOOL: Tool = {
name: "umami_get_top_pages",
description: "Get the most visited pages/URLs for a website",
inputSchema: {
type: "object",
properties: {
website_id: { type: "string", description: "Umami website UUID" },
start_at: { type: "number", description: "Start timestamp in ms" },
end_at: { type: "number", description: "End timestamp in ms" },
limit: { type: "number", description: "Number of results (default: 20)" },
},
required: ["website_id", "start_at"],
},
};
const GET_TOP_REFERRERS_TOOL: Tool = {
name: "umami_get_top_referrers",
description: "Get the top traffic referrers for a website",
inputSchema: {
type: "object",
properties: {
website_id: { type: "string", description: "Umami website UUID" },
start_at: { type: "number", description: "Start timestamp in ms" },
end_at: { type: "number", description: "End timestamp in ms" },
limit: { type: "number", description: "Number of results (default: 10)" },
},
required: ["website_id", "start_at"],
},
};
const GET_COUNTRY_STATS_TOOL: Tool = {
name: "umami_get_country_stats",
description: "Get visitor breakdown by country",
inputSchema: {
type: "object",
properties: {
website_id: { type: "string", description: "Umami website UUID" },
start_at: { type: "number", description: "Start timestamp in ms" },
end_at: { type: "number", description: "End timestamp in ms" },
},
required: ["website_id", "start_at"],
},
};
const GET_ACTIVE_VISITORS_TOOL: Tool = {
name: "umami_get_active_visitors",
description: "Get the number of visitors currently active on a website (last 5 minutes)",
inputSchema: {
type: "object",
properties: {
website_id: { type: "string", description: "Umami website UUID" },
},
required: ["website_id"],
},
};
// --- Server Setup ---
const server = new Server(
{ name: "umami-mcp", version: "1.0.0" },
{ capabilities: { tools: {} } }
);
server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [
LIST_WEBSITES_TOOL,
GET_WEBSITE_STATS_TOOL,
GET_PAGE_VIEWS_TOOL,
GET_TOP_PAGES_TOOL,
GET_TOP_REFERRERS_TOOL,
GET_COUNTRY_STATS_TOOL,
GET_ACTIVE_VISITORS_TOOL,
],
}));
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const headers = await getAuthHeaders();
const api = axios.create({ baseURL: `${UMAMI_BASE_URL}/api`, headers, httpsAgent });
const now = Date.now();
if (request.params.name === "umami_list_websites") {
try {
const res = await api.get('/websites');
const sites = (res.data.data || res.data || []).map((s: any) => ({
id: s.id, name: s.name, domain: s.domain
}));
return { content: [{ type: "text", text: JSON.stringify(sites, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "umami_get_website_stats") {
const { website_id, start_at, end_at = now } = request.params.arguments as any;
try {
const res = await api.get(`/websites/${website_id}/stats`, { params: { startAt: start_at, endAt: end_at } });
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "umami_get_pageviews") {
const { website_id, start_at, end_at = now, unit = 'day', timezone = 'Europe/Berlin' } = request.params.arguments as any;
try {
const res = await api.get(`/websites/${website_id}/pageviews`, {
params: { startAt: start_at, endAt: end_at, unit, timezone }
});
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "umami_get_top_pages") {
const { website_id, start_at, end_at = now, limit = 20 } = request.params.arguments as any;
try {
const res = await api.get(`/websites/${website_id}/metrics`, {
params: { startAt: start_at, endAt: end_at, type: 'url', limit }
});
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "umami_get_top_referrers") {
const { website_id, start_at, end_at = now, limit = 10 } = request.params.arguments as any;
try {
const res = await api.get(`/websites/${website_id}/metrics`, {
params: { startAt: start_at, endAt: end_at, type: 'referrer', limit }
});
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "umami_get_country_stats") {
const { website_id, start_at, end_at = now } = request.params.arguments as any;
try {
const res = await api.get(`/websites/${website_id}/metrics`, {
params: { startAt: start_at, endAt: end_at, type: 'country' }
});
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
if (request.params.name === "umami_get_active_visitors") {
const { website_id } = request.params.arguments as any;
try {
const res = await api.get(`/websites/${website_id}/active`);
return { content: [{ type: "text", text: JSON.stringify(res.data, null, 2) }] };
} catch (e: any) {
return { isError: true, content: [{ type: "text", text: `Error: ${e.message}` }] };
}
}
throw new Error(`Unknown tool: ${request.params.name}`);
});
// --- Express / SSE Server ---
async function run() {
const isStdio = process.argv.includes('--stdio');
if (isStdio) {
const { StdioServerTransport } = await import('@modelcontextprotocol/sdk/server/stdio.js');
const transport = new StdioServerTransport();
await server.connect(transport);
console.error('Umami MCP server is running on stdio');
} else {
const app = express();
const transports = new Map<string, SSEServerTransport>();
app.use((req, _res, next) => {
console.error(`${req.method} ${req.url}`);
next();
});
app.get('/sse', async (req, res) => {
const sessionId = crypto.randomUUID();
console.error(`New SSE connection: ${sessionId}`);
const transport = new SSEServerTransport(`/message/${sessionId}`, res);
transports.set(sessionId, transport);
req.on('close', () => {
console.error(`SSE connection closed: ${sessionId}`);
transports.delete(sessionId);
});
await server.connect(transport);
});
app.post('/message/:sessionId', async (req, res) => {
const { sessionId } = req.params;
const transport = transports.get(sessionId as string);
if (!transport) {
console.error(`No transport found for session: ${sessionId}`);
res.status(400).send('No active SSE connection for this session');
return;
}
await transport.handlePostMessage(req, res);
});
const PORT = process.env.UMAMI_MCP_PORT || 3003;
app.listen(PORT, () => {
console.error(`Umami MCP server running on http://localhost:${PORT}/sse`);
});
}
}
run().catch((err) => {
console.error("Fatal error:", err);
process.exit(1);
});

View File

@@ -0,0 +1,13 @@
import { config } from 'dotenv';
import { resolve } from 'path';
import { fileURLToPath } from 'url';
const __dirname = fileURLToPath(new URL('.', import.meta.url));
config({ quiet: true, path: resolve(__dirname, '../../../.env.local') });
config({ quiet: true, path: resolve(__dirname, '../../../.env') });
import('./index.js').catch(err => {
console.error('Failed to start Umami MCP Server:', err);
process.exit(1);
});
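dotenv's config() does not overwrite variables that are already set, so with the two calls above the first file loaded (.env.local) takes precedence over .env. The precedence rule in miniature (illustrative sketch, not the actual loader):

```python
def load_env(store: dict, *files: dict) -> dict:
    """Mimic repeated dotenv.config(): earlier files win; existing keys are never overwritten."""
    for file_vars in files:
        for key, value in file_vars.items():
            store.setdefault(key, value)  # skip keys that are already present
    return store


# .env.local defines the port, .env defines both: the .env.local value survives
env = load_env(
    {},
    {"UMAMI_MCP_PORT": "4000"},
    {"UMAMI_MCP_PORT": "3003", "UMAMI_BASE_URL": "https://example.test"},
)
```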

View File

@@ -0,0 +1,16 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": [
"src/**/*"
]
}

pnpm-lock.yaml (generated): 917 changed lines

File diff suppressed because it is too large