INIT(app): initial setup

- Initialize project structure
- Add base application files
2025-12-23 22:31:45 +09:00
commit 346b0c79ef
16 changed files with 1006 additions and 0 deletions

.gitignore

@@ -0,0 +1,58 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
venv/
env/
ENV/
.venv
pip-log.txt
pip-delete-this-directory.txt
.pytest_cache/
.coverage
htmlcov/
dist/
build/
*.egg-info/
# Environment variables
.env
.env.local
.env.*.local
*.env
# Chainlit
.chainlit/
.files/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store
# Logs
*.log
logs/
# Docker
docker-compose.override.yml
# Kubernetes secrets (local only)
*-secret.yaml
!deploy/vault/*.yaml
# Node modules (if any frontend added)
node_modules/
package-lock.json
yarn.lock
# Temporary files
tmp/
temp/
*.tmp

README.md

@@ -0,0 +1,261 @@
# MAS (Multi-Agent System)
MAS is a unified UI and orchestration layer for multiple AI agents (a ChatGPT/Claude/Gemini-style chat interface) that runs on your own Kubernetes cluster.
## 🎯 Architecture
### Agents
- **Claude Code (Orchestrator)**: overall coordinator & DevOps expert
- **Qwen Backend**: backend engineer (FastAPI, Node.js)
- **Qwen Frontend**: frontend engineer (Next.js, React)
- **Qwen SRE**: monitoring & reliability engineer
### Tech stack
- **Backend**: LangGraph + LangChain + FastAPI
- **UI**: Chainlit (chat-style UI)
- **Database**: PostgreSQL (CNPG)
- **Cache**: Redis
- **LLMs**: Claude API + **Groq Llama 3.x** (OpenAI-compatible API)
- **Deploy**: Kubernetes + ArgoCD
---
## 🚀 Local development
### 1. Run with Docker Compose
```bash
cd deploy/docker
# Copy or create .env and fill in your API keys
# (ANTHROPIC_API_KEY, GROQ_API_KEY, etc.)
# Start the full stack
docker compose up -d
# Tail logs
docker compose logs -f mas
```
Open: `http://localhost:8000`
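The compose file reads these keys from a `.env` file in `deploy/docker`. A minimal example (values are placeholders; the variable names come from the compose file's `environment:` section):
```bash
# deploy/docker/.env (example; replace the placeholder values)
ANTHROPIC_API_KEY=your-claude-key
GROQ_API_KEY=your-groq-key
GROQ_API_BASE=https://api.groq.com/openai/v1   # optional, compose already defaults to this
OPENAI_API_KEY=your-openai-key                 # optional
GOOGLE_API_KEY=your-gemini-key                 # optional
```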
### 2. Run backend directly (Python)
```bash
cd services/backend
# Create venv
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Environment variables
cp .env.example .env
# Edit .env and set your API keys
# Run Chainlit app
chainlit run chainlit_app.py
```
---
## ☸️ Kubernetes deployment
### 1. Create namespace and secrets
```bash
kubectl create namespace mas
kubectl create secret generic mas-api-keys \
  --from-literal=anthropic-api-key=YOUR_CLAUDE_KEY \
  --from-literal=groq-api-key=YOUR_GROQ_KEY \
  --from-literal=openai-api-key=YOUR_OPENAI_KEY \
  --from-literal=google-api-key=YOUR_GEMINI_KEY \
  -n mas
```
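To confirm the secret landed with the expected keys (plain `kubectl`, nothing MAS-specific):
```bash
kubectl describe secret mas-api-keys -n mas
```
Note that the manifests under `deploy/k8s` also source these keys from Vault via the ExternalSecrets in `deploy/vault/`, so the manual secret is mainly useful on clusters without External Secrets Operator.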
### 2. Deploy via ArgoCD
```bash
# Create ArgoCD Application
kubectl apply -f deploy/argocd/mas.yaml
# Sync and check status
argocd app sync mas
argocd app get mas
```
### 3. Deploy from your server (example)
```bash
# SSH into your k3s master
ssh oracle-master
# Apply ArgoCD Application
sudo kubectl apply -f /path/to/deploy/argocd/mas.yaml
# Check status
sudo kubectl get pods -n mas
sudo kubectl logs -f deployment/mas -n mas
```
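If the ingress is not set up yet, you can also reach the UI through a port-forward (service name and port come from `deploy/k8s/service.yaml`):
```bash
kubectl port-forward svc/mas -n mas 8000:8000
# then open http://localhost:8000
```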
Ingress example (if configured): `https://mas.mayne.vcn`
---
## 🎨 UI customization
### Chainlit theme & behavior
You can customize the UI via `services/backend/.chainlit/config.toml`:
```toml
[UI]
name = "MAS"
show_readme_as_default = true
default_collapse_content = true
```
### Agent prompts
System prompts for each agent live in `services/backend/agents.py`.
You can tune:
- how the **Orchestrator** routes tasks
- coding style of backend/frontend agents
- SRE troubleshooting behavior
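For example, project-specific conventions can be appended to one of the prompt constants in `agents.py` (a sketch; the conventions below are placeholders):
```python
# services/backend/agents.py (sketch): extend the backend agent's system prompt
BACKEND_PROMPT += """
Additional project conventions (placeholders):
- Prefer async SQLAlchemy sessions
- Return Pydantic models from every endpoint
"""
```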
---
## 📊 Observability
### Prometheus ServiceMonitor (example)
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mas
  namespace: mas
spec:
  selector:
    matchLabels:
      app: mas
  endpoints:
    - port: http
      path: /metrics
```
### Grafana dashboards
Recommended panels:
- LangGraph workflow metrics
- Per-agent latency & error rate
- Token usage and cost estimates
- Backend API latency & 5xx rate
---
## 🔧 Advanced features
### 1. MCP (Model Context Protocol) with Claude
With Claude Code as the Orchestrator, MAS can access:
- Filesystem (read/write project files)
- Git (status, commit, push, PR)
- SSH (run remote commands on your servers)
- PostgreSQL (schema inspection, migrations, queries)
- Kubernetes (kubectl via MCP tool)
This allows fully automated workflows like:
- “Create a new service, add deployment manifests, and deploy to k3s.”
- “Debug failing pods and propose a fix, then open a PR.”
### 2. Multi-agent collaboration (LangGraph)
Typical workflow:
```text
User request
   ↓
Claude Orchestrator
   ↓ decides which agent(s) to call
Backend Dev → Frontend Dev → SRE
   ↓
Claude Orchestrator (review & summary)
   ↓
Final answer to user
```
Examples:
- Fullstack feature (API + UI + monitoring)
- Infra rollout (Harbor, Tekton, CNPG, MetalLB) with validation
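The same workflow can also be driven directly from Python without the Chainlit UI, which is handy for testing prompts and routing (a sketch; run it from `services/backend` with the API keys exported):
```python
# Minimal sketch: invoke the LangGraph workflow from agents.py directly
from agents import mas_graph

state = {
    "messages": [{"role": "user", "content": "Create a signup API with FastAPI."}],
    "current_agent": "orchestrator",
    "task_type": "",
    "result": {},
}

# Stream per-node updates as the orchestrator hands work to the specialist agents
for event in mas_graph.stream(state):
    for node, node_state in event.items():
        print(f"[{node}] {node_state['messages'][-1]['content'][:200]}")
```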
---
## 📝 Usage examples
### Backend API request
```text
User: "Create a signup API with FastAPI.
Use PostgreSQL and JWT tokens."
🎼 Orchestrator:
→ routes to Qwen Backend
⚙️ Qwen Backend:
→ generates FastAPI router, Pydantic models, DB schema, JWT logic
🎼 Orchestrator:
→ reviews, suggests improvements, and outputs final code snippet & file layout
```
### Frontend component request
```text
User: "Build a responsive dashboard chart component using Recharts."
🎼 Orchestrator:
→ routes to Qwen Frontend
🎨 Qwen Frontend:
→ generates a Next.js/React component with TypeScript and responsive styles
🎼 Orchestrator:
→ explains how to integrate it into your existing app
```
### Infra / SRE request
```text
User: "Prometheus is firing high memory alerts for the PostgreSQL pod.
Help me stabilize it."
🎼 Orchestrator:
→ routes to Qwen SRE
📊 Qwen SRE:
→ analyzes metrics & logs (conceptually),
proposes tuning (Postgres config, indexes, pooler),
and suggests alert threshold adjustments.
```
---
## 🤝 Contributing
Contributions are welcome:
- New agents (e.g., data engineer, security engineer)
- New tools (Harbor, Tekton, CNPG, MetalLB integrations)
- Better prompts and workflows
- Docs and examples
Feel free to open issues or PRs in this repository.
---
## 📄 License
MIT

deploy/argocd/mas.yaml

@@ -0,0 +1,21 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mas
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitea0213.kro.kr/bluemayne/mas.git
    targetRevision: HEAD
    path: deploy/k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: mas
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

deploy/docker/Dockerfile

@@ -0,0 +1,24 @@
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Chainlit port
EXPOSE 8000
# Run Chainlit
CMD ["chainlit", "run", "chainlit_app.py", "--host", "0.0.0.0", "--port", "8000"]

deploy/docker/docker-compose.yml

@@ -0,0 +1,73 @@
version: '3.8'

services:
  mas:
    build: ../../services/backend
    container_name: mas
    ports:
      - "8000:8000"
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      # Groq API (OpenAI-compatible)
      - GROQ_API_KEY=${GROQ_API_KEY}
      - GROQ_API_BASE=${GROQ_API_BASE:-https://api.groq.com/openai/v1}
      # (optional) keep other providers
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GOOGLE_API_KEY=${GOOGLE_API_KEY}
      - DATABASE_URL=postgresql+asyncpg://mas:mas@postgres:5432/mas
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - redis
      - postgres
      - ollama
    volumes:
      - ../../services/backend:/app
    networks:
      - mas-network

  # Ollama (local Qwen models)
  ollama:
    image: ollama/ollama:latest
    container_name: mas-ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama
    networks:
      - mas-network

  # PostgreSQL
  postgres:
    image: postgres:16-alpine
    container_name: mas-postgres
    environment:
      POSTGRES_DB: mas
      POSTGRES_USER: mas
      POSTGRES_PASSWORD: mas
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - mas-network

  # Redis
  redis:
    image: redis:7-alpine
    container_name: mas-redis
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - mas-network

volumes:
  ollama-data:
  postgres-data:
  redis-data:

networks:
  mas-network:
    driver: bridge

deploy/k8s/deployment.yaml

@@ -0,0 +1,73 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mas
  namespace: mas
  labels:
    app: mas
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mas
  template:
    metadata:
      labels:
        app: mas
    spec:
      containers:
        - name: mas
          image: harbor.mayne.vcn/mas/platform:latest
          ports:
            - containerPort: 8000
              name: http
          env:
            - name: ANTHROPIC_API_KEY
              valueFrom:
                secretKeyRef:
                  name: mas-api-keys
                  key: anthropic-api-key
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: mas-api-keys
                  key: openai-api-key
            - name: GOOGLE_API_KEY
              valueFrom:
                secretKeyRef:
                  name: mas-api-keys
                  key: google-api-key
            - name: GROQ_API_KEY
              valueFrom:
                secretKeyRef:
                  name: mas-api-keys
                  key: groq-api-key
            - name: GROQ_API_BASE
              value: "https://api.groq.com/openai/v1"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: mas-postgres
                  key: database-url
            - name: REDIS_URL
              value: "redis://redis:6379/0"
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "2000m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 10
            periodSeconds: 5

deploy/k8s/ingress.yaml

@@ -0,0 +1,29 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mas
  namespace: mas
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/websocket-services: "mas"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - mas.mayne.vcn
      secretName: mas-tls
  rules:
    - host: mas.mayne.vcn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mas
                port:
                  number: 8000

deploy/k8s/kustomization.yaml

@@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mas
resources:
- namespace.yaml
- ../vault/mas-api-keys.yaml
- ../vault/mas-postgres.yaml
- deployment.yaml
- service.yaml
- ingress.yaml

deploy/k8s/namespace.yaml

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Namespace
metadata:
  name: mas
  labels:
    name: mas

deploy/k8s/service.yaml

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: mas
  namespace: mas
  labels:
    app: mas
spec:
  type: ClusterIP
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      name: http
  selector:
    app: mas

deploy/vault/mas-api-keys.yaml

@@ -0,0 +1,31 @@
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: mas-api-keys
  namespace: mas
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend
  target:
    name: mas-api-keys
    creationPolicy: Owner
  data:
    - secretKey: anthropic-api-key
      remoteRef:
        key: mas/api-keys
        property: ANTHROPIC_API_KEY
    - secretKey: groq-api-key
      remoteRef:
        key: mas/api-keys
        property: GROQ_API_KEY
    - secretKey: openai-api-key
      remoteRef:
        key: mas/api-keys
        property: OPENAI_API_KEY
    - secretKey: google-api-key
      remoteRef:
        key: mas/api-keys
        property: GOOGLE_API_KEY

deploy/vault/mas-postgres.yaml

@@ -0,0 +1,27 @@
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: mas-postgres
  namespace: mas
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend
  target:
    name: mas-postgres
    creationPolicy: Owner
  data:
    - secretKey: database-url
      remoteRef:
        key: mas/postgres
        property: DATABASE_URL
    - secretKey: username
      remoteRef:
        key: mas/postgres
        property: USERNAME
    - secretKey: password
      remoteRef:
        key: mas/postgres
        property: PASSWORD

services/backend/.chainlit/config.toml

@@ -0,0 +1,17 @@
[project]
enable_telemetry = false
user_env = []
session_timeout = 3600
cache = false
[features]
prompt_playground = true
unsafe_allow_html = true
latex = true
[UI]
name = "MAS Platform"
default_collapse_content = true
default_expand_messages = false
hide_cot = false

services/backend/agents.py

@@ -0,0 +1,240 @@
"""
MAS (Multi-Agent System) 에이전트 정의
"""
from typing import Annotated, Literal, TypedDict
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langchain_core.messages import HumanMessage, SystemMessage
import os
class AgentState(TypedDict):
"""에이전트 간 공유되는 상태"""
messages: list
current_agent: str
task_type: str
result: dict
# ===== 1. Claude Code - Orchestrator =====
claude_orchestrator = ChatAnthropic(
model="claude-3-5-sonnet-20241022",
api_key=os.getenv("ANTHROPIC_API_KEY"),
temperature=0
)
ORCHESTRATOR_PROMPT = """당신은 MAS의 총괄 조율자이자 DevOps 전문가입니다.
**역할**:
- 사용자 요청을 분석하여 적절한 에이전트에게 작업 할당
- Kubernetes, ArgoCD, Helm, Kustomize 관리
- CI/CD 파이프라인 구성
- 최종 코드 리뷰 및 승인
**사용 가능한 에이전트**:
1. backend_developer: FastAPI, Node.js 백엔드 개발
2. frontend_developer: Next.js, React 프론트엔드 개발
3. sre_specialist: 모니터링, 성능 최적화, 보안
요청을 분석하고 어떤 에이전트가 처리해야 할지 결정하세요.
"""
# ===== 2. Groq #1 - Backend Developer =====
# Groq OpenAI-compatible endpoint
GROQ_API_BASE = os.getenv("GROQ_API_BASE", "https://api.groq.com/openai/v1")
GROQ_API_KEY = os.getenv("GROQ_API_KEY", "")
groq_backend = ChatOpenAI(
model=os.getenv("GROQ_BACKEND_MODEL", "llama-3.3-70b-specdec"),
base_url=GROQ_API_BASE,
api_key=GROQ_API_KEY,
temperature=0.7,
)
BACKEND_PROMPT = """당신은 백엔드 개발 전문가입니다.
**역할**:
- FastAPI, Node.js 백엔드 개발
- REST API 설계 및 구현
- 데이터베이스 쿼리 최적화
- 비즈니스 로직 구현
요청된 백엔드 작업을 수행하고 코드를 생성하세요.
"""
# ===== 3. Groq #2 - Frontend Developer =====
groq_frontend = ChatOpenAI(
model=os.getenv("GROQ_FRONTEND_MODEL", "llama-3.1-8b-instant"),
base_url=GROQ_API_BASE,
api_key=GROQ_API_KEY,
temperature=0.7,
)
FRONTEND_PROMPT = """당신은 프론트엔드 개발 전문가입니다.
**역할**:
- Next.js, React 컴포넌트 개발
- UI/UX 구현
- 상태 관리
- 반응형 디자인
요청된 프론트엔드 작업을 수행하고 코드를 생성하세요.
"""
# ===== 4. Groq #3 - SRE Specialist =====
groq_sre = ChatOpenAI(
model=os.getenv("GROQ_SRE_MODEL", "llama-3.1-8b-instant"),
base_url=GROQ_API_BASE,
api_key=GROQ_API_KEY,
temperature=0.3,
)
SRE_PROMPT = """당신은 SRE(Site Reliability Engineer) 전문가입니다.
**역할**:
- 시스템 모니터링 (Prometheus, Grafana, Loki)
- 로그 분석 및 알람 설정
- 성능 튜닝
- 보안 취약점 점검
요청된 SRE 작업을 수행하고 솔루션을 제시하세요.
"""
def orchestrator_node(state: AgentState) -> AgentState:
"""Claude Code - 작업 분석 및 할당"""
messages = state["messages"]
response = claude_orchestrator.invoke([
SystemMessage(content=ORCHESTRATOR_PROMPT),
HumanMessage(content=messages[-1]["content"])
])
# 작업 타입 결정
content = response.content.lower()
if "backend" in content or "api" in content or "fastapi" in content:
next_agent = "backend_developer"
elif "frontend" in content or "ui" in content or "react" in content:
next_agent = "frontend_developer"
elif "monitoring" in content or "performance" in content or "sre" in content:
next_agent = "sre_specialist"
else:
next_agent = "orchestrator" # 자신이 직접 처리
state["messages"].append({
"role": "orchestrator",
"content": response.content
})
state["current_agent"] = next_agent
return state
def backend_node(state: AgentState) -> AgentState:
"""Groq #1 - 백엔드 개발"""
messages = state["messages"]
response = groq_backend.invoke([
SystemMessage(content=BACKEND_PROMPT),
HumanMessage(content=messages[-1]["content"])
])
state["messages"].append({
"role": "backend_developer",
"content": response.content
})
state["current_agent"] = "orchestrator" # 결과를 오케스트레이터에게 반환
return state
def frontend_node(state: AgentState) -> AgentState:
"""Groq #2 - 프론트엔드 개발"""
messages = state["messages"]
response = groq_frontend.invoke([
SystemMessage(content=FRONTEND_PROMPT),
HumanMessage(content=messages[-1]["content"])
])
state["messages"].append({
"role": "frontend_developer",
"content": response.content
})
state["current_agent"] = "orchestrator"
return state
def sre_node(state: AgentState) -> AgentState:
"""Groq #3 - SRE 작업"""
messages = state["messages"]
response = groq_sre.invoke([
SystemMessage(content=SRE_PROMPT),
HumanMessage(content=messages[-1]["content"])
])
state["messages"].append({
"role": "sre_specialist",
"content": response.content
})
state["current_agent"] = "orchestrator"
return state
def router(state: AgentState) -> Literal["backend_developer", "frontend_developer", "sre_specialist", "end"]:
"""다음 에이전트 라우팅"""
current = state.get("current_agent", "orchestrator")
if current == "backend_developer":
return "backend_developer"
elif current == "frontend_developer":
return "frontend_developer"
elif current == "sre_specialist":
return "sre_specialist"
else:
return "end"
# ===== LangGraph 워크플로우 구성 =====
def create_mas_graph():
"""MAS 워크플로우 그래프 생성"""
workflow = StateGraph(AgentState)
# 노드 추가
workflow.add_node("orchestrator", orchestrator_node)
workflow.add_node("backend_developer", backend_node)
workflow.add_node("frontend_developer", frontend_node)
workflow.add_node("sre_specialist", sre_node)
# 엣지 정의
workflow.set_entry_point("orchestrator")
workflow.add_conditional_edges(
"orchestrator",
router,
{
"backend_developer": "backend_developer",
"frontend_developer": "frontend_developer",
"sre_specialist": "sre_specialist",
"end": END
}
)
# 각 에이전트는 작업 후 orchestrator로 복귀
workflow.add_edge("backend_developer", "orchestrator")
workflow.add_edge("frontend_developer", "orchestrator")
workflow.add_edge("sre_specialist", "orchestrator")
return workflow.compile()
# 그래프 인스턴스 생성
mas_graph = create_mas_graph()

services/backend/chainlit_app.py

@@ -0,0 +1,85 @@
"""
Chainlit UI for MAS Platform
"""
import chainlit as cl
from agents import mas_graph, AgentState
import os
from dotenv import load_dotenv
load_dotenv()
@cl.on_chat_start
async def start():
"""채팅 시작 시"""
await cl.Message(
content="🤖 **Multi-Agent System**에 오신 것을 환영합니다!\n\n"
"저는 다음 전문가 팀과 함께 작업합니다:\n\n"
"- 🎼 **Claude Code**: 총괄 조율자 & DevOps 전문가\n"
"- ⚙️ **Qwen Backend**: 백엔드 개발자\n"
"- 🎨 **Qwen Frontend**: 프론트엔드 개발자\n"
"- 📊 **Qwen SRE**: 모니터링 & 성능 전문가\n\n"
"무엇을 도와드릴까요?"
).send()
@cl.on_message
async def main(message: cl.Message):
"""메시지 수신 시"""
# 초기 상태
initial_state: AgentState = {
"messages": [{"role": "user", "content": message.content}],
"current_agent": "orchestrator",
"task_type": "",
"result": {}
}
# 응답 메시지 생성
response_msg = cl.Message(content="")
await response_msg.send()
# MAS 그래프 실행
async for event in mas_graph.astream(initial_state):
for node_name, state in event.items():
if node_name != "__end__":
last_message = state["messages"][-1]
agent_name = last_message["role"]
agent_content = last_message["content"]
# 에이전트별 아이콘
agent_icons = {
"orchestrator": "🎼",
"backend_developer": "⚙️",
"frontend_developer": "🎨",
"sre_specialist": "📊"
}
icon = agent_icons.get(agent_name, "🤖")
# 스트리밍 업데이트
response_msg.content += f"\n\n{icon} **{agent_name}**:\n{agent_content}"
await response_msg.update()
# 최종 업데이트
await response_msg.update()
@cl.on_settings_update
async def setup_agent(settings):
"""설정 업데이트"""
print(f"Settings updated: {settings}")
# 사이드바 설정
@cl.author_rename
def rename(orig_author: str):
"""에이전트 이름 매핑"""
rename_dict = {
"orchestrator": "Claude Code (Orchestrator)",
"backend_developer": "Qwen Backend Dev",
"frontend_developer": "Qwen Frontend Dev",
"sre_specialist": "Qwen SRE"
}
return rename_dict.get(orig_author, orig_author)
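The Kubernetes Deployment probes `/health`, which this file does not define. One way to add it, assuming Chainlit 1.x exposes its underlying FastAPI app as `chainlit.server.app` (verify against your Chainlit version):

```python
# Hypothetical addition to chainlit_app.py: expose /health for the k8s probes.
# Assumes Chainlit's FastAPI instance is importable as chainlit.server.app.
from chainlit.server import app as chainlit_fastapi_app


@chainlit_fastapi_app.get("/health")
async def health() -> dict:
    """Liveness/readiness endpoint used by deploy/k8s/deployment.yaml."""
    return {"status": "ok"}
```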

services/backend/requirements.txt

@@ -0,0 +1,30 @@
# LangGraph & LangChain
langgraph==0.2.53
langchain==0.3.13
langchain-anthropic==0.3.0
langchain-openai==0.2.14
langchain-google-genai==2.0.8
# Chainlit (UI)
chainlit==1.3.1
# API Framework
fastapi==0.115.6
uvicorn[standard]==0.34.0
pydantic==2.10.5
pydantic-settings==2.7.0
# Database
sqlalchemy==2.0.36
asyncpg==0.30.0
psycopg2-binary==2.9.10
# MCP (Model Context Protocol)
mcp==1.1.2
# Utilities
python-dotenv==1.0.1
redis==5.2.1
aioredis==2.0.1
httpx==0.28.1