Compare commits


3 Commits

39 changed files with 4661 additions and 152 deletions


```diff
@@ -1,10 +1,16 @@
 WISECLAW_ENV=development
 WISECLAW_DB_URL=sqlite:///./wiseclaw.db
-WISECLAW_OLLAMA_BASE_URL=http://127.0.0.1:11434
-WISECLAW_DEFAULT_MODEL=qwen3.5:4b
+WISECLAW_MODEL_PROVIDER=local
+WISECLAW_LOCAL_BASE_URL=http://127.0.0.1:1234
+WISECLAW_LOCAL_MODEL=qwen3-vl-8b-instruct-mlx@5bit
+WISECLAW_ZAI_BASE_URL=https://api.z.ai/api/anthropic
+WISECLAW_ZAI_MODEL=glm-5
+WISECLAW_ANYTHINGLLM_BASE_URL=http://127.0.0.1:3001
+WISECLAW_ANYTHINGLLM_WORKSPACE_SLUG=wiseclaw
 WISECLAW_SEARCH_PROVIDER=brave
 WISECLAW_TELEGRAM_BOT_TOKEN=
 WISECLAW_BRAVE_API_KEY=
+WISECLAW_ZAI_API_KEY=
+WISECLAW_ANYTHINGLLM_API_KEY=
 WISECLAW_ADMIN_HOST=127.0.0.1
 WISECLAW_ADMIN_PORT=8000
```
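Nothing in this diff shows how these variables are actually loaded at startup. A minimal stdlib-only sketch of reading a `.env`-style file into a dict (`parse_env_file` is a hypothetical helper for illustration, not the project's actual loader):

```python
def parse_env_file(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines; blanks, comments, and malformed lines are skipped."""
    settings: dict[str, str] = {}
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings


example = """
WISECLAW_ENV=development
WISECLAW_MODEL_PROVIDER=local
WISECLAW_LOCAL_BASE_URL=http://127.0.0.1:1234
WISECLAW_ZAI_API_KEY=
"""
config = parse_env_file(example)
print(config["WISECLAW_MODEL_PROVIDER"])  # local
```

Empty values such as `WISECLAW_ZAI_API_KEY=` parse to empty strings, which matches the "fill in secrets later from the admin panel" flow the README describes.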

.gitignore

```diff
@@ -13,3 +13,13 @@ build/
 .DS_Store
 .env
 wiseclaw.db
+.codex/
+.playwright-cli/
+.wiseclaw/
+backend/.wiseclaw/
+backend/tmp/
+backend/second_brain.md
+generated_apps/
+snake/
+snake-game/
+Yapilacak_Odevler.md
```

README.md

````diff
@@ -1,25 +1,148 @@
 # WiseClaw
 
-WiseClaw is a local-first personal assistant for macOS. It runs a FastAPI backend, uses Ollama for local LLM access, exposes a Telegram bot, and includes a React admin panel for settings, logs, and memory management.
+🦉 WiseClaw is a local-first personal assistant stack that runs on macOS. It brings together a FastAPI backend, a Telegram bot, a React admin panel, multi-provider LLM support, browser automation, tool calling, automations, and an AnythingLLM-based "second brain" integration in a single project.
 
-## Planned capabilities
-
-- Telegram chat with whitelist support
-- Local Ollama integration for `qwen3.5:4b`
-- Brave or SearXNG-backed web search
-- Apple Notes integration via AppleScript
-- File read/write tools
-- Terminal execution with policy modes
-- SQLite-backed memory, settings, and audit logs
-- macOS Keychain for secrets
+## ✨ What Can It Do?
+
+- 🤖 Conversation, commands, and tool use over Telegram
+- 🧠 Persistent user profile and communication preferences via `/tanisalim`
+- 🗂️ Second-brain queries backed by AnythingLLM
+- 📝 Adding second-brain notes with `/notlarima_ekle`, with automatic sync
+- ⚙️ Creating scheduled tasks with `/otomasyon_ekle`
+- 🌐 Web and image search with Brave Search
+- 🧭 Browsing in a real browser with `browser_use`
+- 🍎 Creating notes in Apple Notes
+- 📁 File read/write
+- 🖥️ Policy-based terminal command execution
+- 🔀 Global model provider selection:
+  - `Local (LM Studio)`
+  - `Z.AI`
+- 📊 Managing settings, logs, memory, profiles, and automations from the admin panel
 
-## Repository layout
-
-- `backend/` FastAPI app and WiseClaw core modules
-- `frontend/` React admin panel
-- `docs/` architecture and rollout notes
+## 🏗️ Architecture
+
+- `backend/`
+  FastAPI application, orchestrator, tools, Telegram bot, and scheduler
+- `frontend/`
+  React-based admin panel
+- `docs/`
+  Architecture notes and brainstorm records
 
-## Local development
+## 🧩 LLM Providers
+
+WiseClaw runs with a single global provider:
+
+- 🏠 `Local (LM Studio)`
+  Works through a local OpenAI-compatible endpoint
+- ☁️ `Z.AI`
+  Uses the `glm-4.7` and `glm-5` models over an Anthropic-compatible API
+
+When the active provider is changed from the admin panel, new requests go to the selected provider.
+
+## 🛠️ Main Tools
+
+- `brave_search`
+  Web and image search
+- `web_fetch`
+  Fetching a single URL and reading its content
+- `browser_use`
+  Real browser automation
+- `apple_notes`
+  Creating notes in Apple Notes
+- `files`
+  File/directory access
+- `terminal`
+  Command execution under a security policy
+- `second_brain`
+  Querying AnythingLLM workspace context
+
+## 🧠 Second Brain Flow
+
+WiseClaw can use AnythingLLM as a second brain.
+
+### Querying
+
+Example in Telegram:
+
+```text
+Check my notes: when and where did I meet Serkan?
+```
+
+WiseClaw routes this request to the `second_brain` tool, pulls context from the AnythingLLM workspace, and produces a short answer.
+
+### Adding Notes
+
+Telegram flow:
+
+```text
+/notlarima_ekle
+```
+
+The note sent afterwards:
+
+1. is written to the SQLite database as a `second_brain` record
+2. causes the [backend/second_brain.md](/Users/wisecolt-macmini/Project/wiseclaw/backend/second_brain.md) file to be regenerated
+3. the old `second_brain.md` is removed from the AnythingLLM workspace
+4. the new file is uploaded again and attached to the workspace
+
+Because it fits a document-based RAG flow better, this approach is more robust than writing directly from the DB into AnythingLLM.
````
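The regenerate, detach, re-upload ordering described above can be sketched provider-agnostically. The client below is a stand-in, not the real AnythingLLM API (`remove_document` and `upload_document` are invented names); only the ordering of the three steps mirrors the flow:

```python
from dataclasses import dataclass, field


@dataclass
class FakeWorkspaceClient:
    """Stand-in for an AnythingLLM-style workspace API (hypothetical methods)."""

    documents: dict[str, str] = field(default_factory=dict)

    def remove_document(self, name: str) -> None:
        self.documents.pop(name, None)

    def upload_document(self, name: str, content: str) -> None:
        self.documents[name] = content


def sync_second_brain(notes: list[str], client: FakeWorkspaceClient, doc_name: str = "second_brain.md") -> str:
    # 1. regenerate the markdown file from the stored notes
    content = "# Second Brain\n\n" + "\n".join(f"- {note}" for note in notes)
    # 2. drop the stale copy from the workspace
    client.remove_document(doc_name)
    # 3. upload the fresh file and attach it
    client.upload_document(doc_name, content)
    return content


client = FakeWorkspaceClient()
sync_second_brain(["Met Serkan on Tuesday"], client)
print("second_brain.md" in client.documents)  # True
```

The point of replacing the whole document rather than patching it is that the workspace embedding always reflects exactly one generated file, so retrieval never mixes stale and fresh chunks.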
```diff
+## 💬 Telegram Commands
+
+- `/start`
+- `/tanisalim`
+- `/profilim`
+- `/tercihlerim`
+- `/tanisalim_sifirla`
+- `/otomasyon_ekle`
+- `/otomasyonlar`
+- `/otomasyon_durdur <id>`
+- `/otomasyon_baslat <id>`
+- `/otomasyon_sil <id>`
+- `/notlarima_ekle`
+
+## ⏱️ Automations
+
+WiseClaw supports scheduled tasks via a scheduler that runs inside the backend.
+
+Frequencies supported in the first release:
+
+- daily
+- weekdays
+- weekly
+- hourly
+
+Automation results:
+
+- are sent to Telegram
+- are written to the audit log
```
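For the daily frequency, the next-run calculation reduces to "today at HH:MM if that time is still ahead, otherwise tomorrow". A minimal sketch with naive datetimes (the backend's real implementation also handles time zones and the other frequencies):

```python
from datetime import datetime, timedelta


def next_daily_run(now: datetime, time_of_day: str) -> datetime:
    """Next occurrence of HH:MM at or after `now` (naive datetimes for brevity)."""
    hour, minute = (int(part) for part in time_of_day.split(":", 1))
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed; run tomorrow
    return candidate


print(next_daily_run(datetime(2025, 1, 6, 10, 30), "09:00"))  # 2025-01-07 09:00:00
```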
````diff
+## 🧪 Admin Panel
+
+In the admin panel you can manage:
+
+- Runtime settings
+- Model provider
+- Search provider
+- Brave / Z.AI / AnythingLLM secrets
+- Telegram whitelist
+- User profiles
+- Automations
+- Memory
+- Recent logs
+
+Key endpoints:
+
+- `/admin/dashboard`
+- `/admin/settings`
+- `/admin/users`
+- `/admin/profiles`
+- `/admin/automations`
+- `/admin/memory`
+- `/admin/integrations/llm`
+- `/admin/integrations/telegram`
+
+## 🚀 Setup
 
 ### Backend
@@ -39,23 +162,65 @@ npm install
 npm run dev
 ```
````
````diff
-### Smoke checks
+## 🔐 Environment Variables
 
-```bash
-cd backend
-source .venv312/bin/activate
-uvicorn app.main:app --reload
-```
+You can copy the [.env.example](/Users/wisecolt-macmini/Project/wiseclaw/.env.example) file to `.env`.
 
-Then in another shell:
+Notable fields:
+
+- `WISECLAW_MODEL_PROVIDER`
+- `WISECLAW_LOCAL_BASE_URL`
+- `WISECLAW_LOCAL_MODEL`
+- `WISECLAW_ZAI_BASE_URL`
+- `WISECLAW_ZAI_MODEL`
+- `WISECLAW_ANYTHINGLLM_BASE_URL`
+- `WISECLAW_ANYTHINGLLM_WORKSPACE_SLUG`
+- `WISECLAW_TELEGRAM_BOT_TOKEN`
+- `WISECLAW_BRAVE_API_KEY`
+- `WISECLAW_ZAI_API_KEY`
+- `WISECLAW_ANYTHINGLLM_API_KEY`
+
+Note: secrets can also be saved later from the admin panel.
````
````diff
+## ✅ Quick Check
+
+Once the backend is up:
 
 ```bash
 curl http://127.0.0.1:8000/health
 curl http://127.0.0.1:8000/bootstrap
-curl http://127.0.0.1:8000/admin/integrations/ollama
+curl http://127.0.0.1:8000/admin/integrations/llm
 curl http://127.0.0.1:8000/admin/integrations/telegram
 ```
 
-## Environment bootstrap
-
-Copy `.env.example` to `.env` and fill in only the values you need for the first boot. Secrets that are changed from the admin panel should be written to Keychain, not back to `.env`.
+## 🔁 Restart
+
+The project has a one-command restart script:
+
+```bash
+cd /Users/wisecolt-macmini/Project/wiseclaw
+zsh ./restart.sh
+```
+
+This script:
+
+- stops the old backend process
+- starts a new `uvicorn` process
+- writes its log to `.wiseclaw/logs/backend.log`
+- verifies the backend came up with a health check
+
+## 📌 Notes
+
+- If you see `LM Studio status: Reachable` together with a `model is not installed` warning, the endpoint is reachable but the selected model name does not exactly match any installed model.
+- The workspace name shown on the AnythingLLM side can differ from the actual `slug`.
+- Brave image search results can be sent to Telegram as a media group.
+- Some browser tasks may require manual intervention due to captcha/anti-bot measures.
+
+## 🧭 Development Note
+
+This repo grew through fast iteration, so some areas carry deliberate technical debt. The main direction right now is:
+
+- more robust tool routing
+- better approval flows
+- improving second-brain retrieval quality
+- improving admin panel usability
````
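Since the README describes a single global provider switch, request routing boils down to resolving one (base URL, model) pair from the active setting. A minimal sketch; `ProviderConfig` and `resolve_provider` are illustrative names rather than the project's actual code, and the default values are the ones shown in `.env.example`:

```python
from dataclasses import dataclass


@dataclass
class ProviderConfig:
    base_url: str
    model: str


# Defaults taken from .env.example; the real values come from runtime settings.
PROVIDERS = {
    "local": ProviderConfig("http://127.0.0.1:1234", "qwen3-vl-8b-instruct-mlx@5bit"),
    "zai": ProviderConfig("https://api.z.ai/api/anthropic", "glm-5"),
}


def resolve_provider(model_provider: str) -> ProviderConfig:
    """Return the endpoint/model pair for the globally selected provider."""
    try:
        return PROVIDERS[model_provider]
    except KeyError:
        raise ValueError(f"unknown model provider: {model_provider}") from None


print(resolve_provider("zai").model)  # glm-5
```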


```diff
@@ -4,7 +4,7 @@ FastAPI service for WiseClaw. The backend now includes:
 - SQLite persistence through SQLAlchemy
 - runtime/admin settings endpoints
-- Ollama integration status endpoint
+- LM Studio integration status endpoint
 - Telegram polling runtime scaffold
 
 ## Run locally
```


```diff
@@ -3,9 +3,10 @@ from pydantic import BaseModel
 from sqlalchemy.orm import Session
 
 from app.admin.services import AdminService
-from app.db import get_session
+from app.config import get_settings as get_app_settings
+from app.db import SecretORM, get_session
 from app.llm.ollama_client import OllamaClient
-from app.models import MemoryRecord, OllamaStatus, RuntimeSettings, TelegramStatus, UserRecord
+from app.models import AutomationRecord, MemoryRecord, OllamaStatus, RuntimeSettings, TelegramStatus, UserProfileRecord, UserRecord
 
 router = APIRouter(prefix="/admin", tags=["admin"])
@@ -25,7 +26,7 @@ def get_dashboard(service: AdminService = Depends(get_admin_service)):
 @router.get("/settings", response_model=RuntimeSettings)
-def get_settings(service: AdminService = Depends(get_admin_service)):
+def get_runtime_settings(service: AdminService = Depends(get_admin_service)):
     return service.get_runtime_settings()
@@ -44,6 +45,16 @@ def post_user(payload: UserRecord, service: AdminService = Depends(get_admin_ser
     return service.save_user(payload)
 
+@router.get("/profiles", response_model=list[UserProfileRecord])
+def get_profiles(service: AdminService = Depends(get_admin_service)):
+    return service.list_user_profiles()
+
+@router.get("/automations", response_model=list[AutomationRecord])
+def get_automations(service: AdminService = Depends(get_admin_service)):
+    return service.list_automations()
+
 @router.get("/memory", response_model=list[MemoryRecord])
 def get_memory(service: AdminService = Depends(get_admin_service)):
     return service.list_memory()
@@ -66,11 +77,18 @@ def post_secret(payload: SecretPayload, service: AdminService = Depends(get_admi
     return {"status": "ok"}
 
+@router.get("/integrations/llm", response_model=OllamaStatus)
 @router.get("/integrations/ollama", response_model=OllamaStatus)
-async def get_ollama_status(service: AdminService = Depends(get_admin_service)):
+async def get_llm_status(service: AdminService = Depends(get_admin_service)):
     runtime = service.get_runtime_settings()
-    client = OllamaClient(runtime.ollama_base_url)
-    return await client.status(runtime.default_model)
+    settings = get_app_settings()
+    secret = service.session.get(SecretORM, "zai_api_key") if runtime.model_provider == "zai" else None
+    client = OllamaClient(
+        base_url=runtime.local_base_url if runtime.model_provider == "local" else settings.zai_base_url,
+        provider=runtime.model_provider,
+        api_key=secret.value if secret else settings.zai_api_key,
+    )
+    return await client.status(runtime.local_model if runtime.model_provider == "local" else runtime.zai_model)
 
 @router.get("/integrations/telegram", response_model=TelegramStatus)
```


```diff
@@ -5,15 +5,18 @@ from sqlalchemy.orm import Session
 from app.db import (
     AuditLogORM,
+    AutomationORM,
     AuthorizedUserORM,
+    DEFAULT_TOOLS,
     MemoryItemORM,
     SecretORM,
     SettingORM,
+    TelegramUserProfileORM,
     ToolStateORM,
     list_recent_logs,
 )
 from app.config import get_settings
-from app.models import DashboardSnapshot, MemoryRecord, RuntimeSettings, TelegramStatus, ToolToggle, UserRecord
+from app.models import AutomationRecord, DashboardSnapshot, MemoryRecord, RuntimeSettings, TelegramStatus, ToolToggle, UserProfileRecord, UserRecord
 
 class AdminService:
@@ -24,20 +27,30 @@ class AdminService:
         settings = {
             item.key: item.value for item in self.session.scalars(select(SettingORM))
         }
-        tools = list(self.session.scalars(select(ToolStateORM).order_by(ToolStateORM.name.asc())))
+        tool_records = {
+            tool.name: tool.enabled for tool in self.session.scalars(select(ToolStateORM).order_by(ToolStateORM.name.asc()))
+        }
         return RuntimeSettings(
             terminal_mode=int(settings["terminal_mode"]),
             search_provider=settings["search_provider"],
-            ollama_base_url=settings["ollama_base_url"],
-            default_model=settings["default_model"],
-            tools=[ToolToggle(name=tool.name, enabled=tool.enabled) for tool in tools],
+            model_provider=settings["model_provider"],
+            local_base_url=settings["local_base_url"],
+            local_model=settings["local_model"],
+            zai_model=settings["zai_model"],
+            anythingllm_base_url=settings["anythingllm_base_url"],
+            anythingllm_workspace_slug=settings["anythingllm_workspace_slug"],
+            tools=[ToolToggle(name=name, enabled=tool_records.get(name, enabled)) for name, enabled in DEFAULT_TOOLS.items()],
         )
 
     def update_runtime_settings(self, payload: RuntimeSettings) -> RuntimeSettings:
         self._save_setting("terminal_mode", str(payload.terminal_mode))
         self._save_setting("search_provider", payload.search_provider)
-        self._save_setting("ollama_base_url", payload.ollama_base_url)
-        self._save_setting("default_model", payload.default_model)
+        self._save_setting("model_provider", payload.model_provider)
+        self._save_setting("local_base_url", payload.local_base_url)
+        self._save_setting("local_model", payload.local_model)
+        self._save_setting("zai_model", payload.zai_model)
+        self._save_setting("anythingllm_base_url", payload.anythingllm_base_url)
+        self._save_setting("anythingllm_workspace_slug", payload.anythingllm_workspace_slug)
 
         for tool in payload.tools:
             record = self.session.get(ToolStateORM, tool.name)
@@ -92,6 +105,55 @@
         self.session.commit()
         return user
 
+    def list_user_profiles(self) -> list[UserProfileRecord]:
+        stmt = select(TelegramUserProfileORM).order_by(TelegramUserProfileORM.updated_at.desc())
+        profiles: list[UserProfileRecord] = []
+        for item in self.session.scalars(stmt):
+            profiles.append(
+                UserProfileRecord(
+                    telegram_user_id=item.telegram_user_id,
+                    display_name=item.display_name,
+                    bio=item.bio,
+                    occupation=item.occupation,
+                    primary_use_cases=self._decode_list(item.primary_use_cases),
+                    answer_priorities=self._decode_list(item.answer_priorities),
+                    tone_preference=item.tone_preference,
+                    response_length=item.response_length,
+                    language_preference=item.language_preference,
+                    workflow_preference=item.workflow_preference,
+                    interests=self._decode_list(item.interests),
+                    approval_preferences=self._decode_list(item.approval_preferences),
+                    avoid_preferences=item.avoid_preferences,
+                    onboarding_completed=item.onboarding_completed,
+                    last_onboarding_step=item.last_onboarding_step,
+                )
+            )
+        return profiles
+
+    def list_automations(self) -> list[AutomationRecord]:
+        stmt = select(AutomationORM).order_by(AutomationORM.created_at.desc(), AutomationORM.id.desc())
+        records: list[AutomationRecord] = []
+        for item in self.session.scalars(stmt):
+            records.append(
+                AutomationRecord(
+                    id=item.id,
+                    telegram_user_id=item.telegram_user_id,
+                    name=item.name,
+                    prompt=item.prompt,
+                    schedule_type=item.schedule_type,  # type: ignore[arg-type]
+                    interval_hours=item.interval_hours,
+                    time_of_day=item.time_of_day,
+                    days_of_week=self._decode_list(item.days_of_week),
+                    status=item.status,  # type: ignore[arg-type]
+                    last_run_at=item.last_run_at,
+                    next_run_at=item.next_run_at,
+                    last_result=item.last_result,
+                    created_at=item.created_at,
+                    updated_at=item.updated_at,
+                )
+            )
+        return records
+
     def list_memory(self) -> list[MemoryRecord]:
         stmt = select(MemoryItemORM).order_by(MemoryItemORM.created_at.desc(), MemoryItemORM.id.desc()).limit(50)
         return [
@@ -140,3 +202,14 @@
             if configured
             else "Telegram token is not configured.",
         )
+
+    def _decode_list(self, value: str) -> list[str]:
+        import json
+
+        try:
+            payload = json.loads(value)
+        except json.JSONDecodeError:
+            return []
+        if not isinstance(payload, list):
+            return []
+        return [str(item).strip() for item in payload if str(item).strip()]
```


@@ -0,0 +1,73 @@
```python
import asyncio
from contextlib import suppress
from typing import Callable

from app.automation.store import AutomationService
from app.db import AutomationORM, session_scope
from app.orchestrator import WiseClawOrchestrator
from app.telegram.bot import TelegramBotService


class AutomationScheduler:
    def __init__(self, orchestrator_factory: Callable[[], object], telegram_bot: TelegramBotService) -> None:
        self.orchestrator_factory = orchestrator_factory
        self.telegram_bot = telegram_bot
        self._task: asyncio.Task[None] | None = None
        self._running = False

    async def start(self) -> None:
        if self._task is not None:
            return
        self._running = True
        self._task = asyncio.create_task(self._loop())

    async def stop(self) -> None:
        self._running = False
        if self._task is not None:
            self._task.cancel()
            with suppress(asyncio.CancelledError):
                await self._task
            self._task = None

    async def _loop(self) -> None:
        while self._running:
            try:
                await self._tick()
            except Exception:
                pass  # keep the loop alive; per-run errors are recorded in _run_automation
            await asyncio.sleep(30)

    async def _tick(self) -> None:
        with session_scope() as session:
            service = AutomationService(session)
            due_items = service.due_automations()
            due_ids = [item.id for item in due_items]
        for automation_id in due_ids:
            await self._run_automation(automation_id)

    async def _run_automation(self, automation_id: int) -> None:
        with session_scope() as session:
            service = AutomationService(session)
            item = session.get(AutomationORM, automation_id)
            if item is None or item.status != "active":
                return
            prompt = item.prompt
            user_id = item.telegram_user_id
        try:
            with self.orchestrator_factory() as session:
                orchestrator = WiseClawOrchestrator(session)
                result = await orchestrator.handle_text_message(user_id, prompt)
            await self.telegram_bot.send_message(user_id, f"⏰ Otomasyon sonucu: {result}")
            with session_scope() as session:
                service = AutomationService(session)
                item = session.get(AutomationORM, automation_id)
                if item is not None:
                    service.mark_run_result(item, result)
        except Exception as exc:
            with session_scope() as session:
                service = AutomationService(session)
                item = session.get(AutomationORM, automation_id)
                if item is not None:
                    service.mark_run_error(item, str(exc))
```
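The scheduler's start/stop contract (idempotent start, cancel-and-await stop) can be exercised standalone. `MiniScheduler` below is a toy stand-in with the same lifecycle, with the 30-second polling sleep shortened so it runs instantly:

```python
import asyncio
from contextlib import suppress


class MiniScheduler:
    """Toy stand-in mirroring AutomationScheduler's start/stop contract."""

    def __init__(self) -> None:
        self._task: asyncio.Task[None] | None = None
        self._running = False
        self.ticks = 0

    async def start(self) -> None:
        if self._task is not None:
            return  # idempotent: a second start is a no-op
        self._running = True
        self._task = asyncio.create_task(self._loop())

    async def stop(self) -> None:
        self._running = False
        if self._task is not None:
            self._task.cancel()
            with suppress(asyncio.CancelledError):
                await self._task
            self._task = None

    async def _loop(self) -> None:
        while self._running:
            self.ticks += 1
            await asyncio.sleep(0.01)  # the real scheduler sleeps 30s between ticks


async def main() -> None:
    scheduler = MiniScheduler()
    await scheduler.start()
    await asyncio.sleep(0.05)
    await scheduler.stop()
    print(scheduler.ticks > 0)  # True


asyncio.run(main())
```

In the real app this pair would typically be driven from the FastAPI lifespan hooks, so the polling task dies cleanly with the process.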


@@ -0,0 +1,455 @@
```python
import json
from datetime import UTC, datetime, timedelta
from zoneinfo import ZoneInfo

from sqlalchemy import select
from sqlalchemy.orm import Session

from app.db import AuditLogORM, AutomationORM, AutomationWizardORM
from app.models import AutomationRecord

LOCAL_TZ = ZoneInfo("Europe/Istanbul")

WEEKDAY_MAP = {
    "pzt": 0,
    "pazartesi": 0,
    "sal": 1,
    "sali": 1,
    "çar": 2,
    "cars": 2,
    "çarşamba": 2,
    "carsamba": 2,
    "per": 3,
    "persembe": 3,
    "perşembe": 3,
    "cum": 4,
    "cuma": 4,
    "cts": 5,
    "cumartesi": 5,
    "paz": 6,
    "pazar": 6,
}

WEEKDAY_NAMES = ["Pzt", "Sal", "Cars", "Per", "Cum", "Cts", "Paz"]


class AutomationService:
    def __init__(self, session: Session) -> None:
        self.session = session

    def list_automations(self, telegram_user_id: int | None = None) -> list[AutomationRecord]:
        stmt = select(AutomationORM).order_by(AutomationORM.created_at.desc(), AutomationORM.id.desc())
        if telegram_user_id is not None:
            stmt = stmt.where(AutomationORM.telegram_user_id == telegram_user_id)
        return [self._to_record(item) for item in self.session.scalars(stmt)]

    def start_wizard(self, telegram_user_id: int) -> str:
        record = self._get_or_create_wizard(telegram_user_id)
        record.step = 0
        record.draft_json = "{}"
        record.updated_at = datetime.utcnow()
        self.session.add(AuditLogORM(category="automation", message=f"automation:wizard-start:{telegram_user_id}"))
        self.session.flush()
        return (
            "Yeni otomasyon olusturalim. Istersen herhangi bir adimda /iptal yazabilirsin.\n\n"
            "1/6 Otomasyon adi ne olsun?"
        )

    def is_wizard_active(self, telegram_user_id: int) -> bool:
        wizard = self.session.get(AutomationWizardORM, telegram_user_id)
        return wizard is not None and wizard.step < 6

    def cancel_wizard(self, telegram_user_id: int) -> str:
        wizard = self.session.get(AutomationWizardORM, telegram_user_id)
        if wizard is not None:
            self.session.delete(wizard)
            self.session.add(AuditLogORM(category="automation", message=f"automation:wizard-cancel:{telegram_user_id}"))
            self.session.flush()
        return "Otomasyon olusturma akisini iptal ettim."

    def answer_wizard(self, telegram_user_id: int, text: str) -> tuple[str, bool]:
        wizard = self._get_or_create_wizard(telegram_user_id)
        draft = self._load_draft(wizard)
        cleaned = text.strip()

        if wizard.step == 0:
            draft["name"] = cleaned
            wizard.step = 1
            return self._persist_wizard(wizard, draft, "2/6 Bu otomasyon ne yapsin?")

        if wizard.step == 1:
            draft["prompt"] = cleaned
            wizard.step = 2
            return self._persist_wizard(
                wizard,
                draft,
                "3/6 Hangi siklikla calissin? Su seceneklerden birini yaz: gunluk, haftaici, haftalik, saatlik",
            )

        if wizard.step == 2:
            schedule_type = self._parse_schedule_type(cleaned)
            if schedule_type is None:
                return ("Gecerli bir secim gormedim. Lutfen gunluk, haftaici, haftalik veya saatlik yaz.", False)
            draft["schedule_type"] = schedule_type
            wizard.step = 3
            if schedule_type == "hourly":
                prompt = "4/6 Kac saatte bir calissin? Ornek: 1, 2, 4, 6"
            elif schedule_type == "weekly":
                prompt = "4/6 Hangi gunlerde calissin? Ornek: Pzt,Cars,Cum"
            else:
                prompt = "4/6 Saat kacta calissin? 24 saat formatinda yaz. Ornek: 09:00"
            return self._persist_wizard(wizard, draft, prompt)

        if wizard.step == 3:
            schedule_type = str(draft.get("schedule_type", "daily"))
            if schedule_type == "hourly":
                interval_hours = self._parse_interval_hours(cleaned)
                if interval_hours is None:
                    return ("Gecerli bir saat araligi gormedim. Lutfen 1 ile 24 arasinda bir sayi yaz.", False)
                draft["interval_hours"] = interval_hours
                wizard.step = 4
                return self._persist_wizard(wizard, draft, "5/6 Aktif olarak kaydedeyim mi? evet/hayir")
            if schedule_type == "weekly":
                weekdays = self._parse_weekdays(cleaned)
                if not weekdays:
                    return ("Gunleri anlayamadim. Ornek olarak Pzt,Cars,Cum yazabilirsin.", False)
                draft["days_of_week"] = weekdays
                wizard.step = 4
                return self._persist_wizard(wizard, draft, "5/6 Saat kacta calissin? 24 saat formatinda yaz. Ornek: 09:00")
            time_of_day = self._parse_time(cleaned)
            if time_of_day is None:
                return ("Saat formatini anlayamadim. Lutfen 24 saat formatinda HH:MM yaz.", False)
            draft["time_of_day"] = time_of_day
            wizard.step = 4
            return self._persist_wizard(wizard, draft, "5/6 Aktif olarak kaydedeyim mi? evet/hayir")

        if wizard.step == 4:
            schedule_type = str(draft.get("schedule_type", "daily"))
            if schedule_type == "weekly" and "time_of_day" not in draft:
                time_of_day = self._parse_time(cleaned)
                if time_of_day is None:
                    return ("Saat formatini anlayamadim. Lutfen 24 saat formatinda HH:MM yaz.", False)
                draft["time_of_day"] = time_of_day
                wizard.step = 5
                summary = self._render_wizard_summary(draft)
                return self._persist_wizard(wizard, draft, f"{summary}\n\n6/6 Aktif olarak kaydedeyim mi? evet/hayir")
            active = self._parse_yes_no(cleaned)
            if active is None:
                return ("Lutfen evet veya hayir yaz.", False)
            draft["status"] = "active" if active else "paused"
            created = self._create_automation(telegram_user_id, draft)
            self.session.delete(wizard)
            self.session.add(AuditLogORM(category="automation", message=f"automation:created:{created.id}"))
            self.session.flush()
            return (self._render_created_message(created), True)

        if wizard.step == 5:
            active = self._parse_yes_no(cleaned)
            if active is None:
                return ("Lutfen evet veya hayir yaz.", False)
            draft["status"] = "active" if active else "paused"
            created = self._create_automation(telegram_user_id, draft)
            self.session.delete(wizard)
            self.session.add(AuditLogORM(category="automation", message=f"automation:created:{created.id}"))
            self.session.flush()
            return (self._render_created_message(created), True)

        return ("Otomasyon wizard durumu gecersiz.", False)

    def render_automation_list(self, telegram_user_id: int) -> str:
        automations = self.list_automations(telegram_user_id)
        if not automations:
            return "Henuz otomasyonun yok. /otomasyon_ekle ile baslayabiliriz."
        lines = ["Otomasyonlarin:"]
        for item in automations:
            next_run = self._format_display_time(item.next_run_at)
            lines.append(f"- #{item.id} {item.name} [{item.status}] -> siradaki: {next_run}")
        return "\n".join(lines)

    def pause_automation(self, telegram_user_id: int, automation_id: int) -> str:
        item = self._get_owned_automation(telegram_user_id, automation_id)
        if item is None:
            return "Bu ID ile bir otomasyon bulamadim."
        item.status = "paused"
        item.updated_at = datetime.utcnow()
        self.session.add(AuditLogORM(category="automation", message=f"automation:paused:{item.id}"))
        self.session.flush()
        return f"Otomasyon durduruldu: #{item.id} {item.name}"

    def resume_automation(self, telegram_user_id: int, automation_id: int) -> str:
        item = self._get_owned_automation(telegram_user_id, automation_id)
        if item is None:
            return "Bu ID ile bir otomasyon bulamadim."
        item.status = "active"
        item.next_run_at = self._compute_next_run(item, from_time=datetime.utcnow())
        item.updated_at = datetime.utcnow()
        self.session.add(AuditLogORM(category="automation", message=f"automation:resumed:{item.id}"))
        self.session.flush()
        return f"Otomasyon tekrar aktif edildi: #{item.id} {item.name}"

    def delete_automation(self, telegram_user_id: int, automation_id: int) -> str:
        item = self._get_owned_automation(telegram_user_id, automation_id)
        if item is None:
            return "Bu ID ile bir otomasyon bulamadim."
        name = item.name
        self.session.delete(item)
        self.session.add(AuditLogORM(category="automation", message=f"automation:deleted:{automation_id}"))
        self.session.flush()
        return f"Otomasyon silindi: #{automation_id} {name}"

    def due_automations(self, now: datetime | None = None) -> list[AutomationORM]:
        current = now or datetime.utcnow()
        stmt = (
            select(AutomationORM)
            .where(AutomationORM.status == "active")
            .where(AutomationORM.next_run_at.is_not(None))
            .where(AutomationORM.next_run_at <= current)
            .order_by(AutomationORM.next_run_at.asc(), AutomationORM.id.asc())
        )
        return list(self.session.scalars(stmt))

    def mark_run_result(self, item: AutomationORM, result: str, ran_at: datetime | None = None) -> None:
        run_time = ran_at or datetime.utcnow()
        item.last_run_at = run_time
        item.last_result = result[:2000]
        item.next_run_at = self._compute_next_run(item, from_time=run_time + timedelta(seconds=1))
        item.updated_at = datetime.utcnow()
        self.session.add(AuditLogORM(category="automation", message=f"automation:ran:{item.id}"))
        self.session.flush()

    def mark_run_error(self, item: AutomationORM, error: str) -> None:
        item.last_result = f"ERROR: {error[:1800]}"
        item.next_run_at = self._compute_next_run(item, from_time=datetime.utcnow() + timedelta(minutes=5))
        item.updated_at = datetime.utcnow()
        self.session.add(AuditLogORM(category="automation", message=f"automation:error:{item.id}:{error[:120]}"))
        self.session.flush()

    def _persist_wizard(self, wizard: AutomationWizardORM, draft: dict[str, object], reply: str) -> tuple[str, bool]:
        wizard.draft_json = json.dumps(draft, ensure_ascii=False)
        wizard.updated_at = datetime.utcnow()
        self.session.flush()
        return reply, False

    def _get_or_create_wizard(self, telegram_user_id: int) -> AutomationWizardORM:
        wizard = self.session.get(AutomationWizardORM, telegram_user_id)
        if wizard is None:
            wizard = AutomationWizardORM(
                telegram_user_id=telegram_user_id,
                step=0,
                draft_json="{}",
                created_at=datetime.utcnow(),
                updated_at=datetime.utcnow(),
            )
            self.session.add(wizard)
            self.session.flush()
        return wizard

    def _load_draft(self, wizard: AutomationWizardORM) -> dict[str, object]:
        try:
            payload = json.loads(wizard.draft_json)
        except json.JSONDecodeError:
            return {}
        return payload if isinstance(payload, dict) else {}

    def _parse_schedule_type(self, text: str) -> str | None:
        lowered = text.strip().lower()
        mapping = {
            "gunluk": "daily",
            "daily": "daily",
            "her gun": "daily",
            "haftaici": "weekdays",
            "hafta içi": "weekdays",
            "weekdays": "weekdays",
            "haftalik": "weekly",
            "haftalık": "weekly",
            "weekly": "weekly",
            "saatlik": "hourly",
            "hourly": "hourly",
        }
        return mapping.get(lowered)

    def _parse_interval_hours(self, text: str) -> int | None:
        try:
            value = int(text.strip())
        except ValueError:
            return None
        if 1 <= value <= 24:
            return value
        return None

    def _parse_time(self, text: str) -> str | None:
        cleaned = text.strip()
        if len(cleaned) != 5 or ":" not in cleaned:
            return None
        hour_text, minute_text = cleaned.split(":", 1)
        try:
            hour = int(hour_text)
            minute = int(minute_text)
        except ValueError:
            return None
        if not (0 <= hour <= 23 and 0 <= minute <= 59):
            return None
        return f"{hour:02d}:{minute:02d}"

    def _parse_weekdays(self, text: str) -> list[str]:
        parts = [part.strip().lower() for part in text.replace("\n", ",").split(",")]
        seen: list[int] = []
        for part in parts:
            day = WEEKDAY_MAP.get(part)
            if day is not None and day not in seen:
                seen.append(day)
        return [WEEKDAY_NAMES[day] for day in sorted(seen)]

    def _parse_yes_no(self, text: str) -> bool | None:
        lowered = text.strip().lower()
        if lowered in {"evet", "e", "yes", "y"}:
            return True
        if lowered in {"hayir", "hayır", "h", "no", "n"}:
            return False
        return None

    def _render_wizard_summary(self, draft: dict[str, object]) -> str:
        schedule_type = str(draft.get("schedule_type", "daily"))
        label = {
            "daily": "gunluk",
            "weekdays": "haftaici",
            "weekly": "haftalik",
            "hourly": "saatlik",
        }.get(schedule_type, schedule_type)
        lines = [
            "Ozet:",
            f"- Ad: {draft.get('name', '-')}",
            f"- Gorev: {draft.get('prompt', '-')}",
            f"- Siklik: {label}",
        ]
        if schedule_type == "hourly":
            lines.append(f"- Aralik: {draft.get('interval_hours', '-')} saat")
        else:
            lines.append(f"- Saat: {draft.get('time_of_day', '-')}")
        if schedule_type == "weekly":
            days = draft.get("days_of_week", [])
            if isinstance(days, list):
                lines.append(f"- Gunler: {', '.join(str(item) for item in days)}")
        return "\n".join(lines)

    def _render_created_message(self, item: AutomationORM) -> str:
        next_run = self._format_display_time(item.next_run_at)
        return (
            f"Otomasyon kaydedildi: #{item.id} {item.name}\n"
            f"- Durum: {item.status}\n"
            f"- Siradaki calisma: {next_run}"
        )

    def _create_automation(self, telegram_user_id: int, draft: dict[str, object]) -> AutomationORM:
        schedule_type = str(draft["schedule_type"])
        item = AutomationORM(
            telegram_user_id=telegram_user_id,
            name=str(draft["name"]),
            prompt=str(draft["prompt"]),
            schedule_type=schedule_type,
            interval_hours=int(draft["interval_hours"]) if draft.get("interval_hours") is not None else None,
            time_of_day=str(draft["time_of_day"]) if draft.get("time_of_day") is not None else None,
            days_of_week=json.dumps(draft.get("days_of_week", []), ensure_ascii=False),
            status=str(draft.get("status", "active")),
            created_at=datetime.utcnow(),
            updated_at=datetime.utcnow(),
        )
        if item.status == "active":
            item.next_run_at = self._compute_next_run(item, from_time=datetime.utcnow())
        self.session.add(item)
        self.session.flush()
        return item

    def _compute_next_run(self, item: AutomationORM, from_time: datetime) -> datetime:
        if item.schedule_type == "hourly":
            interval = max(item.interval_hours or 1, 1)
            return from_time + timedelta(hours=interval)
        local_now = from_time.replace(tzinfo=UTC).astimezone(LOCAL_TZ)
        hour, minute = self._parse_hour_minute(item.time_of_day or "09:00")
        if item.schedule_type == "daily":
            return self._to_utc_naive(self._next_local_time(local_now, hour, minute))
        if item.schedule_type == "weekdays":
            candidate = self._next_local_time(local_now, hour, minute)
            while candidate.weekday() >= 5:
                candidate = candidate + timedelta(days=1)
                candidate = candidate.replace(hour=hour, minute=minute, second=0, microsecond=0)
            return self._to_utc_naive(candidate)
        days = self._decode_days(item.days_of_week)
        if not days:
            days = [0]
        candidate = self._next_local_time(local_now, hour, minute)
        for _ in range(8):
            if candidate.weekday() in days:
                return self._to_utc_naive(candidate)
            candidate = candidate + timedelta(days=1)
            candidate = candidate.replace(hour=hour, minute=minute, second=0, microsecond=0)
        return self._to_utc_naive(candidate)

    def _next_local_time(self, local_now: datetime, hour: int, minute: int) -> datetime:
        candidate = local_now.replace(hour=hour, minute=minute, second=0, microsecond=0)
        if candidate <= local_now:
            candidate = candidate + timedelta(days=1)
        return candidate

    def _parse_hour_minute(self, value: str) -> tuple[int, int]:
        hour_text, minute_text = value.split(":", 1)
        return int(hour_text), int(minute_text)

    def _decode_days(self, value: str) -> list[int]:
        try:
            payload = json.loads(value)
        except json.JSONDecodeError:
            return []
        result: list[int] = []
        if not isinstance(payload, list):
            return result
        for item in payload:
            label = str(item)
            if label in WEEKDAY_NAMES:
                result.append(WEEKDAY_NAMES.index(label))
        return result

    def _to_utc_naive(self, local_dt: datetime) -> datetime:
        return local_dt.astimezone(UTC).replace(tzinfo=None)

    def _format_display_time(self, value: datetime | None) -> str:
        if value is None:
            return "hesaplanmadi"
        return value.replace(tzinfo=UTC).astimezone(LOCAL_TZ).strftime("%Y-%m-%d %H:%M")

    def _to_record(self, item: AutomationORM) -> AutomationRecord:
```
days = []
try:
payload = json.loads(item.days_of_week)
if isinstance(payload, list):
days = [str(day) for day in payload]
except json.JSONDecodeError:
days = []
return AutomationRecord(
id=item.id,
telegram_user_id=item.telegram_user_id,
name=item.name,
prompt=item.prompt,
schedule_type=item.schedule_type, # type: ignore[arg-type]
interval_hours=item.interval_hours,
time_of_day=item.time_of_day,
days_of_week=days,
status=item.status, # type: ignore[arg-type]
last_run_at=item.last_run_at,
next_run_at=item.next_run_at,
last_result=item.last_result,
created_at=item.created_at,
updated_at=item.updated_at,
)
def _get_owned_automation(self, telegram_user_id: int, automation_id: int) -> AutomationORM | None:
item = self.session.get(AutomationORM, automation_id)
if item is None or item.telegram_user_id != telegram_user_id:
return None
return item
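The "daily" branch of `_compute_next_run` above can be restated as a standalone sketch. Note this is an illustration only: `LOCAL_TZ` is stood in by a fixed UTC+3 offset here, whereas the real module presumably uses its configured zone.

```python
from datetime import datetime, timedelta, timezone

def next_daily_run(now_utc: datetime, hour: int, minute: int, tz: timezone) -> datetime:
    # Convert naive-UTC "now" to local time, pick today's HH:MM, roll one day
    # forward if that moment already passed, then return naive UTC again.
    local_now = now_utc.replace(tzinfo=timezone.utc).astimezone(tz)
    candidate = local_now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= local_now:
        candidate += timedelta(days=1)
    return candidate.astimezone(timezone.utc).replace(tzinfo=None)

tz = timezone(timedelta(hours=3))  # stand-in for LOCAL_TZ (assumption)
print(next_daily_run(datetime(2024, 5, 1, 12, 0), 9, 0, tz))  # 2024-05-02 06:00:00
```

The round trip through local time is what makes a "09:00" automation fire at the same wall-clock hour even though `next_run_at` is stored as naive UTC.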

View File

@@ -15,14 +15,20 @@ class Settings(BaseSettings):
     db_url: str = "sqlite:///./wiseclaw.db"
     admin_host: str = "127.0.0.1"
     admin_port: int = 8000
-    ollama_base_url: str = "http://127.0.0.1:11434"
-    default_model: str = "qwen3.5:4b"
+    model_provider: str = "local"
+    local_base_url: str = "http://127.0.0.1:1234"
+    local_model: str = "qwen3-vl-8b-instruct-mlx@5bit"
+    zai_base_url: str = "https://api.z.ai/api/anthropic"
+    zai_model: str = "glm-5"
+    anythingllm_base_url: str = "http://127.0.0.1:3001"
+    anythingllm_workspace_slug: str = "wiseclaw"
     search_provider: str = "brave"
     telegram_bot_token: str = Field(default="", repr=False)
     brave_api_key: str = Field(default="", repr=False)
+    zai_api_key: str = Field(default="", repr=False)
+    anythingllm_api_key: str = Field(default="", repr=False)

 @lru_cache
 def get_settings() -> Settings:
     return Settings()

View File

@@ -8,15 +8,10 @@ from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column, sess
 from app.config import get_settings

-DEFAULT_SETTINGS = {
-    "terminal_mode": "3",
-    "search_provider": "brave",
-    "ollama_base_url": "http://127.0.0.1:11434",
-    "default_model": "qwen3.5:4b",
-}
 DEFAULT_TOOLS = {
     "brave_search": True,
+    "second_brain": True,
+    "browser_use": True,
     "searxng_search": False,
     "web_fetch": True,
     "apple_notes": True,
@@ -82,7 +77,88 @@ class SecretORM(Base):
     updated_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)

+
+class TelegramUserProfileORM(Base):
+    __tablename__ = "telegram_user_profiles"
+
+    telegram_user_id: Mapped[int] = mapped_column(Integer, primary_key=True)
+    display_name: Mapped[str | None] = mapped_column(String(255))
+    bio: Mapped[str | None] = mapped_column(Text)
+    occupation: Mapped[str | None] = mapped_column(String(255))
+    primary_use_cases: Mapped[str] = mapped_column(Text, nullable=False, default="[]")
+    answer_priorities: Mapped[str] = mapped_column(Text, nullable=False, default="[]")
+    tone_preference: Mapped[str | None] = mapped_column(String(100))
+    response_length: Mapped[str | None] = mapped_column(String(50))
+    language_preference: Mapped[str | None] = mapped_column(String(100))
+    workflow_preference: Mapped[str | None] = mapped_column(String(100))
+    interests: Mapped[str] = mapped_column(Text, nullable=False, default="[]")
+    approval_preferences: Mapped[str] = mapped_column(Text, nullable=False, default="[]")
+    avoid_preferences: Mapped[str | None] = mapped_column(Text)
+    onboarding_completed: Mapped[bool] = mapped_column(Boolean, nullable=False, default=False)
+    last_onboarding_step: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
+    created_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+    updated_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+
+
+class AutomationORM(Base):
+    __tablename__ = "automations"
+
+    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
+    telegram_user_id: Mapped[int] = mapped_column(Integer, nullable=False, index=True)
+    name: Mapped[str] = mapped_column(String(255), nullable=False)
+    prompt: Mapped[str] = mapped_column(Text, nullable=False)
+    schedule_type: Mapped[str] = mapped_column(String(50), nullable=False)
+    interval_hours: Mapped[int | None] = mapped_column(Integer)
+    time_of_day: Mapped[str | None] = mapped_column(String(20))
+    days_of_week: Mapped[str] = mapped_column(Text, nullable=False, default="[]")
+    status: Mapped[str] = mapped_column(String(20), nullable=False, default="active")
+    last_run_at: Mapped[datetime | None] = mapped_column(DateTime)
+    next_run_at: Mapped[datetime | None] = mapped_column(DateTime)
+    last_result: Mapped[str | None] = mapped_column(Text)
+    created_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+    updated_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+
+
+class AutomationWizardORM(Base):
+    __tablename__ = "automation_wizards"
+
+    telegram_user_id: Mapped[int] = mapped_column(Integer, primary_key=True)
+    step: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
+    draft_json: Mapped[str] = mapped_column(Text, nullable=False, default="{}")
+    created_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+    updated_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+
+
+class SecondBrainNoteORM(Base):
+    __tablename__ = "second_brain_notes"
+
+    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
+    telegram_user_id: Mapped[int] = mapped_column(Integer, nullable=False, index=True)
+    content: Mapped[str] = mapped_column(Text, nullable=False)
+    source: Mapped[str] = mapped_column(String(50), nullable=False, default="telegram")
+    created_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+    updated_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+
+
+class SecondBrainCaptureORM(Base):
+    __tablename__ = "second_brain_captures"
+
+    telegram_user_id: Mapped[int] = mapped_column(Integer, primary_key=True)
+    created_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+    updated_at: Mapped[datetime] = mapped_column(DateTime, default=datetime.utcnow, nullable=False)
+
+
 settings = get_settings()
+
+DEFAULT_SETTINGS = {
+    "terminal_mode": "3",
+    "search_provider": settings.search_provider,
+    "model_provider": settings.model_provider,
+    "local_base_url": settings.local_base_url,
+    "local_model": settings.local_model,
+    "zai_model": settings.zai_model,
+    "anythingllm_base_url": settings.anythingllm_base_url,
+    "anythingllm_workspace_slug": settings.anythingllm_workspace_slug,
+}
 engine = create_engine(
     settings.db_url,
     connect_args={"check_same_thread": False} if settings.db_url.startswith("sqlite") else {},

@@ -130,4 +206,3 @@
 def list_recent_logs(session: Session, limit: int = 10) -> list[str]:
     stmt = select(AuditLogORM).order_by(AuditLogORM.created_at.desc(), AuditLogORM.id.desc()).limit(limit)
     return [row.message for row in session.scalars(stmt)]
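The `connect_args` branch in the engine setup above can be isolated as a one-liner (a sketch only; the real code passes the result straight into `create_engine`). SQLite connections refuse cross-thread use by default, which conflicts with FastAPI's thread pool, hence the override for sqlite URLs:

```python
def connect_args_for(db_url: str) -> dict:
    # SQLite enforces single-thread use by default; FastAPI may touch the
    # session from worker threads, so the check is disabled for sqlite URLs.
    # Other backends need no extra connect arguments.
    return {"check_same_thread": False} if db_url.startswith("sqlite") else {}

print(connect_args_for("sqlite:///./wiseclaw.db"))   # {'check_same_thread': False}
print(connect_args_for("postgresql://localhost/x"))  # {}
```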

View File

@@ -1,37 +1,323 @@
-import httpx
-from httpx import HTTPError
+import asyncio
+import json
+from typing import Any
+
+import httpx
+from httpx import HTTPError, HTTPStatusError, ReadTimeout

-from app.models import OllamaStatus
+from app.models import ModelProvider, OllamaStatus

 class OllamaClient:
-    def __init__(self, base_url: str) -> None:
+    def __init__(self, base_url: str, provider: ModelProvider = "local", api_key: str = "") -> None:
         self.base_url = base_url.rstrip("/")
+        self.provider = provider
+        self.api_key = api_key

     async def health(self) -> bool:
-        async with httpx.AsyncClient(timeout=5.0) as client:
-            response = await client.get(f"{self.base_url}/api/tags")
-            return response.is_success
+        try:
+            await self._fetch_models()
+        except HTTPError:
+            return False
+        return True

     async def status(self, model: str) -> OllamaStatus:
+        if self.provider == "zai" and not self.api_key.strip():
+            return OllamaStatus(
+                reachable=False,
+                provider=self.provider,
+                base_url=self.base_url,
+                model=model,
+                message="Z.AI API key is not configured.",
+            )
         try:
-            async with httpx.AsyncClient(timeout=5.0) as client:
-                response = await client.get(f"{self.base_url}/api/tags")
-                response.raise_for_status()
+            installed_models = await self._fetch_models()
         except HTTPError as exc:
             return OllamaStatus(
                 reachable=False,
+                provider=self.provider,
                 base_url=self.base_url,
                 model=model,
-                message=f"Ollama unreachable: {exc}",
+                message=f"LLM endpoint unreachable: {exc}",
             )
-        payload = response.json()
-        installed_models = [item.get("name", "") for item in payload.get("models", []) if item.get("name")]
         has_model = model in installed_models
         return OllamaStatus(
             reachable=True,
+            provider=self.provider,
             base_url=self.base_url,
             model=model,
             installed_models=installed_models,
-            message="Model found." if has_model else "Ollama reachable but model is not installed.",
+            message="Model found." if has_model else "LLM endpoint reachable but model is not installed.",
         )
async def chat(self, model: str, system_prompt: str, user_message: str) -> str:
result = await self.chat_completion(
model=model,
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_message},
],
)
if result["tool_calls"]:
raise HTTPError("Chat completion requested tools in plain chat mode.")
payload = result["content"].strip()
if not payload:
raise HTTPError("Chat completion returned empty content.")
return payload
async def chat_completion(
self,
model: str,
messages: list[dict[str, object]],
tools: list[dict[str, Any]] | None = None,
tool_choice: str | dict[str, Any] | None = None,
) -> dict[str, Any]:
self._ensure_provider_ready()
if self.provider == "zai":
return await self._anthropic_chat_completion(model, messages, tools)
payload: dict[str, Any] = {
"model": model,
"messages": messages,
"temperature": 0.3,
}
if tools:
payload["tools"] = tools
payload["tool_choice"] = tool_choice or "auto"
endpoint = f"{self.base_url}/chat/completions" if self.provider == "zai" else f"{self.base_url}/v1/chat/completions"
try:
async with httpx.AsyncClient(timeout=180.0) as client:
response = await self._post_with_retry(client, endpoint, payload)
except ReadTimeout as exc:
raise HTTPError("LLM request timed out after 180 seconds.") from exc
data = response.json()
choices = data.get("choices", [])
if not choices:
raise HTTPError("Chat completion returned no choices.")
message = choices[0].get("message", {})
content = message.get("content", "")
if isinstance(content, list):
text_parts = [part.get("text", "") for part in content if isinstance(part, dict)]
content = "".join(text_parts)
tool_calls = []
for call in message.get("tool_calls", []) or []:
function = call.get("function", {})
raw_arguments = function.get("arguments", "{}")
try:
arguments = json.loads(raw_arguments) if isinstance(raw_arguments, str) else raw_arguments
except json.JSONDecodeError:
arguments = {"raw": raw_arguments}
tool_calls.append(
{
"id": call.get("id", ""),
"name": function.get("name", ""),
"arguments": arguments,
}
)
return {
"content": str(content or ""),
"tool_calls": tool_calls,
"message": message,
}
async def _anthropic_chat_completion(
self,
model: str,
messages: list[dict[str, object]],
tools: list[dict[str, Any]] | None = None,
) -> dict[str, Any]:
system_prompt, anthropic_messages = self._to_anthropic_messages(messages)
payload: dict[str, Any] = {
"model": model,
"max_tokens": 2048,
"messages": anthropic_messages,
}
if system_prompt:
payload["system"] = system_prompt
anthropic_tools = self._to_anthropic_tools(tools or [])
if anthropic_tools:
payload["tools"] = anthropic_tools
try:
async with httpx.AsyncClient(timeout=180.0) as client:
response = await self._post_with_retry(client, f"{self.base_url}/v1/messages", payload)
except ReadTimeout as exc:
raise HTTPError("LLM request timed out after 180 seconds.") from exc
data = response.json()
blocks = data.get("content", []) or []
text_parts: list[str] = []
tool_calls: list[dict[str, Any]] = []
for block in blocks:
if not isinstance(block, dict):
continue
block_type = block.get("type")
if block_type == "text":
text_parts.append(str(block.get("text", "")))
if block_type == "tool_use":
tool_calls.append(
{
"id": str(block.get("id", "")),
"name": str(block.get("name", "")),
"arguments": block.get("input", {}) if isinstance(block.get("input"), dict) else {},
}
)
return {
"content": "".join(text_parts).strip(),
"tool_calls": tool_calls,
"message": data,
}
async def _fetch_models(self) -> list[str]:
self._ensure_provider_ready()
async with httpx.AsyncClient(timeout=5.0) as client:
if self.provider == "zai":
response = await client.get(f"{self.base_url}/v1/models", headers=self._headers())
if response.is_success:
payload = response.json()
return [item.get("id", "") for item in payload.get("data", []) if item.get("id")]
return ["glm-4.7", "glm-5"]
response = await client.get(f"{self.base_url}/api/tags")
if response.is_success:
payload = response.json()
if isinstance(payload, dict) and "models" in payload:
return [item.get("name", "") for item in payload.get("models", []) if item.get("name")]
response = await client.get(f"{self.base_url}/v1/models")
response.raise_for_status()
payload = response.json()
return [item.get("id", "") for item in payload.get("data", []) if item.get("id")]
def _headers(self) -> dict[str, str]:
if self.provider != "zai":
return {}
return {
"x-api-key": self.api_key,
"anthropic-version": "2023-06-01",
"content-type": "application/json",
}
def _ensure_provider_ready(self) -> None:
if self.provider == "zai" and not self.api_key.strip():
raise HTTPError("Z.AI API key is not configured.")
async def _post_with_retry(
self,
client: httpx.AsyncClient,
endpoint: str,
payload: dict[str, Any],
) -> httpx.Response:
delays = [0.0, 1.5, 4.0]
last_exc: HTTPStatusError | None = None
for attempt, delay in enumerate(delays, start=1):
if delay > 0:
await asyncio.sleep(delay)
response = await client.post(endpoint, json=payload, headers=self._headers())
try:
response.raise_for_status()
return response
except HTTPStatusError as exc:
last_exc = exc
if response.status_code != 429 or attempt == len(delays):
raise self._translate_status_error(exc) from exc
if last_exc is not None:
raise self._translate_status_error(last_exc) from last_exc
raise HTTPError("LLM request failed.")
def _translate_status_error(self, exc: HTTPStatusError) -> HTTPError:
status = exc.response.status_code
if status == 429:
provider = "Z.AI" if self.provider == "zai" else "LLM endpoint"
return HTTPError(f"{provider} rate limit reached. Please wait a bit and try again.")
if status == 401:
provider = "Z.AI" if self.provider == "zai" else "LLM endpoint"
return HTTPError(f"{provider} authentication failed. Check the configured API key.")
if status == 404:
return HTTPError("Configured LLM endpoint path was not found.")
return HTTPError(f"LLM request failed with HTTP {status}.")
def _to_anthropic_tools(self, tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
anthropic_tools: list[dict[str, Any]] = []
for tool in tools:
function = tool.get("function", {}) if isinstance(tool, dict) else {}
if not isinstance(function, dict):
continue
anthropic_tools.append(
{
"name": str(function.get("name", "")),
"description": str(function.get("description", "")),
"input_schema": function.get("parameters", {"type": "object", "properties": {}}),
}
)
return [tool for tool in anthropic_tools if tool["name"]]
def _to_anthropic_messages(self, messages: list[dict[str, object]]) -> tuple[str, list[dict[str, object]]]:
system_parts: list[str] = []
anthropic_messages: list[dict[str, object]] = []
for message in messages:
role = str(message.get("role", "user"))
if role == "system":
content = str(message.get("content", "")).strip()
if content:
system_parts.append(content)
continue
if role == "tool":
content = str(message.get("content", ""))
tool_use_id = str(message.get("tool_call_id", ""))
tool_result_block = {
"type": "tool_result",
"tool_use_id": tool_use_id,
"content": content,
}
if anthropic_messages and anthropic_messages[-1]["role"] == "user":
existing = anthropic_messages[-1]["content"]
if isinstance(existing, list):
existing.append(tool_result_block)
continue
anthropic_messages.append({"role": "user", "content": [tool_result_block]})
continue
content_blocks: list[dict[str, object]] = []
content = message.get("content", "")
if isinstance(content, str) and content.strip():
content_blocks.append({"type": "text", "text": content})
raw_tool_calls = message.get("tool_calls", [])
if isinstance(raw_tool_calls, list):
for call in raw_tool_calls:
if not isinstance(call, dict):
continue
function = call.get("function", {})
if not isinstance(function, dict):
continue
arguments = function.get("arguments", {})
if isinstance(arguments, str):
try:
arguments = json.loads(arguments)
except json.JSONDecodeError:
arguments = {}
content_blocks.append(
{
"type": "tool_use",
"id": str(call.get("id", "")),
"name": str(function.get("name", "")),
"input": arguments if isinstance(arguments, dict) else {},
}
)
if not content_blocks:
continue
anthropic_messages.append({"role": "assistant" if role == "assistant" else "user", "content": content_blocks})
return "\n\n".join(part for part in system_parts if part), anthropic_messages
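The tool-call normalization inside `chat_completion` above tolerates `arguments` arriving either as a JSON string (the OpenAI wire format) or as an already-decoded dict. That tolerance can be exercised in isolation with a minimal sketch (the payload shape below is illustrative, not captured from a real response):

```python
import json

def parse_tool_calls(message: dict) -> list[dict]:
    # Mirror of the normalization in chat_completion: decode string arguments,
    # pass dict arguments through, and wrap undecodable input as {"raw": ...}.
    calls = []
    for call in message.get("tool_calls", []) or []:
        function = call.get("function", {})
        raw = function.get("arguments", "{}")
        try:
            args = json.loads(raw) if isinstance(raw, str) else raw
        except json.JSONDecodeError:
            args = {"raw": raw}
        calls.append({"id": call.get("id", ""), "name": function.get("name", ""), "arguments": args})
    return calls

msg = {"tool_calls": [{"id": "c1", "function": {"name": "web_fetch", "arguments": "{\"url\": \"https://example.com\"}"}}]}
print(parse_tool_calls(msg))
```

Keeping undecodable arguments under a `"raw"` key instead of raising means one malformed tool call does not abort the whole completion.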

View File

@@ -1,15 +1,48 @@
+from datetime import datetime
+
 from app.models import RuntimeSettings

-def build_prompt_context(message: str, runtime: RuntimeSettings, memory: list[str]) -> dict[str, object]:
+def build_prompt_context(
+    message: str,
+    runtime: RuntimeSettings,
+    memory: list[str],
+    workspace_root: str,
+    profile_preferences: str = "",
+    second_brain_context: str = "",
+) -> dict[str, object]:
+    tool_names = [tool.name for tool in runtime.tools if tool.enabled]
+    memory_lines = "\n".join(f"- {item}" for item in memory) if memory else "- No recent memory."
+    profile_lines = profile_preferences or "- No saved profile preferences."
+    second_brain_lines = second_brain_context or "- No second-brain context retrieved for this request."
+    today = datetime.now().strftime("%Y-%m-%d")
     return {
         "system": (
             "You are WiseClaw, a local-first assistant running on macOS. "
-            "Use tools carefully and obey terminal safety mode."
+            "Keep replies concise, practical, and safe. "
+            f"Enabled tools: {', '.join(tool_names) if tool_names else 'none'}.\n"
+            f"Today's date: {today}\n"
+            f"Current workspace root: {workspace_root}\n"
+            "Relative file paths are relative to the workspace root.\n"
+            "When the user asks for current information such as today's price, exchange rate, latest news, or current status, do not invent or shift the year. Use today's date above and prefer tools for fresh data.\n"
+            "If the user asks for the working directory, use the terminal tool with `pwd`.\n"
+            "If the user names a local file such as README.md, try that relative path first with the files tool.\n"
+            "If the user asks you to create or update files, use the files tool with action `write`.\n"
+            "If the user asks you to create a note in Apple Notes, use apple_notes with action `create_note`.\n"
+            "If the user asks about their saved notes, documents, archive, workspace knowledge, or second brain, use second_brain or the injected second-brain context before answering.\n"
+            "For a static HTML/CSS/JS app, write the files first, then use the terminal tool to run a local server in the background with a command like `python3 -m http.server 9990 -d <folder>`.\n"
+            "If the user asks you to open, inspect, interact with, or extract information from a website in a real browser, use browser_use.\n"
+            "If the user asks you to inspect files, browse the web, or run terminal commands, use the matching tool instead of guessing. "
+            "If a required tool fails or is unavailable, say that clearly and do not pretend you completed the action.\n"
+            "Retrieved second-brain context for this request:\n"
+            f"{second_brain_lines}\n"
+            "Saved user profile preferences:\n"
+            f"{profile_lines}\n"
+            "Recent memory:\n"
+            f"{memory_lines}"
         ),
         "message": message,
-        "model": runtime.default_model,
+        "model": runtime.local_model if runtime.model_provider == "local" else runtime.zai_model,
         "memory": memory,
-        "available_tools": [tool.name for tool in runtime.tools if tool.enabled],
+        "available_tools": tool_names,
     }

View File

@@ -3,6 +3,7 @@ from contextlib import asynccontextmanager
 from fastapi import FastAPI
 from fastapi.middleware.cors import CORSMiddleware

+from app.automation.scheduler import AutomationScheduler
 from app.admin.routes import router as admin_router
 from app.config import get_settings
 from app.db import init_db, session_scope
@@ -21,6 +22,8 @@ async def lifespan(_: FastAPI):
     if settings.telegram_bot_token:
         runtime_services.telegram_bot = TelegramBotService(settings.telegram_bot_token, session_scope)
         await runtime_services.telegram_bot.start()
+    runtime_services.automation_scheduler = AutomationScheduler(session_scope, runtime_services.telegram_bot)
+    await runtime_services.automation_scheduler.start()
     yield
     await runtime_services.shutdown()
@@ -30,6 +33,7 @@ app = FastAPI(title="WiseClaw", version="0.1.0", lifespan=lifespan)
 app.add_middleware(
     CORSMiddleware,
     allow_origins=["http://127.0.0.1:5173", "http://localhost:5173"],
+    allow_origin_regex=r"^https?://(localhost|127\.0\.0\.1|192\.168\.\d{1,3}\.\d{1,3})(:\d+)?$",
     allow_credentials=True,
     allow_methods=["*"],
     allow_headers=["*"],
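The `allow_origin_regex` added to the CORS middleware admits localhost, loopback, and private `192.168.x.x` origins on any port, over http or https. A quick check of the pattern's behavior:

```python
import re

# Same pattern as the allow_origin_regex in the CORS middleware above.
ORIGIN_RE = r"^https?://(localhost|127\.0\.0\.1|192\.168\.\d{1,3}\.\d{1,3})(:\d+)?$"

print(bool(re.match(ORIGIN_RE, "http://192.168.1.20:5173")))  # True
print(bool(re.match(ORIGIN_RE, "https://localhost")))         # True
print(bool(re.match(ORIGIN_RE, "https://example.com")))       # False
```

The anchors (`^`, `$`) matter: without them an attacker origin like `https://example.com/?http://localhost` could slip a matching substring into the check.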

View File

@@ -6,6 +6,9 @@ from pydantic import BaseModel, Field
 TerminalMode = Literal[1, 2, 3]
 SearchProvider = Literal["brave", "searxng"]
+ModelProvider = Literal["local", "zai"]
+AutomationScheduleType = Literal["daily", "weekdays", "weekly", "hourly"]
+AutomationStatus = Literal["active", "paused"]

 class HealthStatus(BaseModel):
@@ -32,14 +35,38 @@ class UserRecord(BaseModel):
     is_active: bool = True

+class UserProfileRecord(BaseModel):
+    telegram_user_id: int
+    display_name: str | None = None
+    bio: str | None = None
+    occupation: str | None = None
+    primary_use_cases: list[str] = Field(default_factory=list)
+    answer_priorities: list[str] = Field(default_factory=list)
+    tone_preference: str | None = None
+    response_length: str | None = None
+    language_preference: str | None = None
+    workflow_preference: str | None = None
+    interests: list[str] = Field(default_factory=list)
+    approval_preferences: list[str] = Field(default_factory=list)
+    avoid_preferences: str | None = None
+    onboarding_completed: bool = False
+    last_onboarding_step: int = 0
+
+
 class RuntimeSettings(BaseModel):
     terminal_mode: TerminalMode = 3
     search_provider: SearchProvider = "brave"
-    ollama_base_url: str = "http://127.0.0.1:11434"
-    default_model: str = "qwen3.5:4b"
+    model_provider: ModelProvider = "local"
+    local_base_url: str = "http://127.0.0.1:1234"
+    local_model: str = "qwen3-vl-8b-instruct-mlx@5bit"
+    zai_model: Literal["glm-4.7", "glm-5"] = "glm-5"
+    anythingllm_base_url: str = "http://127.0.0.1:3001"
+    anythingllm_workspace_slug: str = "wiseclaw"
     tools: list[ToolToggle] = Field(
         default_factory=lambda: [
             ToolToggle(name="brave_search", enabled=True),
+            ToolToggle(name="second_brain", enabled=True),
+            ToolToggle(name="browser_use", enabled=True),
             ToolToggle(name="searxng_search", enabled=False),
             ToolToggle(name="web_fetch", enabled=True),
             ToolToggle(name="apple_notes", enabled=True),
@@ -65,6 +92,7 @@ class MemoryRecord(BaseModel):

 class OllamaStatus(BaseModel):
     reachable: bool
+    provider: ModelProvider = "local"
     base_url: str
     model: str
     installed_models: list[str] = Field(default_factory=list)
@@ -75,3 +103,20 @@ class TelegramStatus(BaseModel):
     configured: bool
     polling_active: bool
     message: str
+
+
+class AutomationRecord(BaseModel):
+    id: int
+    telegram_user_id: int
+    name: str
+    prompt: str
+    schedule_type: AutomationScheduleType
+    interval_hours: int | None = None
+    time_of_day: str | None = None
+    days_of_week: list[str] = Field(default_factory=list)
+    status: AutomationStatus = "active"
+    last_run_at: datetime | None = None
+    next_run_at: datetime | None = None
+    last_result: str | None = None
+    created_at: datetime
+    updated_at: datetime
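`days_of_week` is stored as a JSON string in `AutomationORM` and surfaced as `list[str]` in `AutomationRecord`; `_decode_days` in the automation service bridges the two by mapping labels to `weekday()` indices. A minimal sketch of that round trip, with a hypothetical Monday-first `WEEKDAY_NAMES` (the real module defines its own labels):

```python
import json

# Hypothetical stand-in for the module-level WEEKDAY_NAMES (Monday-first order).
WEEKDAY_NAMES = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]

def decode_days(value: str) -> list[int]:
    # Mirror of _decode_days: JSON list of labels -> datetime.weekday() indices;
    # malformed JSON or unknown labels degrade to an empty selection.
    try:
        payload = json.loads(value)
    except json.JSONDecodeError:
        return []
    if not isinstance(payload, list):
        return []
    return [WEEKDAY_NAMES.index(str(d)) for d in payload if str(d) in WEEKDAY_NAMES]

print(decode_days('["mon", "fri"]'))  # [0, 4]
print(decode_days("not json"))        # []
```

Falling back to `[]` (which the scheduler then treats as Monday via `days = [0]`) keeps a corrupted row from crashing the run loop.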

File diff suppressed because it is too large

View File

@@ -0,0 +1,276 @@
import json
from datetime import datetime
from sqlalchemy.orm import Session
from app.db import AuditLogORM, TelegramUserProfileORM
from app.models import UserProfileRecord
SKIP_TOKENS = {"pas", "gec", "geç", "skip", "-"}
ONBOARDING_QUESTIONS: list[dict[str, str]] = [
{"field": "display_name", "prompt": "1/12 Sana nasıl hitap etmeliyim?"},
{"field": "bio", "prompt": "2/12 Kısaca kendini nasıl tanıtırsın?"},
{"field": "occupation", "prompt": "3/12 En çok hangi işle uğraşıyorsun?"},
{"field": "primary_use_cases", "prompt": "4/12 WiseClaw'ı en çok hangi işler için kullanacaksın? Virgülle ayırabilirsin."},
{"field": "answer_priorities", "prompt": "5/12 Cevaplarımda en çok neye önem veriyorsun? Örnek: hız, detay, yaratıcılık, teknik doğruluk."},
{"field": "tone_preference", "prompt": "6/12 Nasıl bir tonda konuşayım?"},
{"field": "response_length", "prompt": "7/12 Cevaplar kısa mı, orta mı, detaylı mı olsun?"},
{"field": "language_preference", "prompt": "8/12 Hangi dilde konuşalım?"},
{"field": "workflow_preference", "prompt": "9/12 İşlerde önce plan mı istersin, yoksa direkt aksiyon mu?"},
{"field": "interests", "prompt": "10/12 Özellikle ilgilendiğin konular veya hobilerin neler? Virgülle ayırabilirsin."},
{"field": "approval_preferences", "prompt": "11/12 Onay almadan yapmamamı istediğin şeyler neler? Virgülle ayırabilirsin."},
{"field": "avoid_preferences", "prompt": "12/12 Özellikle kaçınmamı istediğin bir üslup veya davranış var mı?"},
]
class UserProfileService:
def __init__(self, session: Session) -> None:
self.session = session
def get_profile(self, telegram_user_id: int) -> UserProfileRecord | None:
record = self.session.get(TelegramUserProfileORM, telegram_user_id)
if record is None:
return None
return self._to_record(record)
def start_onboarding(self, telegram_user_id: int) -> str:
record = self._get_or_create_profile(telegram_user_id)
record.onboarding_completed = False
record.last_onboarding_step = 0
record.updated_at = datetime.utcnow()
self.session.add(
AuditLogORM(category="profile", message=f"profile:onboarding-started:{telegram_user_id}")
)
self.session.flush()
intro = (
"Ben WiseClaw. Seni daha iyi tanimak ve cevaplarimi sana gore ayarlamak icin 12 kisa soru soracagim.\n"
"Istersen herhangi bir soruya `pas` diyerek gecebilirsin.\n\n"
)
return intro + ONBOARDING_QUESTIONS[0]["prompt"]
def reset_onboarding(self, telegram_user_id: int) -> str:
record = self._get_or_create_profile(telegram_user_id)
record.display_name = None
record.bio = None
record.occupation = None
record.primary_use_cases = "[]"
record.answer_priorities = "[]"
record.tone_preference = None
record.response_length = None
record.language_preference = None
record.workflow_preference = None
record.interests = "[]"
record.approval_preferences = "[]"
record.avoid_preferences = None
record.onboarding_completed = False
record.last_onboarding_step = 0
record.updated_at = datetime.utcnow()
self.session.add(
AuditLogORM(category="profile", message=f"profile:onboarding-reset:{telegram_user_id}")
)
self.session.flush()
return "Profil sifirlandi. /tanisalim yazarak tekrar baslayabiliriz."
def is_onboarding_active(self, telegram_user_id: int) -> bool:
record = self.session.get(TelegramUserProfileORM, telegram_user_id)
if record is None:
return False
return not record.onboarding_completed and record.last_onboarding_step < len(ONBOARDING_QUESTIONS)
def answer_onboarding(self, telegram_user_id: int, text: str) -> tuple[str, bool]:
record = self._get_or_create_profile(telegram_user_id)
step = min(record.last_onboarding_step, len(ONBOARDING_QUESTIONS) - 1)
question = ONBOARDING_QUESTIONS[step]
self._apply_answer(record, question["field"], text)
record.last_onboarding_step = step + 1
record.updated_at = datetime.utcnow()
if record.last_onboarding_step >= len(ONBOARDING_QUESTIONS):
record.onboarding_completed = True
self.session.add(
AuditLogORM(category="profile", message=f"profile:onboarding-completed:{telegram_user_id}")
)
self.session.flush()
return self.render_completion_message(record), True
self.session.add(
AuditLogORM(
category="profile",
message=f"profile:onboarding-step:{telegram_user_id}:{record.last_onboarding_step}",
)
)
self.session.flush()
return ONBOARDING_QUESTIONS[record.last_onboarding_step]["prompt"], False
def render_profile_summary(self, telegram_user_id: int) -> str:
record = self.session.get(TelegramUserProfileORM, telegram_user_id)
if record is None:
return "Henuz bir profilin yok. /tanisalim yazarak baslayabiliriz."
profile = self._to_record(record)
lines = [
"Profil ozetin:",
f"- Hitap: {profile.display_name or 'belirtilmedi'}",
f"- Kisa tanitim: {profile.bio or 'belirtilmedi'}",
f"- Ugras alani: {profile.occupation or 'belirtilmedi'}",
f"- Kullanim amaci: {', '.join(profile.primary_use_cases) if profile.primary_use_cases else 'belirtilmedi'}",
f"- Oncelikler: {', '.join(profile.answer_priorities) if profile.answer_priorities else 'belirtilmedi'}",
f"- Ton: {profile.tone_preference or 'belirtilmedi'}",
f"- Cevap uzunlugu: {profile.response_length or 'belirtilmedi'}",
f"- Dil: {profile.language_preference or 'belirtilmedi'}",
f"- Calisma bicimi: {profile.workflow_preference or 'belirtilmedi'}",
f"- Ilgi alanlari: {', '.join(profile.interests) if profile.interests else 'belirtilmedi'}",
f"- Onay beklentileri: {', '.join(profile.approval_preferences) if profile.approval_preferences else 'belirtilmedi'}",
f"- Kacinmami istedigin seyler: {profile.avoid_preferences or 'belirtilmedi'}",
]
if not profile.onboarding_completed:
lines.append(
f"- Durum: onboarding devam ediyor, sira {profile.last_onboarding_step + 1}/{len(ONBOARDING_QUESTIONS)}"
)
return "\n".join(lines)
def render_preferences_summary(self, telegram_user_id: int) -> str:
record = self.session.get(TelegramUserProfileORM, telegram_user_id)
if record is None:
return "Henuz tercihlerin kayitli degil. /tanisalim ile baslayabiliriz."
profile = self._to_record(record)
return "\n".join(
[
"Tercihlerin:",
f"- Ton: {profile.tone_preference or 'belirtilmedi'}",
f"- Cevap uzunlugu: {profile.response_length or 'belirtilmedi'}",
f"- Dil: {profile.language_preference or 'belirtilmedi'}",
f"- Calisma bicimi: {profile.workflow_preference or 'belirtilmedi'}",
f"- Oncelikler: {', '.join(profile.answer_priorities) if profile.answer_priorities else 'belirtilmedi'}",
f"- Onay beklentileri: {', '.join(profile.approval_preferences) if profile.approval_preferences else 'belirtilmedi'}",
f"- Kacinmami istedigin seyler: {profile.avoid_preferences or 'belirtilmedi'}",
]
)
def build_prompt_profile(self, telegram_user_id: int) -> str:
record = self.session.get(TelegramUserProfileORM, telegram_user_id)
if record is None:
return ""
profile = self._to_record(record)
instructions: list[str] = []
if profile.display_name:
instructions.append(f"Kullaniciya `{profile.display_name}` diye hitap edebilirsin.")
if profile.language_preference:
instructions.append(f"Varsayilan dili `{profile.language_preference}` olarak kullan.")
if profile.tone_preference:
instructions.append(f"Cevap tonunu su tercihe uydur: {profile.tone_preference}.")
if profile.response_length:
instructions.append(f"Varsayilan cevap uzunlugu tercihi: {profile.response_length}.")
if profile.workflow_preference:
instructions.append(f"Is yapis tarzinda su tercihe uy: {profile.workflow_preference}.")
if profile.answer_priorities:
instructions.append(
"Kullanici su niteliklere oncelik veriyor: " + ", ".join(profile.answer_priorities) + "."
)
if profile.primary_use_cases:
instructions.append(
"WiseClaw'i en cok su isler icin kullaniyor: " + ", ".join(profile.primary_use_cases) + "."
)
if profile.interests:
instructions.append(
"Gerekirse ornekleri su ilgi alanlarina yaklastir: " + ", ".join(profile.interests) + "."
)
if profile.approval_preferences:
instructions.append(
"Su konularda once onay bekle: " + ", ".join(profile.approval_preferences) + "."
)
if profile.avoid_preferences:
instructions.append(f"Su uslup veya davranislardan kacin: {profile.avoid_preferences}.")
return "\n".join(f"- {item}" for item in instructions)
def profile_memory_summary(self, telegram_user_id: int) -> str:
record = self.session.get(TelegramUserProfileORM, telegram_user_id)
if record is None:
return ""
profile = self._to_record(record)
parts = []
if profile.display_name:
parts.append(f"hitap={profile.display_name}")
if profile.language_preference:
parts.append(f"dil={profile.language_preference}")
if profile.tone_preference:
parts.append(f"ton={profile.tone_preference}")
if profile.response_length:
parts.append(f"uzunluk={profile.response_length}")
if profile.workflow_preference:
parts.append(f"calisma={profile.workflow_preference}")
if profile.primary_use_cases:
parts.append("amac=" + ",".join(profile.primary_use_cases[:3]))
return "profile_summary:" + "; ".join(parts)
def _get_or_create_profile(self, telegram_user_id: int) -> TelegramUserProfileORM:
record = self.session.get(TelegramUserProfileORM, telegram_user_id)
if record is None:
record = TelegramUserProfileORM(
telegram_user_id=telegram_user_id,
primary_use_cases="[]",
answer_priorities="[]",
interests="[]",
approval_preferences="[]",
onboarding_completed=False,
last_onboarding_step=0,
created_at=datetime.utcnow(),
updated_at=datetime.utcnow(),
)
self.session.add(record)
self.session.flush()
return record
def _apply_answer(self, record: TelegramUserProfileORM, field: str, answer: str) -> None:
cleaned = answer.strip()
if cleaned.lower() in SKIP_TOKENS:
return
if field in {"primary_use_cases", "answer_priorities", "interests", "approval_preferences"}:
setattr(record, field, json.dumps(self._split_list(cleaned), ensure_ascii=False))
return
setattr(record, field, cleaned)
def _split_list(self, value: str) -> list[str]:
parts = [item.strip() for item in value.replace("\n", ",").split(",")]
return [item for item in parts if item]
def _decode_list(self, value: str) -> list[str]:
try:
payload = json.loads(value)
except json.JSONDecodeError:
return []
if not isinstance(payload, list):
return []
return [str(item).strip() for item in payload if str(item).strip()]
def _to_record(self, record: TelegramUserProfileORM) -> UserProfileRecord:
return UserProfileRecord(
telegram_user_id=record.telegram_user_id,
display_name=record.display_name,
bio=record.bio,
occupation=record.occupation,
primary_use_cases=self._decode_list(record.primary_use_cases),
answer_priorities=self._decode_list(record.answer_priorities),
tone_preference=record.tone_preference,
response_length=record.response_length,
language_preference=record.language_preference,
workflow_preference=record.workflow_preference,
interests=self._decode_list(record.interests),
approval_preferences=self._decode_list(record.approval_preferences),
avoid_preferences=record.avoid_preferences,
onboarding_completed=record.onboarding_completed,
last_onboarding_step=record.last_onboarding_step,
)
def render_completion_message(self, record: TelegramUserProfileORM) -> str:
profile = self._to_record(record)
summary = [
"Seni tanidim ve tercihlerini kaydettim.",
f"- Hitap: {profile.display_name or 'belirtilmedi'}",
f"- Ton: {profile.tone_preference or 'belirtilmedi'}",
f"- Dil: {profile.language_preference or 'belirtilmedi'}",
f"- Cevap uzunlugu: {profile.response_length or 'belirtilmedi'}",
f"- Calisma bicimi: {profile.workflow_preference or 'belirtilmedi'}",
]
return "\n".join(summary)
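As a quick standalone check of the list-column round trip above (`_split_list` writes the JSON column, `_decode_list` reads it back), here is a minimal sketch; the module-level function names are local stand-ins for the private methods:

```python
import json

def split_list(value: str) -> list[str]:
    # Mirrors _split_list: newlines are normalized to commas, blanks dropped.
    parts = [item.strip() for item in value.replace("\n", ",").split(",")]
    return [item for item in parts if item]

def decode_list(value: str) -> list[str]:
    # Mirrors _decode_list: malformed or non-list JSON decodes to [].
    try:
        payload = json.loads(value)
    except json.JSONDecodeError:
        return []
    if not isinstance(payload, list):
        return []
    return [str(item).strip() for item in payload if str(item).strip()]

# Round trip: free-form Telegram answer -> JSON column -> decoded list.
encoded = json.dumps(split_list("kodlama, arastirma\nnot alma"), ensure_ascii=False)
print(decode_list(encoded))  # ['kodlama', 'arastirma', 'not alma']
```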

View File

@@ -1,14 +1,18 @@
from contextlib import suppress
from app.automation.scheduler import AutomationScheduler
from app.telegram.bot import TelegramBotService
class RuntimeServices:
    def __init__(self) -> None:
        self.telegram_bot: TelegramBotService | None = None
        self.automation_scheduler: AutomationScheduler | None = None
    async def shutdown(self) -> None:
        if self.automation_scheduler is not None:
            with suppress(Exception):
                await self.automation_scheduler.stop()
        if self.telegram_bot is not None:
            with suppress(Exception):
                await self.telegram_bot.stop()

View File

@@ -0,0 +1,218 @@
from __future__ import annotations
import json
from datetime import datetime
from pathlib import Path
import httpx
from sqlalchemy import select
from sqlalchemy.orm import Session
from app.config import get_settings
from app.db import AuditLogORM, SecondBrainCaptureORM, SecondBrainNoteORM, SecretORM, SettingORM
class SecondBrainService:
FILENAME = "second_brain.md"
def __init__(self, session: Session) -> None:
self.session = session
def start_capture(self, telegram_user_id: int) -> str:
record = self.session.get(SecondBrainCaptureORM, telegram_user_id)
if record is None:
record = SecondBrainCaptureORM(
telegram_user_id=telegram_user_id,
created_at=datetime.utcnow(),
updated_at=datetime.utcnow(),
)
self.session.add(record)
else:
record.updated_at = datetime.utcnow()
self.session.add(
AuditLogORM(category="second_brain", message=f"second_brain:capture-start:{telegram_user_id}")
)
self.session.flush()
return "Second brain notunu gonder. Iptal etmek istersen /iptal yazabilirsin."
def is_capture_active(self, telegram_user_id: int) -> bool:
return self.session.get(SecondBrainCaptureORM, telegram_user_id) is not None
def cancel_capture(self, telegram_user_id: int) -> str:
record = self.session.get(SecondBrainCaptureORM, telegram_user_id)
if record is not None:
self.session.delete(record)
self.session.add(
AuditLogORM(category="second_brain", message=f"second_brain:capture-cancel:{telegram_user_id}")
)
self.session.flush()
return "Second brain not ekleme akisini durdurdum."
async def save_note_and_sync(self, telegram_user_id: int, text: str, workspace_root: Path) -> str:
content = text.strip()
if not content:
return "Bos bir not kaydedemem. Lutfen not metnini gonder."
capture = self.session.get(SecondBrainCaptureORM, telegram_user_id)
if capture is not None:
self.session.delete(capture)
note = SecondBrainNoteORM(
telegram_user_id=telegram_user_id,
content=content,
source="telegram",
created_at=datetime.utcnow(),
updated_at=datetime.utcnow(),
)
self.session.add(note)
self.session.flush()
markdown_path = self._write_markdown(workspace_root)
sync_result = await self._sync_markdown(markdown_path)
self.session.add(
AuditLogORM(
category="second_brain",
message=f"second_brain:note-saved:{telegram_user_id}:{note.id}",
)
)
self.session.add(
AuditLogORM(
category="second_brain",
message=f"second_brain:sync:{json.dumps(sync_result, ensure_ascii=False)}",
)
)
self.session.flush()
if sync_result["status"] != "ok":
message = str(sync_result.get("message", "Second brain sync failed."))
return f"Notu kaydettim ama AnythingLLM senkronu basarisiz oldu: {message}"
return "Notunu kaydettim ve ikinci beynine senkronladim."
def _write_markdown(self, workspace_root: Path) -> Path:
notes = list(
self.session.scalars(
select(SecondBrainNoteORM).order_by(SecondBrainNoteORM.created_at.asc(), SecondBrainNoteORM.id.asc())
)
)
lines = [
"# Second Brain",
"",
"WiseClaw tarafindan Telegram notlarindan uretilen senkron belge.",
"",
]
for note in notes:
timestamp = note.created_at.strftime("%Y-%m-%d %H:%M:%S")
lines.extend(
[
f"## Note {note.id} - {timestamp}",
f"- Source: {note.source}",
f"- Telegram User: {note.telegram_user_id}",
"",
note.content,
"",
]
)
markdown_path = workspace_root / "backend" / self.FILENAME
markdown_path.write_text("\n".join(lines).strip() + "\n", encoding="utf-8")
return markdown_path
async def _sync_markdown(self, markdown_path: Path) -> dict[str, object]:
settings = get_settings()
runtime_settings = {
item.key: item.value for item in self.session.scalars(select(SettingORM))
}
base_url = runtime_settings.get("anythingllm_base_url", settings.anythingllm_base_url).rstrip("/")
workspace_slug = runtime_settings.get("anythingllm_workspace_slug", settings.anythingllm_workspace_slug).strip()
secret = self.session.get(SecretORM, "anythingllm_api_key")
api_key = secret.value if secret else settings.anythingllm_api_key
if not base_url:
return {"status": "error", "message": "AnythingLLM base URL is not configured."}
if not workspace_slug:
return {"status": "error", "message": "AnythingLLM workspace slug is not configured."}
if not api_key:
return {"status": "error", "message": "AnythingLLM API key is not configured."}
headers = {"Authorization": f"Bearer {api_key}"}
try:
async with httpx.AsyncClient(timeout=30.0) as client:
workspace_response = await client.get(
f"{base_url}/api/v1/workspace/{workspace_slug}",
headers=headers,
)
workspace_response.raise_for_status()
workspace_payload = workspace_response.json()
deletes = self._find_existing_second_brain_docs(workspace_payload)
if deletes:
delete_response = await client.post(
f"{base_url}/api/v1/workspace/{workspace_slug}/update-embeddings",
headers={**headers, "Content-Type": "application/json"},
json={"deletes": deletes},
)
delete_response.raise_for_status()
with markdown_path.open("rb") as file_handle:
upload_response = await client.post(
f"{base_url}/api/v1/document/upload",
headers=headers,
files={"file": (markdown_path.name, file_handle, "text/markdown")},
)
upload_response.raise_for_status()
upload_payload = upload_response.json()
uploaded_location = self._extract_uploaded_location(upload_payload)
if not uploaded_location:
return {"status": "error", "message": "AnythingLLM upload did not return a document location."}
attach_response = await client.post(
f"{base_url}/api/v1/workspace/{workspace_slug}/update-embeddings",
headers={**headers, "Content-Type": "application/json"},
json={"adds": [uploaded_location]},
)
attach_response.raise_for_status()
except httpx.HTTPError as exc:
return {"status": "error", "message": str(exc)}
return {"status": "ok", "location": uploaded_location, "deleted": deletes}
def _find_existing_second_brain_docs(self, workspace_payload: dict[str, object]) -> list[str]:
documents = []
workspace_items = workspace_payload.get("workspace", [])
if isinstance(workspace_items, list) and workspace_items:
first = workspace_items[0]
if isinstance(first, dict):
documents = first.get("documents", [])
if not isinstance(documents, list):
return []
paths: list[str] = []
for item in documents:
if not isinstance(item, dict):
continue
filename = str(item.get("filename", "")).strip()
docpath = str(item.get("docpath", "")).strip()
metadata_raw = item.get("metadata")
metadata_title = ""
if isinstance(metadata_raw, str):
try:
metadata = json.loads(metadata_raw)
if isinstance(metadata, dict):
metadata_title = str(metadata.get("title", "")).strip()
except json.JSONDecodeError:
metadata_title = ""
if (
filename.startswith(f"{Path(self.FILENAME).stem}.md-")
or filename.startswith(self.FILENAME)
or metadata_title == self.FILENAME
) and docpath:
paths.append(docpath)
return paths
def _extract_uploaded_location(self, payload: dict[str, object]) -> str:
documents = payload.get("documents", [])
if not isinstance(documents, list) or not documents:
return ""
first = documents[0]
if not isinstance(first, dict):
return ""
return str(first.get("location", "")).strip()
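The upload-response parsing above is intentionally defensive, since the AnythingLLM payload shape is only assumed from observed responses. A standalone sketch of the same extraction logic:

```python
def extract_uploaded_location(payload: dict) -> str:
    # Mirrors _extract_uploaded_location: pull the first document's
    # "location" field, tolerating missing or oddly typed payloads.
    documents = payload.get("documents", [])
    if not isinstance(documents, list) or not documents:
        return ""
    first = documents[0]
    if not isinstance(first, dict):
        return ""
    return str(first.get("location", "")).strip()

# A successful upload response carries the docpath to re-embed.
print(extract_uploaded_location({"documents": [{"location": "custom-documents/doc.json"}]}))
```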

View File

@@ -19,6 +19,9 @@ SAFE_COMMAND_PREFIXES = (
    "whoami",
    "uname",
    "ps",
"python3 -m http.server",
"python -m http.server",
"npm run build",
)
APPROVAL_REQUIRED_PREFIXES = (
@@ -74,4 +77,3 @@ def evaluate_terminal_command(command: str, mode: int) -> TerminalDecision:
        return TerminalDecision(decision="approval", reason="Command needs approval.")
    return TerminalDecision(decision="approval", reason="Unknown command defaults to approval.")

View File

@@ -0,0 +1,80 @@
def get_game_template_hint(request_text: str) -> str:
lowered = request_text.lower()
if "three.js" in lowered or "threejs" in lowered or "webgl" in lowered or "3d" in lowered:
return THREE_JS_TEMPLATE_HINT
if "phaser" in lowered:
return PHASER_TEMPLATE_HINT
if "canvas" in lowered or "snake" in lowered or "pong" in lowered or "tetris" in lowered:
return CANVAS_TEMPLATE_HINT
return ""
CANVAS_TEMPLATE_HINT = """
Starter template guidance for a plain canvas game:
index.html
- Create a centered app shell with:
- a header area for title and score
- a main game canvas
- a mobile controls section with large directional/action buttons
- a restart button
style.css
- Use a responsive layout that stacks nicely on mobile.
- Keep the canvas visible without horizontal scrolling.
- Add `touch-action: none;` for interactive game controls.
- Use clear visual contrast and large tap targets.
script.js
- Create explicit game state variables.
- Create a `resizeGame()` function if canvas sizing matters.
- Create a `startGame()` / `resetGame()` flow.
- Create a `gameLoop()` driven by `requestAnimationFrame` or a timed tick.
- Add keyboard listeners and touch/click listeners.
- Keep gameplay fully self-contained without external assets.
"""
THREE_JS_TEMPLATE_HINT = """
Starter template guidance for a Three.js browser game:
index.html
- Include a UI overlay for score, status, and restart.
- Load Three.js with a browser-safe CDN module import in script.js.
style.css
- Full-viewport scene layout.
- Overlay HUD pinned above the renderer.
- Mobile-safe action buttons if touch input is needed.
script.js
- Set up:
- scene
- perspective camera
- renderer sized to the viewport
- ambient + directional light
- resize handler
- animation loop
- Keep geometry lightweight for mobile.
- Use simple primitives and colors instead of relying on asset pipelines.
- Implement gameplay logic on top of the render loop, not just a visual demo.
"""
PHASER_TEMPLATE_HINT = """
Starter template guidance for a Phaser game:
index.html
- Include a HUD area for score and status.
- Load Phaser from a browser-ready CDN.
style.css
- Center the game canvas and ensure it scales on mobile.
- Add large touch-friendly controls when needed.
script.js
- Use a Phaser config with `type`, `width`, `height`, `parent`, `backgroundColor`, and scaling rules.
- Create at least one scene with `preload`, `create`, and `update`.
- Use primitive graphics or generated shapes if no external assets are required.
- Add restart behavior and visible score/status updates outside or inside the Phaser scene.
"""
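The selector above checks keywords in a fixed order, so a request mentioning both "3D" and "snake" gets the Three.js hint. A standalone re-creation of that precedence; the hint constants here are short stand-ins, not the full guidance strings from the file:

```python
CANVAS_TEMPLATE_HINT = "canvas-hint"
THREE_JS_TEMPLATE_HINT = "three-js-hint"
PHASER_TEMPLATE_HINT = "phaser-hint"

def get_game_template_hint(request_text: str) -> str:
    lowered = request_text.lower()
    # Order matters: 3D/WebGL keywords win over genre keywords like "snake".
    if "three.js" in lowered or "threejs" in lowered or "webgl" in lowered or "3d" in lowered:
        return THREE_JS_TEMPLATE_HINT
    if "phaser" in lowered:
        return PHASER_TEMPLATE_HINT
    if "canvas" in lowered or "snake" in lowered or "pong" in lowered or "tetris" in lowered:
        return CANVAS_TEMPLATE_HINT
    return ""

print(get_game_template_hint("Make me a 3D snake game"))  # three-js-hint
```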

View File

@@ -1,12 +1,18 @@
import asyncio
import json
from contextlib import suppress
from typing import Any
from telegram import BotCommand, InputMediaPhoto, Update
from telegram.constants import ChatAction
from telegram.ext import Application, CommandHandler, ContextTypes, MessageHandler, filters
from app.orchestrator import WiseClawOrchestrator
class TelegramBotService:
    MAX_MESSAGE_LEN = 3500
    def __init__(self, token: str, orchestrator_factory: Any) -> None:
        self.token = token
        self.orchestrator_factory = orchestrator_factory
@@ -15,15 +21,73 @@ class TelegramBotService:
    async def process_message(self, telegram_user_id: int, text: str) -> str:
        with self.orchestrator_factory() as session:
            orchestrator = WiseClawOrchestrator(session)
            return await orchestrator.handle_text_message(telegram_user_id=telegram_user_id, text=text)
async def process_message_payload(self, telegram_user_id: int, text: str) -> dict[str, object]:
with self.orchestrator_factory() as session:
orchestrator = WiseClawOrchestrator(session)
payload = await orchestrator.handle_message_payload(telegram_user_id=telegram_user_id, text=text)
text_value = str(payload.get("text", ""))
if text_value.startswith("__WC_MEDIA__"):
try:
decoded = json.loads(text_value[len("__WC_MEDIA__") :])
except json.JSONDecodeError:
return {"text": text_value, "media": []}
return {
"text": str(decoded.get("text", "")),
"media": decoded.get("media", []) if isinstance(decoded.get("media"), list) else [],
}
return payload
async def send_message(self, chat_id: int, text: str) -> None:
if self.application is None:
return
for chunk in self._chunk_message(text):
await self.application.bot.send_message(chat_id=chat_id, text=chunk)
async def send_media(self, chat_id: int, media: list[dict[str, str]]) -> None:
if self.application is None:
return
clean_media = [item for item in media[:3] if item.get("url")]
if not clean_media:
return
if len(clean_media) == 1:
item = clean_media[0]
try:
await self.application.bot.send_photo(chat_id=chat_id, photo=item["url"], caption=item.get("caption", "")[:1024])
except Exception:
return
return
media_group = []
for item in clean_media:
media_group.append(InputMediaPhoto(media=item["url"], caption=item.get("caption", "")[:1024]))
try:
await self.application.bot.send_media_group(chat_id=chat_id, media=media_group)
except Exception:
for item in clean_media:
try:
await self.application.bot.send_photo(chat_id=chat_id, photo=item["url"], caption=item.get("caption", "")[:1024])
except Exception:
continue
    async def start(self) -> None:
        if not self.token:
            return
        self.application = Application.builder().token(self.token).build()
        self.application.add_handler(CommandHandler("start", self._on_start))
self.application.add_handler(CommandHandler("tanisalim", self._on_command_passthrough))
self.application.add_handler(CommandHandler("profilim", self._on_command_passthrough))
self.application.add_handler(CommandHandler("tercihlerim", self._on_command_passthrough))
self.application.add_handler(CommandHandler("tanisalim_sifirla", self._on_command_passthrough))
self.application.add_handler(CommandHandler("otomasyon_ekle", self._on_command_passthrough))
self.application.add_handler(CommandHandler("otomasyonlar", self._on_command_passthrough))
self.application.add_handler(CommandHandler("otomasyon_durdur", self._on_command_passthrough))
self.application.add_handler(CommandHandler("otomasyon_baslat", self._on_command_passthrough))
self.application.add_handler(CommandHandler("otomasyon_sil", self._on_command_passthrough))
self.application.add_handler(CommandHandler("notlarima_ekle", self._on_command_passthrough))
        self.application.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, self._on_text))
        await self.application.initialize()
await self.application.bot.set_my_commands(self._telegram_commands())
        await self.application.start()
        await self.application.updater.start_polling(drop_pending_updates=True)
@@ -44,8 +108,72 @@ class TelegramBotService:
        )
    async def _on_text(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
if update.message is None or update.effective_user is None or update.message.text is None:
return
typing_task = asyncio.create_task(self._send_typing(update.effective_chat.id, context))
try:
reply = await self.process_message_payload(update.effective_user.id, update.message.text)
finally:
typing_task.cancel()
with suppress(asyncio.CancelledError):
await typing_task
media = reply.get("media", []) if isinstance(reply, dict) else []
if isinstance(media, list) and media:
await self.send_media(
update.effective_chat.id,
[item for item in media if isinstance(item, dict)],
)
text_reply = str(reply.get("text", "")) if isinstance(reply, dict) else str(reply)
for chunk in self._chunk_message(text_reply):
await update.message.reply_text(chunk)
async def _on_command_passthrough(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
        del context
        if update.message is None or update.effective_user is None or update.message.text is None:
            return
        reply = await self.process_message_payload(update.effective_user.id, update.message.text)
        media = reply.get("media", []) if isinstance(reply, dict) else []
if isinstance(media, list) and media:
await self.send_media(
update.effective_chat.id,
[item for item in media if isinstance(item, dict)],
)
text_reply = str(reply.get("text", "")) if isinstance(reply, dict) else str(reply)
for chunk in self._chunk_message(text_reply):
await update.message.reply_text(chunk)
async def _send_typing(self, chat_id: int, context: ContextTypes.DEFAULT_TYPE) -> None:
while True:
await context.bot.send_chat_action(chat_id=chat_id, action=ChatAction.TYPING)
await asyncio.sleep(4)
def _chunk_message(self, text: str) -> list[str]:
if len(text) <= self.MAX_MESSAGE_LEN:
return [text]
chunks: list[str] = []
remaining = text
while len(remaining) > self.MAX_MESSAGE_LEN:
split_at = remaining.rfind("\n", 0, self.MAX_MESSAGE_LEN)
if split_at <= 0:
split_at = self.MAX_MESSAGE_LEN
chunks.append(remaining[:split_at].strip())
remaining = remaining[split_at:].strip()
if remaining:
chunks.append(remaining)
return chunks
def _telegram_commands(self) -> list[BotCommand]:
return [
BotCommand("start", "WiseClaw'i baslat (wc)"),
BotCommand("tanisalim", "12 soruluk tanisma akisini baslat (wc)"),
BotCommand("profilim", "Kayitli profil ozetimi goster (wc)"),
BotCommand("tercihlerim", "Kayitli iletisim tercihlerini goster (wc)"),
BotCommand("tanisalim_sifirla", "Tanisma profilini sifirla (wc)"),
BotCommand("otomasyon_ekle", "Yeni otomasyon wizard'ini baslat (wc)"),
BotCommand("otomasyonlar", "Otomasyon listesini goster (wc)"),
BotCommand("otomasyon_durdur", "Bir otomasyonu durdur: /otomasyon_durdur <id> (wc)"),
BotCommand("otomasyon_baslat", "Bir otomasyonu yeniden baslat: /otomasyon_baslat <id> (wc)"),
BotCommand("otomasyon_sil", "Bir otomasyonu sil: /otomasyon_sil <id> (wc)"),
BotCommand("notlarima_ekle", "Ikinci beyne yeni not ekle (wc)"),
]
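The chunking logic above keeps replies under Telegram's message-length limit while preferring newline boundaries. A standalone sketch of the same algorithm (module-level, with the limit as a parameter for illustration):

```python
def chunk_message(text: str, limit: int = 3500) -> list[str]:
    # Mirrors _chunk_message: split long replies at the last newline
    # before the limit, falling back to a hard cut when there is none.
    if len(text) <= limit:
        return [text]
    chunks: list[str] = []
    remaining = text
    while len(remaining) > limit:
        split_at = remaining.rfind("\n", 0, limit)
        if split_at <= 0:
            split_at = limit
        chunks.append(remaining[:split_at].strip())
        remaining = remaining[split_at:].strip()
    if remaining:
        chunks.append(remaining)
    return chunks

print(chunk_message("line1\nline2\nline3", limit=12))  # ['line1\nline2', 'line3']
```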

View File

@@ -1,18 +1,150 @@
import asyncio
from typing import Any from typing import Any
from app.tools.base import Tool from app.tools.base import Tool
def _escape_applescript(value: str) -> str:
return value.replace("\\", "\\\\").replace('"', '\\"')
def _body_to_notes_html(title: str, body: str) -> str:
if not body:
return title
html_body = body.replace("\n", "<br>")
return f"{title}<br><br>{html_body}"
class AppleNotesTool(Tool):
    name = "apple_notes"
    description = "Create notes in Apple Notes through AppleScript."
    def parameters_schema(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "action": {
                    "type": "string",
"enum": ["create_note"],
"description": "The Apple Notes action to perform.",
},
"title": {
"type": "string",
"description": "Title for the new note.",
},
"body": {
"type": "string",
"description": "Optional body content for the note.",
},
"folder": {
"type": "string",
"description": "Optional Notes folder name. Defaults to Notes.",
},
},
"required": ["action", "title"],
"additionalProperties": False,
        }
async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
action = str(payload.get("action", "create_note")).strip()
title = str(payload.get("title", "")).strip()
body = str(payload.get("body", "")).strip()
folder = str(payload.get("folder", "Notes")).strip() or "Notes"
if action != "create_note":
return {
"tool": self.name,
"status": "error",
"message": f"Unsupported action: {action}",
}
if not title:
return {
"tool": self.name,
"status": "error",
"message": "title is required.",
}
note_html = _body_to_notes_html(title, body)
script = f'''
tell application "Notes"
activate
if not (exists folder "{_escape_applescript(folder)}") then
make new folder with properties {{name:"{_escape_applescript(folder)}"}}
end if
set targetFolder to folder "{_escape_applescript(folder)}"
set newNote to make new note at targetFolder with properties {{body:"{_escape_applescript(note_html)}"}}
return id of newNote
end tell
'''.strip()
created = await self._run_osascript(script)
if created["status"] != "ok":
return {
"tool": self.name,
"status": "error",
"action": action,
"title": title,
"folder": folder,
"message": created["message"],
}
note_id = created["stdout"]
verify_script = f'''
tell application "Notes"
set matchedNotes to every note of folder "{_escape_applescript(folder)}" whose id is "{_escape_applescript(note_id)}"
if (count of matchedNotes) is 0 then
return "NOT_FOUND"
end if
set matchedNote to item 1 of matchedNotes
return name of matchedNote
end tell
'''.strip()
verified = await self._run_osascript(verify_script)
if verified["status"] != "ok":
return {
"tool": self.name,
"status": "error",
"action": action,
"title": title,
"folder": folder,
"note_id": note_id,
"message": f'Note was created but could not be verified: {verified["message"]}',
}
verified_title = verified["stdout"]
if verified_title == "NOT_FOUND":
return {
"tool": self.name,
"status": "error",
"action": action,
"title": title,
"folder": folder,
"note_id": note_id,
"message": "Note was created but could not be found during verification.",
}
return {
"tool": self.name,
"status": "ok",
"action": action,
"title": title,
"body": body,
"folder": folder,
"note_id": note_id,
"verified_title": verified_title,
}
async def _run_osascript(self, script: str) -> dict[str, str]:
process = await asyncio.create_subprocess_exec(
"osascript",
"-e",
script,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await process.communicate()
stdout_text = stdout.decode("utf-8", errors="replace").strip()
stderr_text = stderr.decode("utf-8", errors="replace").strip()
if process.returncode != 0:
return {"status": "error", "message": stderr_text or "AppleScript command failed.", "stdout": stdout_text}
return {"status": "ok", "message": "", "stdout": stdout_text}
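Two details in the helpers above are easy to get wrong: backslashes must be escaped before quotes, and Apple Notes treats the note body as HTML. A standalone sketch (module-level names stand in for the private helpers):

```python
def escape_applescript(value: str) -> str:
    # Mirrors _escape_applescript: escape backslashes before quotes,
    # otherwise the backslashes added for quotes would be escaped twice.
    return value.replace("\\", "\\\\").replace('"', '\\"')

def body_to_notes_html(title: str, body: str) -> str:
    # Mirrors _body_to_notes_html: Notes renders the body as HTML,
    # so newlines become <br> and the title leads the document.
    if not body:
        return title
    html_body = body.replace("\n", "<br>")
    return f"{title}<br><br>{html_body}"

print(escape_applescript('say "hi"'))       # say \"hi\"
print(body_to_notes_html("Plan", "a\nb"))   # Plan<br><br>a<br>b
```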

View File

@@ -6,7 +6,19 @@ class Tool(ABC):
    name: str
    description: str
def definition(self) -> dict[str, Any]:
return {
"type": "function",
"function": {
"name": self.name,
"description": self.description,
"parameters": self.parameters_schema(),
},
}
def parameters_schema(self) -> dict[str, Any]:
return {"type": "object", "properties": {}}
    @abstractmethod
    async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
        raise NotImplementedError
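The new `definition()` method produces the OpenAI-style function-calling envelope that a tool-capable provider expects. A self-contained sketch with a hypothetical `EchoTool` subclass to show the shape:

```python
from abc import ABC, abstractmethod
from typing import Any

class Tool(ABC):
    name: str
    description: str
    def definition(self) -> dict[str, Any]:
        # Function-calling envelope built from the subclass attributes.
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.parameters_schema(),
            },
        }
    def parameters_schema(self) -> dict[str, Any]:
        return {"type": "object", "properties": {}}
    @abstractmethod
    async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
        raise NotImplementedError

class EchoTool(Tool):
    # Hypothetical subclass used only to show the envelope shape.
    name = "echo"
    description = "Echo the payload back."
    async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
        return payload

print(EchoTool().definition()["function"]["name"])  # echo
```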

View File

@@ -1,3 +1,4 @@
import httpx
from typing import Any
from app.tools.base import Tool
@@ -7,12 +8,119 @@ class BraveSearchTool(Tool):
name = "brave_search"
description = "Search the web with Brave Search."
def __init__(self, api_key: str) -> None:
self.api_key = api_key
def parameters_schema(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The web search query.",
},
"count": {
"type": "integer",
"description": "Optional number of results from 1 to 10.",
"minimum": 1,
"maximum": 10,
},
"mode": {
"type": "string",
"description": "Search mode: web or images.",
"enum": ["web", "images"],
},
},
"required": ["query"],
"additionalProperties": False,
}
async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
query = str(payload.get("query", "")).strip()
count = int(payload.get("count", 5) or 5)
count = max(1, min(10, count))
mode = str(payload.get("mode", "web") or "web").strip().lower()
if mode not in {"web", "images"}:
mode = "web"
if not query:
return {
"tool": self.name,
"status": "error",
"message": "Query is required.",
}
if not self.api_key:
return {
"tool": self.name,
"status": "error",
"query": query,
"message": "Brave Search API key is not configured.",
}
try:
async with httpx.AsyncClient(timeout=15.0) as client:
response = await client.get(
"https://api.search.brave.com/res/v1/images/search"
if mode == "images"
else "https://api.search.brave.com/res/v1/web/search",
headers={
"Accept": "application/json",
"Accept-Encoding": "gzip",
"X-Subscription-Token": self.api_key,
},
params={
"q": query,
"count": count,
"search_lang": "en",
"country": "us",
},
)
response.raise_for_status()
except httpx.HTTPError as exc:
return {
"tool": self.name,
"status": "error",
"query": query,
"message": str(exc),
}
payload_json = response.json()
if mode == "images":
images = []
for item in payload_json.get("results", [])[:count]:
images.append(
{
"title": item.get("title", ""),
"url": item.get("url", ""),
"source": item.get("source", ""),
"thumbnail": item.get("thumbnail", {}).get("src", "") if isinstance(item.get("thumbnail"), dict) else "",
"properties_url": item.get("properties", {}).get("url", "") if isinstance(item.get("properties"), dict) else "",
}
)
return {
"tool": self.name,
"status": "ok",
"mode": mode,
"query": query,
"images": images,
"total_results": len(images),
}
results = []
for item in payload_json.get("web", {}).get("results", [])[:count]:
results.append(
{
"title": item.get("title", ""),
"url": item.get("url", ""),
"description": item.get("description", ""),
}
)
return {
"tool": self.name,
"status": "ok",
"mode": mode,
"query": query,
"results": results,
"total_results": len(results),
}
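The web branch reduces each Brave result to three fields before returning it to the model. A small sketch of that shaping step, run against a hand-written sample payload (not real API output):

```python
def shape_web_results(payload_json: dict, count: int) -> list[dict[str, str]]:
    # Same shaping as BraveSearchTool.run for mode="web": keep only
    # title, url, and description from each result.
    results = []
    for item in payload_json.get("web", {}).get("results", [])[:count]:
        results.append(
            {
                "title": item.get("title", ""),
                "url": item.get("url", ""),
                "description": item.get("description", ""),
            }
        )
    return results


sample = {"web": {"results": [{"title": "WiseClaw", "url": "https://example.com", "description": "repo"}]}}
shaped = shape_web_results(sample, 5)
```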

View File

@@ -0,0 +1,296 @@
import asyncio
import json
import os
from pathlib import Path
from typing import Any
from urllib.parse import urlparse
import httpx
from app.config import Settings
from app.models import RuntimeSettings
from app.tools.base import Tool
class BrowserUseTool(Tool):
name = "browser_use"
description = (
"Use the browser-use agent for higher-level real browser tasks such as navigating sites, "
"extracting lists, comparing items, and completing multi-step browsing workflows."
)
def __init__(self, workspace_root: Path, runtime: RuntimeSettings, settings: Settings, api_key: str) -> None:
self.workspace_root = workspace_root.resolve()
self.runtime = runtime
self.settings = settings
self.api_key = api_key
self.debug_port = 9223 + (abs(hash(str(self.workspace_root))) % 200)
self.chromium_path = (
Path.home()
/ "Library"
/ "Caches"
/ "ms-playwright"
/ "chromium-1194"
/ "chrome-mac"
/ "Chromium.app"
/ "Contents"
/ "MacOS"
/ "Chromium"
)
def parameters_schema(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"task": {
"type": "string",
"description": "The high-level browser task to complete.",
},
"start_url": {
"type": "string",
"description": "Optional URL to open first before the agent starts.",
},
"max_steps": {
"type": "integer",
"description": "Maximum browser-use steps before stopping. Defaults to 20.",
},
"keep_alive": {
"type": "boolean",
"description": "Keep the browser open after the run finishes.",
},
"allowed_domains": {
"type": "array",
"items": {"type": "string"},
"description": "Optional list of allowed domains for the run.",
},
},
"required": ["task"],
"additionalProperties": False,
}
async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
task = str(payload.get("task", "")).strip()
if not task:
return {"tool": self.name, "status": "error", "message": "task is required."}
start_url = str(payload.get("start_url", "")).strip()
max_steps = int(payload.get("max_steps", 20))
keep_alive = bool(payload.get("keep_alive", False))
allowed_domains = self._normalize_domains(payload.get("allowed_domains"))
if start_url and not allowed_domains:
host = urlparse(start_url).netloc
if host:
allowed_domains = [host]
llm_error = self._provider_readiness_error()
if llm_error is not None:
return {"tool": self.name, "status": "error", "message": llm_error}
try:
result = await self._run_agent(
task=self._compose_task(task, start_url),
max_steps=max_steps,
keep_alive=keep_alive,
allowed_domains=allowed_domains,
)
except Exception as exc:
return {
"tool": self.name,
"status": "error",
"message": str(exc),
}
return {
"tool": self.name,
"status": "ok" if result["success"] else "error",
**result,
}
async def _run_agent(
self,
task: str,
max_steps: int,
keep_alive: bool,
allowed_domains: list[str],
) -> dict[str, Any]:
from browser_use import Agent, Browser, ChatAnthropic, ChatOpenAI
cdp_url = await self._ensure_persistent_browser()
browser = Browser(
cdp_url=cdp_url,
is_local=True,
keep_alive=True,
allowed_domains=allowed_domains or None,
)
llm = self._build_llm(ChatAnthropic=ChatAnthropic, ChatOpenAI=ChatOpenAI)
agent = Agent(
task=task,
llm=llm,
browser=browser,
use_vision=True,
enable_planning=False,
max_actions_per_step=3,
display_files_in_done_text=False,
)
try:
history = await agent.run(max_steps=max_steps)
final_result = history.final_result() or ""
extracted = history.extracted_content()
errors = [error for error in history.errors() if error]
urls = [url for url in history.urls() if url]
return {
"success": bool(history.is_successful()),
"final_result": final_result,
"extracted_content": extracted[-10:],
"errors": errors[-5:],
"urls": urls[-10:],
"steps": history.number_of_steps(),
"actions": history.action_names()[-20:],
}
finally:
await agent.close()
def _build_llm(self, ChatAnthropic: Any, ChatOpenAI: Any) -> Any:
if self.runtime.model_provider == "zai":
return ChatAnthropic(
model=self.runtime.zai_model,
api_key=self.api_key,
base_url=self.settings.zai_base_url,
timeout=180.0,
)
return ChatOpenAI(
model=self.runtime.local_model,
api_key="lm-studio",
base_url=f"{self.runtime.local_base_url.rstrip('/')}/v1",
timeout=180.0,
)
def _provider_readiness_error(self) -> str | None:
if self.runtime.model_provider == "zai" and not self.api_key.strip():
return "Z.AI API key is not configured."
if self.runtime.model_provider == "local" and not self.runtime.local_base_url.strip():
return "Local model base URL is not configured."
return None
def _compose_task(self, task: str, start_url: str) -> str:
instructions = [
"Work in a real browser on macOS.",
"If the task asks for list extraction, return concise structured text.",
"If a captcha or login wall blocks progress, stop immediately and say that user action is required.",
"Do not click third-party sign-in buttons such as Google, Apple, or GitHub OAuth buttons.",
"Do not open or interact with login popups or OAuth consent windows.",
"If authentication is required, leave the page open in the persistent browser and tell the user to complete login manually, then retry the task.",
"Do not submit irreversible forms or purchases unless the user explicitly asked for it.",
]
if start_url:
instructions.append(f"Start at this URL first: {start_url}")
instructions.append(task)
return "\n".join(instructions)
def _normalize_domains(self, value: object) -> list[str]:
if not isinstance(value, list):
return []
return [str(item).strip() for item in value if str(item).strip()]
def _profile_root(self) -> Path:
profile_root = self.workspace_root / ".wiseclaw" / "browser-use-profile"
profile_root.mkdir(parents=True, exist_ok=True)
(profile_root / "WiseClaw").mkdir(parents=True, exist_ok=True)
return profile_root
async def _ensure_persistent_browser(self) -> str:
state = self._load_browser_state()
if state and self._pid_is_running(int(state.get("pid", 0))):
cdp_url = await self._fetch_cdp_url(int(state["port"]))
if cdp_url:
return cdp_url
await self._launch_persistent_browser()
cdp_url = await self._wait_for_cdp_url()
self._save_browser_state({"pid": self._read_pid_file(), "port": self.debug_port})
return cdp_url
async def _launch_persistent_browser(self) -> None:
executable = str(self.chromium_path if self.chromium_path.exists() else "Chromium")
profile_root = self._profile_root()
args = [
executable,
f"--remote-debugging-port={self.debug_port}",
f"--user-data-dir={profile_root}",
"--profile-directory=WiseClaw",
"--no-first-run",
"--no-default-browser-check",
"--start-maximized",
"about:blank",
]
process = await asyncio.create_subprocess_exec(
*args,
stdout=asyncio.subprocess.DEVNULL,
stderr=asyncio.subprocess.DEVNULL,
start_new_session=True,
)
self._write_pid_file(process.pid)
async def _wait_for_cdp_url(self) -> str:
for _ in range(40):
cdp_url = await self._fetch_cdp_url(self.debug_port)
if cdp_url:
return cdp_url
await asyncio.sleep(0.5)
raise RuntimeError("Persistent Chromium browser did not expose a CDP endpoint in time.")
async def _fetch_cdp_url(self, port: int) -> str:
try:
async with httpx.AsyncClient(timeout=2.0) as client:
response = await client.get(f"http://127.0.0.1:{port}/json/version")
response.raise_for_status()
except httpx.HTTPError:
return ""
payload = response.json()
return str(payload.get("webSocketDebuggerUrl", ""))
def _browser_state_path(self) -> Path:
return self.workspace_root / ".wiseclaw" / "browser-use-browser.json"
def _browser_pid_path(self) -> Path:
return self.workspace_root / ".wiseclaw" / "browser-use-browser.pid"
def _load_browser_state(self) -> dict[str, int] | None:
path = self._browser_state_path()
if not path.exists():
return None
try:
return json.loads(path.read_text(encoding="utf-8"))
except json.JSONDecodeError:
return None
def _save_browser_state(self, payload: dict[str, int]) -> None:
path = self._browser_state_path()
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(payload), encoding="utf-8")
def _write_pid_file(self, pid: int) -> None:
path = self._browser_pid_path()
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(str(pid), encoding="utf-8")
def _read_pid_file(self) -> int:
path = self._browser_pid_path()
if not path.exists():
return 0
try:
return int(path.read_text(encoding="utf-8").strip())
except ValueError:
return 0
def _pid_is_running(self, pid: int) -> bool:
if pid <= 0:
return False
try:
os.kill(pid, 0)
except OSError:
return False
return True
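One caveat in the port derivation above: Python's built-in `hash()` is salted per process for strings, so `debug_port` can change between backend restarts; the saved state file compensates by recording the old port. A deterministic alternative, sketched with `hashlib` (hypothetical helper name):

```python
import hashlib


def stable_debug_port(workspace_root: str, base: int = 9223, span: int = 200) -> int:
    # Deterministic across interpreter restarts, unlike built-in hash(),
    # which is salted per process for strings.
    digest = hashlib.sha256(workspace_root.encode("utf-8")).digest()
    return base + int.from_bytes(digest[:4], "big") % span


port = stable_debug_port("/Users/example/wiseclaw")
```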

View File

@@ -6,16 +6,100 @@ from app.tools.base import Tool
class FilesTool(Tool):
name = "files"
description = "Read, list, and write files within the workspace."
def __init__(self, workspace_root: Path) -> None:
self.workspace_root = workspace_root.resolve()
def parameters_schema(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"action": {
"type": "string",
"enum": ["read", "list", "write"],
"description": "Use read to read a file, list to list a directory, or write to create/update a file.",
},
"path": {
"type": "string",
"description": "Absolute or relative path inside the workspace.",
},
"content": {
"type": "string",
"description": "File content for write operations.",
},
},
"required": ["action", "path"],
"additionalProperties": False,
}
async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
action = str(payload.get("action", "read")).strip()
raw_path = str(payload.get("path", "")).strip()
path = self._resolve_path(raw_path)
if action == "read":
if not path.exists():
return {"tool": self.name, "status": "error", "message": f"Path not found: {path}"}
if path.is_dir():
return {"tool": self.name, "status": "error", "message": f"Path is a directory: {path}"}
content = path.read_text(encoding="utf-8", errors="replace")
return {
"tool": self.name,
"status": "ok",
"action": action,
"path": str(path),
"content": content[:12000],
"truncated": len(content) > 12000,
}
if action == "list":
if not path.exists():
return {"tool": self.name, "status": "error", "message": f"Path not found: {path}"}
if not path.is_dir():
return {"tool": self.name, "status": "error", "message": f"Path is not a directory: {path}"}
entries = []
for child in sorted(path.iterdir(), key=lambda item: item.name.lower())[:200]:
entries.append(
{
"name": child.name,
"type": "dir" if child.is_dir() else "file",
}
)
return {
"tool": self.name,
"status": "ok",
"action": action,
"path": str(path),
"entries": entries,
"truncated": len(entries) >= 200,
}
if action == "write":
content = str(payload.get("content", ""))
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(content, encoding="utf-8")
return {
"tool": self.name,
"status": "ok",
"action": action,
"path": str(path),
"bytes_written": len(content.encode("utf-8")),
}
return {
"tool": self.name,
"status": "error",
"message": f"Unsupported action: {action}. Allowed actions are read, list, and write.",
}
def _resolve_path(self, raw_path: str) -> Path:
candidate = Path(raw_path).expanduser()
if not candidate.is_absolute():
candidate = (self.workspace_root / candidate).resolve()
else:
candidate = candidate.resolve()
if self.workspace_root not in candidate.parents and candidate != self.workspace_root:
raise ValueError(f"Path is outside the workspace: {candidate}")
return candidate
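`_resolve_path` confines every file operation to the workspace by resolving the path first and only then checking parentage. The same rule as a standalone sketch (hypothetical helper name):

```python
from pathlib import Path


def resolve_in_workspace(workspace_root: Path, raw_path: str) -> Path:
    # Same containment rule as FilesTool._resolve_path: resolve first,
    # then require the workspace root to be the path itself or a parent.
    root = workspace_root.resolve()
    candidate = Path(raw_path).expanduser()
    candidate = candidate.resolve() if candidate.is_absolute() else (root / candidate).resolve()
    if root not in candidate.parents and candidate != root:
        raise ValueError(f"Path is outside the workspace: {candidate}")
    return candidate


root = Path("/tmp/wiseclaw-demo")
inside = resolve_in_workspace(root, "notes/todo.md")
```

Resolving before checking matters: a relative path like `../escape.txt` collapses to a location outside the root and is rejected.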

View File

@@ -0,0 +1,47 @@
from pathlib import Path
from sqlalchemy.orm import Session
from app.config import get_settings
from app.db import SecretORM
from app.models import RuntimeSettings
from app.tools.apple_notes import AppleNotesTool
from app.tools.browser_use import BrowserUseTool
from app.tools.brave_search import BraveSearchTool
from app.tools.files import FilesTool
from app.tools.second_brain import SecondBrainTool
from app.tools.terminal import TerminalTool
from app.tools.web_fetch import WebFetchTool
def build_tools(runtime: RuntimeSettings, workspace_root: Path, session: Session) -> dict[str, object]:
enabled = {tool.name for tool in runtime.tools if tool.enabled}
tools: dict[str, object] = {}
settings = get_settings()
if "files" in enabled:
tools["files"] = FilesTool(workspace_root)
if "apple_notes" in enabled:
tools["apple_notes"] = AppleNotesTool()
if "browser_use" in enabled:
secret = session.get(SecretORM, "zai_api_key")
api_key = secret.value if secret else settings.zai_api_key
tools["browser_use"] = BrowserUseTool(workspace_root, runtime, settings, api_key)
if "brave_search" in enabled and runtime.search_provider == "brave":
secret = session.get(SecretORM, "brave_api_key")
api_key = secret.value if secret else settings.brave_api_key
tools["brave_search"] = BraveSearchTool(api_key)
if "second_brain" in enabled:
secret = session.get(SecretORM, "anythingllm_api_key")
api_key = secret.value if secret else settings.anythingllm_api_key
tools["second_brain"] = SecondBrainTool(
base_url=runtime.anythingllm_base_url,
workspace_slug=runtime.anythingllm_workspace_slug,
api_key=api_key,
)
if "web_fetch" in enabled:
tools["web_fetch"] = WebFetchTool()
if "terminal" in enabled:
tools["terminal"] = TerminalTool(runtime.terminal_mode, workspace_root)
return tools

View File

@@ -0,0 +1,164 @@
from typing import Any
import httpx
from app.tools.base import Tool
class SecondBrainTool(Tool):
name = "second_brain"
description = "Search and retrieve context from the configured AnythingLLM workspace."
def __init__(self, base_url: str, workspace_slug: str, api_key: str) -> None:
self.base_url = base_url.rstrip("/")
self.workspace_slug = workspace_slug.strip().strip("/")
self.api_key = api_key.strip()
def parameters_schema(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The user question to search in the second brain workspace.",
},
"mode": {
"type": "string",
"description": "Workspace chat mode. Prefer query for retrieval-focused lookups.",
"enum": ["query", "chat"],
},
},
"required": ["query"],
"additionalProperties": False,
}
async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
query = str(payload.get("query", "")).strip()
mode = str(payload.get("mode", "query") or "query").strip().lower()
if mode not in {"query", "chat"}:
mode = "query"
if not query:
return {"tool": self.name, "status": "error", "message": "Query is required."}
if not self.base_url:
return {"tool": self.name, "status": "error", "message": "AnythingLLM base URL is not configured."}
if not self.workspace_slug:
return {"tool": self.name, "status": "error", "message": "AnythingLLM workspace slug is not configured."}
if not self.api_key:
return {"tool": self.name, "status": "error", "message": "AnythingLLM API key is not configured."}
endpoint = f"{self.base_url}/api/v1/workspace/{self.workspace_slug}/chat"
instructed_query = self._build_query_prompt(query, mode)
headers = {
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json",
}
payload_candidates = [
{
"message": instructed_query,
"mode": mode,
"sessionId": None,
"attachments": [],
},
{
"message": instructed_query,
"mode": "chat",
"sessionId": None,
"attachments": [],
},
{
"message": instructed_query,
"mode": "chat",
},
]
last_error = ""
response = None
try:
async with httpx.AsyncClient(timeout=30.0) as client:
for request_payload in payload_candidates:
response = await client.post(endpoint, headers=headers, json=request_payload)
if response.is_success:
break
last_error = self._format_error(response)
if response.status_code != 400:
response.raise_for_status()
else:
return {
"tool": self.name,
"status": "error",
"query": query,
"workspace_slug": self.workspace_slug,
"message": last_error or "AnythingLLM request failed.",
}
except httpx.HTTPError as exc:
return {
"tool": self.name,
"status": "error",
"query": query,
"workspace_slug": self.workspace_slug,
"message": str(exc),
}
data = response.json() if response is not None else {}
text_response = self._extract_text_response(data)
sources = self._extract_sources(data)
return {
"tool": self.name,
"status": "ok",
"query": query,
"mode": mode,
"workspace_slug": self.workspace_slug,
"context": text_response,
"sources": sources,
"raw": data,
}
def _build_query_prompt(self, query: str, mode: str) -> str:
if mode == "query":
return (
"Only answer the exact question using the workspace context. "
"Do not add commentary, headings, bullets, extra notes, names, or related reminders. "
"If the answer contains a date and place, return only that information in one short sentence. "
"Question: "
f"{query}"
)
return query
def _format_error(self, response: httpx.Response) -> str:
try:
payload = response.json()
except ValueError:
return f"HTTP {response.status_code}"
if isinstance(payload, dict):
for key in ("error", "message"):
value = payload.get(key)
if isinstance(value, str) and value.strip():
return value.strip()
return f"HTTP {response.status_code}"
def _extract_text_response(self, data: Any) -> str:
if isinstance(data, dict):
for key in ("textResponse", "response", "answer", "text", "message"):
value = data.get(key)
if isinstance(value, str) and value.strip():
return value.strip()
return ""
def _extract_sources(self, data: Any) -> list[dict[str, str]]:
if not isinstance(data, dict):
return []
raw_sources = data.get("sources", [])
if not isinstance(raw_sources, list):
return []
sources: list[dict[str, str]] = []
for item in raw_sources[:6]:
if not isinstance(item, dict):
continue
sources.append(
{
"title": str(item.get("title") or item.get("source") or item.get("url") or "").strip(),
"url": str(item.get("url") or "").strip(),
"snippet": str(item.get("text") or item.get("snippet") or item.get("description") or "").strip(),
}
)
return sources
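`_extract_text_response` tolerates several AnythingLLM response shapes by scanning a list of fallback keys. The scan in isolation, against a hand-written sample:

```python
def extract_text_response(data: object) -> str:
    # Same fallback-key scan as SecondBrainTool._extract_text_response.
    if isinstance(data, dict):
        for key in ("textResponse", "response", "answer", "text", "message"):
            value = data.get(key)
            if isinstance(value, str) and value.strip():
                return value.strip()
    return ""


sample = {"textResponse": "  The meeting is on Friday.  ", "sources": []}
answer = extract_text_response(sample)
```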

View File

@@ -1,3 +1,6 @@
import asyncio
import subprocess
from pathlib import Path
from typing import Any
from app.security import evaluate_terminal_command
@@ -8,17 +11,115 @@ class TerminalTool(Tool):
name = "terminal"
description = "Run terminal commands under WiseClaw policy."
def __init__(self, terminal_mode: int, workspace_root: Path) -> None:
self.terminal_mode = terminal_mode
self.workspace_root = workspace_root.resolve()
def parameters_schema(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"command": {
"type": "string",
"description": "A single shell command. Only safe approved prefixes run automatically.",
},
"background": {
"type": "boolean",
"description": "Run the command in the background for long-lived local servers.",
},
"workdir": {
"type": "string",
"description": "Optional relative workspace directory for the command.",
},
},
"required": ["command"],
"additionalProperties": False,
}
async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
command = str(payload.get("command", "")).strip()
background = bool(payload.get("background", False))
workdir = self._resolve_workdir(str(payload.get("workdir", "")).strip()) if payload.get("workdir") else self.workspace_root
decision = evaluate_terminal_command(command, self.terminal_mode)
if decision.decision != "allow":
return {
"tool": self.name,
"status": "approval_required" if decision.decision == "approval" else "blocked",
"command": command,
"decision": decision.decision,
"reason": decision.reason,
}
if background:
return self._run_background(command, decision.reason, workdir)
try:
process = await asyncio.create_subprocess_shell(
command,
cwd=str(workdir),
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(process.communicate(), timeout=15.0)
except asyncio.TimeoutError:
process.kill()
await process.wait()
return {
"tool": self.name,
"status": "error",
"command": command,
"decision": decision.decision,
"reason": "Command timed out after 15 seconds.",
}
stdout_text = stdout.decode("utf-8", errors="replace")
stderr_text = stderr.decode("utf-8", errors="replace")
return {
"tool": self.name,
"status": "ok" if process.returncode == 0 else "error",
"command": command,
"decision": decision.decision,
"reason": decision.reason,
"workdir": str(workdir),
"exit_code": process.returncode,
"stdout": stdout_text[:12000],
"stderr": stderr_text[:12000],
"stdout_truncated": len(stdout_text) > 12000,
"stderr_truncated": len(stderr_text) > 12000,
}
def _run_background(self, command: str, reason: str, workdir: Path) -> dict[str, Any]:
logs_dir = self.workspace_root / ".wiseclaw" / "logs"
logs_dir.mkdir(parents=True, exist_ok=True)
log_path = logs_dir / f"terminal-{abs(hash((command, str(workdir))))}.log"
log_handle = log_path.open("ab")
process = subprocess.Popen(
command,
cwd=str(workdir),
shell=True,
stdout=log_handle,
stderr=subprocess.STDOUT,
start_new_session=True,
)
log_handle.close()
return {
"tool": self.name,
"status": "ok",
"command": command,
"decision": "allow",
"reason": reason,
"workdir": str(workdir),
"background": True,
"pid": process.pid,
"log_path": str(log_path),
}
def _resolve_workdir(self, raw_path: str) -> Path:
candidate = Path(raw_path).expanduser()
if not candidate.is_absolute():
candidate = (self.workspace_root / candidate).resolve()
else:
candidate = candidate.resolve()
if self.workspace_root not in candidate.parents and candidate != self.workspace_root:
raise ValueError(f"Workdir is outside the workspace: {candidate}")
if not candidate.exists() or not candidate.is_dir():
raise ValueError(f"Workdir is not a directory: {candidate}")
return candidate
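The foreground branch bounds every command with `asyncio.wait_for`. A minimal sketch of that pattern, with an explicit `kill()` on timeout so a hung child does not linger:

```python
import asyncio


async def run_with_timeout(command: str, timeout: float = 15.0) -> dict[str, object]:
    process = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, stderr = await asyncio.wait_for(process.communicate(), timeout=timeout)
    except asyncio.TimeoutError:
        process.kill()  # reap the child so a hung command does not leak
        await process.wait()
        return {"status": "error", "reason": f"Command timed out after {timeout} seconds."}
    return {
        "status": "ok" if process.returncode == 0 else "error",
        "exit_code": process.returncode,
        "stdout": stdout.decode("utf-8", errors="replace"),
    }


result = asyncio.run(run_with_timeout("echo done"))
```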

View File

@@ -1,5 +1,8 @@
import re
from typing import Any
import httpx
from app.tools.base import Tool
@@ -7,12 +10,56 @@ class WebFetchTool(Tool):
name = "web_fetch"
description = "Fetch a webpage and return simplified content."
def parameters_schema(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "The http or https URL to fetch.",
}
},
"required": ["url"],
"additionalProperties": False,
}
async def run(self, payload: dict[str, Any]) -> dict[str, Any]:
url = str(payload.get("url", "")).strip()
if not url.startswith(("http://", "https://")):
return {
"tool": self.name,
"status": "error",
"url": url,
"message": "Only http and https URLs are allowed.",
}
try:
async with httpx.AsyncClient(timeout=15.0, follow_redirects=True) as client:
response = await client.get(url)
response.raise_for_status()
except httpx.HTTPError as exc:
return {
"tool": self.name,
"status": "error",
"url": url,
"message": str(exc),
}
text = self._simplify_content(response.text)
return {
"tool": self.name,
"status": "ok",
"url": url,
"content_type": response.headers.get("content-type", ""),
"content": text[:12000],
"truncated": len(text) > 12000,
}
def _simplify_content(self, content: str) -> str:
text = re.sub(r"(?is)<script.*?>.*?</script>", " ", content)
text = re.sub(r"(?is)<style.*?>.*?</style>", " ", text)
text = re.sub(r"(?s)<[^>]+>", " ", text)
text = re.sub(r"&nbsp;", " ", text)
text = re.sub(r"&amp;", "&", text)
text = re.sub(r"\s+", " ", text)
return text.strip()
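`_simplify_content` is a regex-based HTML stripper, not a full parser: scripts and styles go first, then remaining tags, then the two entities it knows about, then whitespace collapse. The same steps as a standalone sketch:

```python
import re


def simplify_content(content: str) -> str:
    # Same stripping order as WebFetchTool._simplify_content.
    text = re.sub(r"(?is)<script.*?>.*?</script>", " ", content)
    text = re.sub(r"(?is)<style.*?>.*?</style>", " ", text)
    text = re.sub(r"(?s)<[^>]+>", " ", text)
    text = re.sub(r"&nbsp;", " ", text)
    text = re.sub(r"&amp;", "&", text)
    text = re.sub(r"\s+", " ", text)
    return text.strip()


html = "<html><script>var x=1;</script><p>Hello&nbsp;&amp;&nbsp;welcome</p></html>"
plain = simplify_content(html)
```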

View File

@@ -15,8 +15,9 @@ dependencies = [
"sqlalchemy>=2.0.39,<3.0.0",
"httpx>=0.28.0,<1.0.0",
"python-telegram-bot>=22.0,<23.0",
"browser-use>=0.12.2,<1.0.0",
"anthropic>=0.76.0,<1.0.0",
]
[tool.setuptools.packages.find]
where = ["."]

View File

@@ -5,7 +5,7 @@
WiseClaw uses a single FastAPI process with modular tool adapters:
- `telegram`: inbound/outbound bot handling and whitelist checks
- `llm`: LM Studio/OpenAI-compatible client and simple tool-routing planner
- `tools`: search, notes, files, terminal, and fetch tools
- `memory`: SQLite-backed short-term and long-term state
- `admin`: REST API for settings, logs, users, and health
@@ -24,7 +24,6 @@ WiseClaw uses a single FastAPI process with modular tool adapters:
1. Add SQLAlchemy models and Alembic migrations.
2. Replace placeholder services with real SQLite persistence.
3. Wire Telegram webhook or polling loop.
4. Add LM Studio-driven tool calling.
5. Persist secrets in macOS Keychain.
6. Build audit views and approval flows in the admin panel.

View File

@@ -0,0 +1,83 @@
---
date: 2026-03-21
topic: model-provider-switch
---
# Model Provider Switch
## What We're Building
We are adding a global model provider selection to the WiseClaw admin panel. The administrator can either activate the existing local LM Studio flow or switch to the z.ai provider and use the `glm-4.7` or `glm-5` models with an API key.
This selection becomes a shared runtime setting for all new requests: Telegram, admin tests, and the backend orchestration all use the same LLM client according to the selected provider.
## Why This Approach
A single global provider selection is the simplest and safest solution. Per-user or per-chat selection would add unnecessary complexity at this stage; secret management, UI, auditing, and debugging would all become harder.
Because z.ai offers an OpenAI-compatible API, the existing client architecture can be extended without major breakage. That makes a shared abstraction between LM Studio and z.ai the sensible design.
## Approaches Considered
### Approach A: Single Global Provider Setting
The provider is selected in the admin panel, only the relevant fields are shown, and the backend makes calls according to the selected provider.
Pros:
- Simplest user experience
- Predictable backend behavior
- Easy secret and runtime management
Cons:
- Two different providers cannot be used at the same time
- Experimental comparisons require manual switching
Best when: The product will run on a single active model path
### Approach B: Global Provider + Manual Override Field
The global selection is kept, but the provider/model can be overridden in some flows.
Pros:
- More flexible
- Easier testing and comparison
Cons:
- UI and backend complexity increases
- It becomes less clear which request ran on which model
Best when: Short-term A/B model experiments are planned
### Approach C: Separate Provider Tabs with Independent Configurations
Both the local and z.ai settings are always visible, but the active flag is kept separately.
Pros:
- All settings are visible on one screen
- Switching is fast
Cons:
- The UI gets crowded
- More structure than the first release needs
Best when: Frequent provider switching is expected
## Recommendation
Approach A.
This is the right path for the first release. In the admin panel:
- `Model Provider`: `local` / `zai`
- When `local` is selected: base URL + local model
- When `zai` is selected: API key + model dropdown (`glm-4.7`, `glm-5`)
A shared LLM gateway should be created on the backend. Depending on the selected provider:
- Local: the existing LM Studio/OpenAI-compatible endpoint
- Z.AI: the z.ai OpenAI-compatible endpoint + bearer/API key
## Key Decisions
- Provider selection is global: system behavior stays bound to a single active model.
- The z.ai API key is stored as a secret: it is never placed into regular runtime settings as plain text.
- The z.ai model list is fixed for now: `glm-4.7` and `glm-5`.
- The UI is conditional: only the selected provider's fields are shown.
- The backend is provider-aware: the current `ollama_base_url/default_model` approach is generalized into a `provider/base_url/model` structure.
## Open Questions
- Should z.ai use a fixed base URL that is hidden from the UI, or should it be shown as a readonly/default field?
- Will free-form model name entry beyond `glm-4.7` and `glm-5` be supported in the future?
## Next Steps
- Move on to the implementation plan at the `/workflows:plan` level
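The recommended gateway can be sketched as a small provider-aware resolver. All names here (`LLMTarget`, `resolve_target`) are hypothetical; the `lm-studio` placeholder key mirrors what `BrowserUseTool._build_llm` already does for local models:

```python
from dataclasses import dataclass


@dataclass
class LLMTarget:
    base_url: str
    model: str
    api_key: str


def resolve_target(provider: str, runtime: dict[str, str]) -> LLMTarget:
    # Hypothetical selection logic for the recommended Approach A:
    # one global provider flag decides which endpoint every request uses.
    if provider == "zai":
        return LLMTarget(runtime["zai_base_url"], runtime["zai_model"], runtime["zai_api_key"])
    base = runtime["local_base_url"].rstrip("/") + "/v1"
    return LLMTarget(base, runtime["local_model"], "lm-studio")


target = resolve_target(
    "local",
    {"local_base_url": "http://127.0.0.1:1234/", "local_model": "qwen3-vl-8b-instruct-mlx@5bit"},
)
```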

View File

@@ -0,0 +1,33 @@
---
date: 2026-03-22
topic: telegram-onboarding
---
# Telegram Onboarding
## What We're Building
We are adding a persistent onboarding conversation to WiseClaw that starts on Telegram with the `/tanışalım` command and asks 12 questions. This flow collects the user's name, purpose of use, tone preference, language preference, response length, working style, and boundaries.
The collected data is stored not in transient memory but as a structured user profile in SQLite. Even after a server restart, WiseClaw keeps talking to the same user in the same style.
## Why This Approach
An alternative was to write the answers only into the general memory table, but that approach would have been scattered, fragile, and hard to update. A separate profile + onboarding state model is more reliable, queryable, and better suited to personalization.
## Key Decisions
- `/tanışalım` will be a Telegram command: onboarding starts only on request or in a first-contact scenario.
- The 12 questions are asked one at a time: keeping a conversational feel instead of a long form.
- Each answer is saved immediately: an interrupted flow can resume where it left off.
- Data lives in a separate user profile table: for persistent personalization.
- A structured profile is injected into the prompt: tone, language, length, and workflow preferences apply to every reply.
- A short profile summary can also be written to memory: but the structured profile remains the source of truth.
## Open Questions
- Should onboarding trigger automatically on the first message, or start only with `/tanışalım`?
- Should profile editing in the admin panel be part of the first release, or are the Telegram commands enough?
## Next Steps
- Add the data model and onboarding state structure
- Build the Telegram command flow
- Add onboarding interception to the orchestrator
- Wire in the prompt personalization layer
- Add the `/profilim`, `/tercihlerim`, `/tanışalım_sifirla` helper commands

View File

@@ -2,21 +2,29 @@ import { FormEvent, useEffect, useState } from "react";
 import { api } from "./api";
 import type {
+  AutomationRecord,
   DashboardSnapshot,
   MemoryRecord,
   OllamaStatus,
   RuntimeSettings,
   TelegramStatus,
+  UserProfileRecord,
   UserRecord,
 } from "./types";
 const defaultSettings: RuntimeSettings = {
   terminal_mode: 3,
   search_provider: "brave",
-  ollama_base_url: "http://127.0.0.1:11434",
-  default_model: "qwen3.5:4b",
+  model_provider: "local",
+  local_base_url: "http://127.0.0.1:1234",
+  local_model: "qwen3-vl-8b-instruct-mlx@5bit",
+  zai_model: "glm-5",
+  anythingllm_base_url: "http://127.0.0.1:3001",
+  anythingllm_workspace_slug: "wiseclaw",
   tools: [
     { name: "brave_search", enabled: true },
+    { name: "second_brain", enabled: true },
+    { name: "browser_use", enabled: true },
     { name: "searxng_search", enabled: false },
     { name: "web_fetch", enabled: true },
     { name: "apple_notes", enabled: true },
@@ -29,12 +37,25 @@ export function App() {
   const [dashboard, setDashboard] = useState<DashboardSnapshot | null>(null);
   const [settings, setSettings] = useState<RuntimeSettings>(defaultSettings);
   const [users, setUsers] = useState<UserRecord[]>([]);
+  const [profiles, setProfiles] = useState<UserProfileRecord[]>([]);
+  const [automations, setAutomations] = useState<AutomationRecord[]>([]);
   const [memory, setMemory] = useState<MemoryRecord[]>([]);
   const [secretMask, setSecretMask] = useState("");
   const [secretValue, setSecretValue] = useState("");
+  const [zaiSecretMask, setZaiSecretMask] = useState("");
+  const [zaiSecretValue, setZaiSecretValue] = useState("");
+  const [anythingSecretMask, setAnythingSecretMask] = useState("");
+  const [anythingSecretValue, setAnythingSecretValue] = useState("");
   const [ollamaStatus, setOllamaStatus] = useState<OllamaStatus | null>(null);
   const [telegramStatus, setTelegramStatus] = useState<TelegramStatus | null>(null);
   const [status, setStatus] = useState("Loading WiseClaw admin...");
+  const providerLabel = settings.model_provider === "local" ? "Local (LM Studio)" : "Z.AI";
+  const searchProviderLabel = settings.search_provider === "brave" ? "Brave" : "SearXNG";
+  const llmStatusLabel = settings.model_provider === "local" ? "LM Studio status" : "Z.AI status";
+  const llmStatusHint =
+    settings.model_provider === "local"
+      ? "Checking local model endpoint..."
+      : "Checking remote Z.AI endpoint...";
   useEffect(() => {
     void load();
@@ -42,21 +63,29 @@ export function App() {
   async function load() {
     try {
-      const [dashboardData, settingsData, userData, memoryData, secretData, ollamaData, telegramData] =
+      const [dashboardData, settingsData, userData, profileData, automationData, memoryData, secretData, zaiSecretData, anythingSecretData, ollamaData, telegramData] =
         await Promise.all([
           api.getDashboard(),
           api.getSettings(),
           api.getUsers(),
+          api.getProfiles(),
+          api.getAutomations(),
           api.getMemory(),
           api.getSecretMask("brave_api_key"),
+          api.getSecretMask("zai_api_key"),
+          api.getSecretMask("anythingllm_api_key"),
           api.getOllamaStatus(),
           api.getTelegramStatus(),
         ]);
       setDashboard(dashboardData);
       setSettings(settingsData);
       setUsers(userData);
+      setProfiles(profileData);
+      setAutomations(automationData);
       setMemory(memoryData);
       setSecretMask(secretData.masked);
+      setZaiSecretMask(zaiSecretData.masked);
+      setAnythingSecretMask(anythingSecretData.masked);
       setOllamaStatus(ollamaData);
       setTelegramStatus(telegramData);
       setStatus("WiseClaw admin ready.");
@@ -84,6 +113,28 @@ export function App() {
     await load();
   }

+  async function handleZaiSecretSubmit(event: FormEvent) {
+    event.preventDefault();
+    if (!zaiSecretValue.trim()) {
+      return;
+    }
+    await api.saveSecret("zai_api_key", zaiSecretValue.trim());
+    setZaiSecretValue("");
+    setStatus("Z.AI API key updated.");
+    await load();
+  }
+
+  async function handleAnythingSecretSubmit(event: FormEvent) {
+    event.preventDefault();
+    if (!anythingSecretValue.trim()) {
+      return;
+    }
+    await api.saveSecret("anythingllm_api_key", anythingSecretValue.trim());
+    setAnythingSecretValue("");
+    setStatus("AnythingLLM API key updated.");
+    await load();
+  }
+
   async function handleAddUser(event: FormEvent<HTMLFormElement>) {
     event.preventDefault();
     const form = new FormData(event.currentTarget);
@@ -133,7 +184,7 @@ export function App() {
         </div>
         <div>
           <span>Model</span>
-          <strong>{settings.default_model}</strong>
+          <strong>{settings.model_provider === "local" ? settings.local_model : settings.zai_model}</strong>
         </div>
       </div>
     </aside>
@@ -151,21 +202,21 @@ export function App() {
         </div>
         <div>
           <span>Search provider</span>
-          <strong>{settings.search_provider}</strong>
+          <strong>{searchProviderLabel}</strong>
         </div>
         <div>
-          <span>Ollama</span>
-          <strong>{settings.ollama_base_url}</strong>
+          <span>Provider</span>
+          <strong>{providerLabel}</strong>
         </div>
       </div>
       <div className="integration-grid">
         <div className="integration-card">
-          <span>Ollama status</span>
+          <span>{llmStatusLabel}:</span>
           <strong>{ollamaStatus?.reachable ? "Reachable" : "Offline"}</strong>
-          <p>{ollamaStatus?.message || "Checking..."}</p>
+          <p>{ollamaStatus?.message || llmStatusHint}</p>
         </div>
         <div className="integration-card">
-          <span>Telegram status</span>
+          <span>Telegram status:</span>
           <strong>{telegramStatus?.configured ? "Configured" : "Missing token"}</strong>
           <p>{telegramStatus?.message || "Checking..."}</p>
         </div>
@@ -196,6 +247,22 @@ export function App() {
           </select>
         </label>

+        <label>
+          Model provider
+          <select
+            value={settings.model_provider}
+            onChange={(event) =>
+              setSettings({
+                ...settings,
+                model_provider: event.target.value as "local" | "zai",
+              })
+            }
+          >
+            <option value="local">Local (LM Studio)</option>
+            <option value="zai">Z.AI</option>
+          </select>
+        </label>
         <label>
           Search provider
           <select
@@ -213,21 +280,59 @@ export function App() {
         </label>

         <label>
-          Ollama base URL
+          AnythingLLM base URL
           <input
-            value={settings.ollama_base_url}
-            onChange={(event) => setSettings({ ...settings, ollama_base_url: event.target.value })}
+            value={settings.anythingllm_base_url}
+            onChange={(event) => setSettings({ ...settings, anythingllm_base_url: event.target.value })}
+            placeholder="http://127.0.0.1:3001"
           />
         </label>

         <label>
-          Default model
+          AnythingLLM workspace slug
           <input
-            value={settings.default_model}
-            onChange={(event) => setSettings({ ...settings, default_model: event.target.value })}
+            value={settings.anythingllm_workspace_slug}
+            onChange={(event) => setSettings({ ...settings, anythingllm_workspace_slug: event.target.value })}
+            placeholder="wiseclaw"
           />
         </label>

+        {settings.model_provider === "local" ? (
+          <>
+            <label>
+              LM Studio base URL
+              <input
+                value={settings.local_base_url}
+                onChange={(event) => setSettings({ ...settings, local_base_url: event.target.value })}
+              />
+            </label>
+            <label>
+              Local model
+              <input
+                value={settings.local_model}
+                onChange={(event) => setSettings({ ...settings, local_model: event.target.value })}
+              />
+            </label>
+          </>
+        ) : (
+          <>
+            <p className="muted">Z.AI uses the fixed hosted API endpoint and the API key saved below.</p>
+            <label>
+              Z.AI model
+              <select
+                value={settings.zai_model}
+                onChange={(event) =>
+                  setSettings({ ...settings, zai_model: event.target.value as "glm-4.7" | "glm-5" })
+                }
+              >
+                <option value="glm-4.7">glm-4.7</option>
+                <option value="glm-5">glm-5</option>
+              </select>
+            </label>
+          </>
+        )}
         <div className="tool-list">
           {settings.tools.map((tool) => (
             <label key={tool.name} className="checkbox-row">
@@ -250,7 +355,7 @@ export function App() {
       </form>

       <div className="stack">
-        <form className="panel" onSubmit={handleSecretSubmit}>
+        <form className="panel secret-panel" onSubmit={handleSecretSubmit}>
           <div className="panel-head">
             <h3>Secrets</h3>
             <button type="submit">Update</button>
@@ -267,6 +372,40 @@ export function App() {
           </label>
         </form>

+        <form className="panel secret-panel" onSubmit={handleZaiSecretSubmit}>
+          <div className="panel-head">
+            <h3>Z.AI Secret</h3>
+            <button type="submit">Update</button>
+          </div>
+          <p className="muted">Current Z.AI key: {zaiSecretMask || "not configured"}</p>
+          <label>
+            Z.AI API key
+            <input
+              type="password"
+              value={zaiSecretValue}
+              onChange={(event) => setZaiSecretValue(event.target.value)}
+              placeholder="Paste a new key"
+            />
+          </label>
+        </form>
+
+        <form className="panel secret-panel" onSubmit={handleAnythingSecretSubmit}>
+          <div className="panel-head">
+            <h3>AnythingLLM Secret</h3>
+            <button type="submit">Update</button>
+          </div>
+          <p className="muted">Current AnythingLLM key: {anythingSecretMask || "not configured"}</p>
+          <label>
+            AnythingLLM API key
+            <input
+              type="password"
+              value={anythingSecretValue}
+              onChange={(event) => setAnythingSecretValue(event.target.value)}
+              placeholder="Paste a new key"
+            />
+          </label>
+        </form>
+
         <form className="panel" onSubmit={handleAddUser}>
           <div className="panel-head">
             <h3>Telegram Whitelist</h3>
@@ -297,6 +436,75 @@ export function App() {
         </div>
       </section>

+      <section className="grid two-up">
+        <div className="panel compact-fixed-panel">
+          <div className="panel-head">
+            <h3>User Profiles</h3>
+          </div>
+          <div className="list compact-scroll-list">
+            {profiles.length === 0 ? <span className="muted">No onboarding profiles yet.</span> : null}
+            {profiles.map((profile) => (
+              <div key={profile.telegram_user_id} className="list-row">
+                <strong>
+                  {profile.display_name || `User ${profile.telegram_user_id}`} ·{" "}
+                  {profile.onboarding_completed
+                    ? "Onboarding complete"
+                    : `Step ${profile.last_onboarding_step + 1}/12`}
+                </strong>
+                <div>Telegram ID: {profile.telegram_user_id}</div>
+                <div>Ton: {profile.tone_preference || "belirtilmedi"}</div>
+                <div>Dil: {profile.language_preference || "belirtilmedi"}</div>
+                <div>Cevap uzunluğu: {profile.response_length || "belirtilmedi"}</div>
+                <div>Çalışma biçimi: {profile.workflow_preference || "belirtilmedi"}</div>
+                <div>
+                  Kullanım amacı: {profile.primary_use_cases.length ? profile.primary_use_cases.join(", ") : "belirtilmedi"}
+                </div>
+                <div>
+                  Öncelikler: {profile.answer_priorities.length ? profile.answer_priorities.join(", ") : "belirtilmedi"}
+                </div>
+                <div>
+                  İlgi alanları: {profile.interests.length ? profile.interests.join(", ") : "belirtilmedi"}
+                </div>
+                <div>
+                  Onay beklentileri:{" "}
+                  {profile.approval_preferences.length ? profile.approval_preferences.join(", ") : "belirtilmedi"}
+                </div>
+                <div>Kaçınılacaklar: {profile.avoid_preferences || "belirtilmedi"}</div>
+              </div>
+            ))}
+          </div>
+        </div>
+
+        <div className="panel compact-fixed-panel">
+          <div className="panel-head">
+            <h3>Automations</h3>
+          </div>
+          <div className="list compact-scroll-list">
+            {automations.length === 0 ? <span className="muted">No automations yet.</span> : null}
+            {automations.map((automation) => (
+              <div key={automation.id} className="list-row automation-row">
+                <strong>
+                  #{automation.id} {automation.name} · {automation.status}
+                </strong>
+                <div>Telegram ID: {automation.telegram_user_id}</div>
+                <div>Prompt: {automation.prompt}</div>
+                <div>
+                  Schedule:{" "}
+                  {automation.schedule_type === "hourly"
+                    ? `every ${automation.interval_hours || 1} hour(s)`
+                    : automation.schedule_type}
+                </div>
+                {automation.time_of_day ? <div>Time: {automation.time_of_day}</div> : null}
+                {automation.days_of_week.length ? <div>Days: {automation.days_of_week.join(", ")}</div> : null}
+                <div>Next run: {automation.next_run_at || "not scheduled"}</div>
+                <div>Last run: {automation.last_run_at || "never"}</div>
+                <div>Last result: {automation.last_result || "no result yet"}</div>
+              </div>
+            ))}
+          </div>
+        </div>
+      </section>
+
       <section className="grid two-up">
         <div className="panel">
           <div className="panel-head">
@@ -305,7 +513,7 @@ export function App() {
             Clear
           </button>
         </div>
-        <div className="list">
+        <div className="list scroll-list">
          {memory.length === 0 ? <span className="muted">No memory yet.</span> : null}
          {memory.map((item, index) => (
            <div key={`${item.id}-${index}`} className="list-row">
@@ -320,7 +528,7 @@ export function App() {
        <div className="panel-head">
          <h3>Recent Logs</h3>
        </div>
-        <div className="list">
+        <div className="list scroll-list">
          {(dashboard?.recent_logs || []).length === 0 ? (
            <span className="muted">No recent logs.</span>
          ) : null}

View File

@@ -1,13 +1,15 @@
 import type {
+  AutomationRecord,
   DashboardSnapshot,
   MemoryRecord,
   OllamaStatus,
   RuntimeSettings,
   TelegramStatus,
+  UserProfileRecord,
   UserRecord,
 } from "./types";

-const API_BASE = "http://127.0.0.1:8000";
+const API_BASE = `${window.location.protocol}//${window.location.hostname}:8000`;

 async function request<T>(path: string, init?: RequestInit): Promise<T> {
   const response = await fetch(`${API_BASE}${path}`, {
@@ -33,6 +35,8 @@ export const api = {
       body: JSON.stringify(payload),
     }),
   getUsers: () => request<UserRecord[]>("/admin/users"),
+  getProfiles: () => request<UserProfileRecord[]>("/admin/profiles"),
+  getAutomations: () => request<AutomationRecord[]>("/admin/automations"),
   addUser: (payload: UserRecord) =>
     request<UserRecord>("/admin/users", {
       method: "POST",
@@ -49,6 +53,6 @@ export const api = {
       method: "POST",
       body: JSON.stringify({ key, value }),
     }),
-  getOllamaStatus: () => request<OllamaStatus>("/admin/integrations/ollama"),
+  getOllamaStatus: () => request<OllamaStatus>("/admin/integrations/llm"),
   getTelegramStatus: () => request<TelegramStatus>("/admin/integrations/telegram"),
 };

View File

@@ -120,6 +120,7 @@ label {
   padding: 2rem;
   display: grid;
   gap: 1.4rem;
+  min-width: 0;
 }

 .panel {
@@ -129,6 +130,22 @@ label {
   padding: 1.2rem;
   backdrop-filter: blur(10px);
   box-shadow: 0 20px 60px rgba(72, 64, 39, 0.08);
+  min-width: 0;
+  overflow: hidden;
 }
+
+.fixed-log-panel {
+  display: grid;
+  grid-template-rows: auto minmax(0, 1fr);
+  height: calc(80 * 1.4em + 5.5rem);
+  align-self: start;
+}
+
+.compact-fixed-panel {
+  display: grid;
+  grid-template-rows: auto minmax(0, 1fr);
+  height: 600px;
+  align-self: start;
+}

 .hero {
@@ -161,6 +178,15 @@ label {
   border: 1px solid rgba(31, 92, 102, 0.12);
 }

+.integration-card span,
+.integration-card strong {
+  display: inline;
+}
+
+.integration-card strong {
+  margin-left: 0.3rem;
+}
+
 .integration-card p {
   margin-bottom: 0;
   color: #4f5b57;
@@ -170,11 +196,36 @@ label {
   display: grid;
   grid-template-columns: repeat(2, minmax(0, 1fr));
   gap: 1.4rem;
+  min-width: 0;
+  align-items: start;
 }

 .stack {
   display: grid;
   gap: 1.4rem;
+  min-width: 0;
+}
+
+.secret-panel {
+  padding-top: 0.64rem;
+  padding-bottom: 0.64rem;
+}
+
+.secret-panel .panel-head {
+  margin-bottom: 0.24rem;
+}
+
+.secret-panel label {
+  gap: 0.2rem;
+}
+
+.secret-panel .muted {
+  margin-bottom: 0.1rem;
+}
+
+.secret-panel form,
+.secret-panel {
+  gap: 0.36rem;
 }

 .panel-head {
@@ -190,6 +241,7 @@ label {
 form {
   display: grid;
   gap: 0.9rem;
+  min-width: 0;
 }

 .checkbox-row {
@@ -216,11 +268,103 @@ form {
 }

 .list-row {
-  padding: 0.8rem 0.9rem;
+  padding: 0.65rem 0.75rem;
   border-radius: 18px;
   background: rgba(31, 36, 33, 0.05);
   font-family: "IBM Plex Mono", "SF Mono", monospace;
-  font-size: 0.9rem;
+  font-size: 0.84rem;
+  line-height: 1.28;
+  min-width: 0;
+  max-width: 100%;
+  overflow-wrap: anywhere;
+  word-break: break-word;
+  white-space: pre-wrap;
+}
+
+.list {
+  min-width: 0;
+}
+
+.list .list-row strong {
+  display: block;
+  margin-bottom: 0.18rem;
+}
+
+.automation-row {
+  height: 250px;
+  overflow-y: auto;
+  align-content: start;
+  scrollbar-width: thin;
+  scrollbar-color: rgba(31, 92, 102, 0.72) rgba(233, 196, 106, 0.2);
+}
+
+.scroll-list {
+  height: calc(80 * 1.4em);
+  max-height: calc(80 * 1.4em);
+  overflow-y: auto;
+  overflow-x: hidden;
+  align-content: start;
+  padding-right: 0.35rem;
+  scrollbar-width: thin;
+  scrollbar-color: rgba(31, 92, 102, 0.72) rgba(233, 196, 106, 0.2);
+}
+
+.compact-scroll-list {
+  min-height: 0;
+  overflow-y: auto;
+  overflow-x: hidden;
+  padding-right: 0.35rem;
+  scrollbar-width: thin;
+  scrollbar-color: rgba(31, 92, 102, 0.72) rgba(233, 196, 106, 0.2);
+}
+
+.scroll-list::-webkit-scrollbar {
+  width: 12px;
+}
+
+.compact-scroll-list::-webkit-scrollbar {
+  width: 12px;
+}
+
+.scroll-list::-webkit-scrollbar-track {
+  background: rgba(233, 196, 106, 0.18);
+  border-radius: 999px;
+}
+
+.compact-scroll-list::-webkit-scrollbar-track {
+  background: rgba(233, 196, 106, 0.18);
+  border-radius: 999px;
+}
+
+.scroll-list::-webkit-scrollbar-thumb {
+  background: linear-gradient(180deg, rgba(31, 122, 140, 0.88), rgba(31, 92, 102, 0.72));
+  border-radius: 999px;
+  border: 2px solid rgba(255, 250, 242, 0.9);
+}
+
+.compact-scroll-list::-webkit-scrollbar-thumb {
+  background: linear-gradient(180deg, rgba(31, 122, 140, 0.88), rgba(31, 92, 102, 0.72));
+  border-radius: 999px;
+  border: 2px solid rgba(255, 250, 242, 0.9);
+}
+
+.automation-row::-webkit-scrollbar {
+  width: 10px;
+}
+
+.automation-row::-webkit-scrollbar-track {
+  background: rgba(233, 196, 106, 0.18);
+  border-radius: 999px;
+}
+
+.automation-row::-webkit-scrollbar-thumb {
+  background: linear-gradient(180deg, rgba(31, 122, 140, 0.88), rgba(31, 92, 102, 0.72));
+  border-radius: 999px;
+  border: 2px solid rgba(255, 250, 242, 0.9);
+}
+
+.scroll-list::-webkit-scrollbar-thumb:hover {
+  background: linear-gradient(180deg, rgba(31, 122, 140, 1), rgba(31, 92, 102, 0.88));
 }

 @media (max-width: 960px) {

View File

@@ -6,8 +6,12 @@ export type ToolToggle = {
 export type RuntimeSettings = {
   terminal_mode: 1 | 2 | 3;
   search_provider: "brave" | "searxng";
-  ollama_base_url: string;
-  default_model: string;
+  model_provider: "local" | "zai";
+  local_base_url: string;
+  local_model: string;
+  zai_model: "glm-4.7" | "glm-5";
+  anythingllm_base_url: string;
+  anythingllm_workspace_slug: string;
   tools: ToolToggle[];
 };
@@ -25,6 +29,41 @@ export type UserRecord = {
   is_active: boolean;
 };

+export type UserProfileRecord = {
+  telegram_user_id: number;
+  display_name?: string | null;
+  bio?: string | null;
+  occupation?: string | null;
+  primary_use_cases: string[];
+  answer_priorities: string[];
+  tone_preference?: string | null;
+  response_length?: string | null;
+  language_preference?: string | null;
+  workflow_preference?: string | null;
+  interests: string[];
+  approval_preferences: string[];
+  avoid_preferences?: string | null;
+  onboarding_completed: boolean;
+  last_onboarding_step: number;
+};
+
+export type AutomationRecord = {
+  id: number;
+  telegram_user_id: number;
+  name: string;
+  prompt: string;
+  schedule_type: "daily" | "weekdays" | "weekly" | "hourly";
+  interval_hours?: number | null;
+  time_of_day?: string | null;
+  days_of_week: string[];
+  status: "active" | "paused";
+  last_run_at?: string | null;
+  next_run_at?: string | null;
+  last_result?: string | null;
+  created_at: string;
+  updated_at: string;
+};
+
 export type MemoryRecord = {
   id: number;
   content: string;
@@ -34,6 +73,7 @@ export type MemoryRecord = {
 export type OllamaStatus = {
   reachable: boolean;
+  provider: "local" | "zai";
   base_url: string;
   model: string;
   installed_models: string[];

58
restart.sh Executable file
View File

@@ -0,0 +1,58 @@
#!/bin/zsh
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "$0")" && pwd)"
BACKEND_DIR="$ROOT_DIR/backend"
LOG_DIR="$ROOT_DIR/.wiseclaw/logs"
PID_FILE="$ROOT_DIR/.wiseclaw/backend.pid"
LOG_FILE="$LOG_DIR/backend.log"
HEALTH_URL="http://127.0.0.1:8000/health"
mkdir -p "$LOG_DIR"
stop_existing() {
if [[ -f "$PID_FILE" ]]; then
local old_pid
old_pid="$(cat "$PID_FILE" 2>/dev/null || true)"
if [[ -n "${old_pid:-}" ]] && kill -0 "$old_pid" 2>/dev/null; then
kill "$old_pid" 2>/dev/null || true
sleep 1
fi
rm -f "$PID_FILE"
fi
pkill -f "uvicorn app.main:app --host 0.0.0.0 --port 8000" >/dev/null 2>&1 || true
}
start_backend() {
(
cd "$BACKEND_DIR"
exec /bin/zsh -lc 'set -a; source .env >/dev/null 2>&1; exec .venv312/bin/python -m uvicorn app.main:app --host 0.0.0.0 --port 8000'
) >"$LOG_FILE" 2>&1 &
echo $! > "$PID_FILE"
}
wait_for_health() {
local attempt
for attempt in {1..20}; do
if curl -fsS "$HEALTH_URL" >/dev/null 2>&1; then
return 0
fi
sleep 1
done
return 1
}
stop_existing
start_backend
if wait_for_health; then
echo "WiseClaw backend restarted."
echo "PID: $(cat "$PID_FILE")"
echo "Log: $LOG_FILE"
exit 0
fi
echo "WiseClaw backend failed to start. Check log: $LOG_FILE" >&2
exit 1