# cortex-m-telegram-connector
A lightweight Telegram bot that bridges Telegram to Cortex-M using the Connector Protocol.
Users send messages to the Telegram bot; the connector forwards them to Cortex-M over a persistent WebSocket and replies with the assistant's response.
## How it works
```
Telegram user
   │ (message)
   ▼
python-telegram-bot (polling)
   │ assistant.message.inbound CloudEvent
   ▼
Cortex-M WebSocket (/api/cortex-m/v1/connector/<sessionId>)
   │ assistant.message.outbound CloudEvent
   ▼
python-telegram-bot
   │ (reply)
   ▼
Telegram user
```
- On startup the connector calls `GET /connector` to obtain a session UUID.
- It opens a WebSocket to `/connector/<sessionId>`.
- Every Telegram text message is wrapped in an `assistant.message.inbound` CloudEvent 1.0 and sent over the WebSocket.
- Cortex-M replies with an `assistant.message.outbound` CloudEvent; the connector forwards the text back to the Telegram chat.
- The WebSocket is kept alive and reconnected automatically on failure (exponential backoff, 2 s → 60 s).
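The envelope and backoff schedule described above can be sketched as follows. The CloudEvents 1.0 attribute names (`specversion`, `id`, `source`, `type`) come from the spec, but the exact `source` format and `data` payload shape here are illustrative assumptions; the Connector Protocol document is authoritative.

```python
import itertools
import uuid
from datetime import datetime, timezone


def make_inbound_event(text: str, connector_id: str = "telegram-1") -> dict:
    """Wrap a Telegram message in a CloudEvents 1.0 envelope.

    Required CloudEvents attributes follow the 1.0 spec; the
    ``source`` value and ``data`` payload shape are assumptions.
    """
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": f"/connector/{connector_id}",  # assumed source format
        "type": "assistant.message.inbound",
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": {"text": text},  # assumed payload shape
    }


def backoff_delays(base: float = 2.0, cap: float = 60.0):
    """Exponential reconnect delays: 2 s, 4 s, 8 s, ... capped at 60 s."""
    for attempt in itertools.count():
        yield min(base * (2 ** attempt), cap)


delays = backoff_delays()
print([next(delays) for _ in range(6)])  # → [2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```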
## Requirements
- Python 3.12+
- A Telegram bot token from @BotFather
- A running Cortex-M instance
## Configuration
All configuration is via environment variables.
| Variable | Required | Default | Description |
|---|---|---|---|
| `TELEGRAM_TOKEN` | ✅ | — | Telegram bot token from BotFather |
| `TELEGRAM_ALLOWLIST` | ❌ | — | Comma-separated list of allowed user IDs or usernames (empty = allow all) |
| `CORTEX_M_URL` | ✅ | — | Base URL of the Cortex-M API, e.g. `http://cortex-m:8080/api/cortex-m/v1` |
| `CONNECTOR_ID` | ❌ | `telegram-1` | Stable identifier for this connector instance (appears in CloudEvents `source`) |
| `CORTEX_M_TIMEOUT` | ❌ | `180` | Seconds to wait for a Cortex-M reply before giving up (default: 3 minutes) |
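A sketch of how this configuration might be loaded, using the variable names from the table; the parsing details (trimming allowlist entries, the `load_config` and `is_allowed` helpers themselves) are illustrative assumptions, not the connector's actual code.

```python
import os


def load_config(env=None) -> dict:
    """Read connector settings from environment variables.

    TELEGRAM_TOKEN and CORTEX_M_URL are required; the rest fall back
    to the defaults listed in the table above.
    """
    env = os.environ if env is None else env
    raw_allowlist = env.get("TELEGRAM_ALLOWLIST", "")
    return {
        "telegram_token": env["TELEGRAM_TOKEN"],  # required, no default
        "cortex_m_url": env["CORTEX_M_URL"].rstrip("/"),  # required
        "connector_id": env.get("CONNECTOR_ID", "telegram-1"),
        "timeout": float(env.get("CORTEX_M_TIMEOUT", "180")),
        # Empty allowlist means "allow everyone".
        "allowlist": {e.strip() for e in raw_allowlist.split(",") if e.strip()},
    }


def is_allowed(user: str, allowlist: set) -> bool:
    """An empty allowlist admits all users."""
    return not allowlist or user in allowlist
```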
## Running with Docker Compose

Create a `.env` file:

```env
TELEGRAM_TOKEN=123456:ABC-DEF...
CORTEX_M_URL=http://cortex-m:8080/api/cortex-m/v1
CONNECTOR_ID=telegram-1
```

Then start the connector:

```sh
docker compose up -d
```
The `docker-compose.yml` references the pre-built image from GHCR. To build locally instead, replace `image:` with `build: .` or run:

```sh
docker compose up -d --build
```
## Running locally (without Docker)

```sh
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

export TELEGRAM_TOKEN=123456:ABC-DEF...
export CORTEX_M_URL=http://localhost:8080/api/cortex-m/v1
export CONNECTOR_ID=telegram-1

python connector.py
```
## Docker image

Pre-built images are published to GitHub Container Registry on every push to `main` and on version tags.

```sh
# Latest main branch build
docker pull ghcr.io/workaround-org/cortex-m-telegram:main

# Specific version
docker pull ghcr.io/workaround-org/cortex-m-telegram:v1.2.3
```
## Dependencies

| Package | Purpose |
|---|---|
| `python-telegram-bot` | Async Telegram Bot API client |
| `websockets` | WebSocket client for Cortex-M |
| `httpx` | HTTP client for the session token request |
## See also
- Connector Protocol — WebSocket message format and lifecycle