
cortex-m-telegram-connector

A lightweight Telegram bot that bridges Telegram to Cortex-M using the Connector Protocol.

Users send messages to the Telegram bot; the connector forwards them to Cortex-M over a persistent WebSocket and replies with the assistant's response.


How it works

Telegram user
     │  (message)
     ▼
python-telegram-bot (polling)
     │  assistant.message.inbound  CloudEvent
     ▼
Cortex-M WebSocket (/api/cortex-m/v1/connector/<sessionId>)
     │  assistant.message.outbound CloudEvent
     ▼
python-telegram-bot
     │  (reply)
     ▼
Telegram user
  1. On startup the connector calls GET /connector to obtain a session UUID.
  2. It opens a WebSocket to /connector/<sessionId>.
  3. Every Telegram text message is wrapped in an assistant.message.inbound CloudEvent (spec version 1.0) and sent over the WebSocket.
  4. Cortex-M replies with an assistant.message.outbound CloudEvent; the connector forwards the text back to the Telegram chat.
  5. The WebSocket is kept alive and reconnected automatically on failure (exponential backoff, 2 s → 60 s).
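The event wrapping in step 3 and the backoff policy in step 5 can be sketched as follows. This is a minimal illustration, not the connector's actual code: the CloudEvents attributes shown are the required ones from the 1.0 spec, but the `source` format and the `data` payload shape are assumptions.

```python
import itertools
import uuid


def wrap_inbound(text: str, connector_id: str = "telegram-1") -> dict:
    """Wrap a Telegram text message in an assistant.message.inbound CloudEvent."""
    return {
        "specversion": "1.0",                   # required CloudEvents 1.0 attribute
        "type": "assistant.message.inbound",    # event type from the protocol above
        "source": f"connector/{connector_id}",  # hypothetical source format
        "id": str(uuid.uuid4()),                # unique event id
        "datacontenttype": "application/json",
        "data": {"text": text},                 # hypothetical payload shape
    }


def backoff_delays(base: float = 2.0, cap: float = 60.0):
    """Reconnect delays: exponential, starting at 2 s and capped at 60 s."""
    for attempt in itertools.count():
        yield min(base * (2 ** attempt), cap)
```

With these defaults the first reconnect attempts wait 2, 4, 8, 16, 32 seconds, then stay at the 60-second cap.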

Requirements

  • A Telegram bot token from BotFather
  • A reachable Cortex-M instance exposing the Connector Protocol API
  • Python 3 (for local runs) or Docker / Docker Compose

Configuration

All configuration is via environment variables.

Variable            Required  Default     Description
TELEGRAM_TOKEN      yes       (none)      Telegram bot token from BotFather
TELEGRAM_ALLOWLIST  no        (empty)     Comma-separated list of allowed user IDs or usernames (empty = allow all)
CORTEX_M_URL        yes       (none)      Base URL of the Cortex-M API, e.g. http://cortex-m:8080/api/cortex-m/v1
CONNECTOR_ID        no        telegram-1  Stable identifier for this connector instance (appears in CloudEvents source)
CORTEX_M_TIMEOUT    no        180         Seconds to wait for a Cortex-M reply before giving up (3 minutes)
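
Read with plain os.environ, the table above translates into something like this (a sketch under stated assumptions; the helper names and error handling are illustrative, not the connector's actual code):

```python
import os


def load_config(env=os.environ) -> dict:
    """Load connector settings from environment variables, applying the documented defaults."""
    token = env.get("TELEGRAM_TOKEN")
    base_url = env.get("CORTEX_M_URL")
    if not token or not base_url:
        raise RuntimeError("TELEGRAM_TOKEN and CORTEX_M_URL are required")
    # Comma-separated allowlist; empty string means "allow all"
    allowlist = [e.strip() for e in env.get("TELEGRAM_ALLOWLIST", "").split(",") if e.strip()]
    return {
        "token": token,
        "base_url": base_url,
        "allowlist": allowlist,
        "connector_id": env.get("CONNECTOR_ID", "telegram-1"),
        "timeout": float(env.get("CORTEX_M_TIMEOUT", "180")),
    }


def is_allowed(user: str, allowlist: list) -> bool:
    """An empty allowlist admits everyone; otherwise the user id/username must be listed."""
    return not allowlist or user in allowlist
```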

Running with Docker Compose

Create a .env file:

TELEGRAM_TOKEN=123456:ABC-DEF...
CORTEX_M_URL=http://cortex-m:8080/api/cortex-m/v1
CONNECTOR_ID=telegram-1

Then start the connector:

docker compose up -d

The docker-compose.yml references the pre-built image from GHCR. To build locally instead, replace image: with build: . or run:

docker compose up -d --build

Running locally (without Docker)

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

export TELEGRAM_TOKEN=123456:ABC-DEF...
export CORTEX_M_URL=http://localhost:8080/api/cortex-m/v1
export CONNECTOR_ID=telegram-1

python connector.py

Docker image

Pre-built images are published to GitHub Container Registry on every push to main and on version tags.

# Latest main branch build
docker pull ghcr.io/workaround-org/cortex-m-telegram:main

# Specific version
docker pull ghcr.io/workaround-org/cortex-m-telegram:v1.2.3

Dependencies

Package              Purpose
python-telegram-bot  Async Telegram Bot API client
websockets           WebSocket client for Cortex-M
httpx                HTTP client for the session token request

See also