RemoteAgent.CHAT best practices: getting the most out of your AI coding agent
Pairing takes five minutes. But the developers who get real leverage from RemoteAgent.CHAT aren't the ones who set it up fastest — they're the ones who develop good habits around how they use it. This is a collection of those habits.
One agent per project, not one agent for everything
The most common mistake people make with RemoteAgent.CHAT is treating a single agent as a general-purpose assistant for their entire stack. In practice, agents work best when they have a clear, stable context — a single repository, a single working directory, a consistent codebase they "know."
When you run remoteagent init, you're anchoring that agent to a specific project path. Every command you send is executed from that directory. The agent has one job: work on that project. If you have a backend and a frontend, initialize two separate agents. Switch between them in Telegram with /repo. This keeps context clean and makes the agent's behavior predictable.
A useful rule of thumb: if you'd open a separate terminal tab for it, it deserves a separate agent.
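In practice, that layout is just one init per repository. A minimal sketch (the directory names are illustrative; the only command taken from this guide is remoteagent init):

```shell
# One agent anchored to each project directory.
cd ~/projects/backend
remoteagent init    # this agent now works only in the backend repo

cd ~/projects/frontend
remoteagent init    # a second, independent agent for the frontend

# In Telegram, switch the active agent with /repo before prompting.
```

Each agent keeps its own config and working directory, so a prompt sent to the backend agent can never accidentally touch frontend code.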
Write prompts like you're briefing a senior developer, not issuing a command
AI coding agents don't need terse commands. They benefit from context — the same context you'd give a colleague who wasn't in yesterday's meeting. "Fix the bug" is a weak prompt. "The /api/orders endpoint returns a 500 when the cart is empty. The error is in orders.service.ts around line 84. The fix should handle the empty array without crashing and return a 200 with an empty list" is a prompt that produces useful results on the first try.
From your phone, voice messages are often more effective than typing. You can describe a problem in 30 seconds of natural speech in a way that would take three minutes to type. RemoteAgent.CHAT transcribes voice messages and sends them to the agent as text — so don't hesitate to use them when the context is complex.
Use async to your advantage
The architecture of RemoteAgent.CHAT is fundamentally asynchronous. You send a prompt, the agent works, you get a notification when it's done. This is not a limitation — it's the whole point. Use it.
Instead of watching a terminal waiting for output, send the prompt and put your phone down. Go make coffee. Sit in a meeting. Walk the dog. When the agent finishes, Telegram pings you. This is a fundamentally different relationship with your tools than the one you have at a desk, and it takes a little adjustment to appreciate.
The practical implication is that prompts worth sending are prompts that can run independently for a few minutes. "Write the full test suite for the auth module" is a great async prompt. "What does this function do?" is a question you could answer faster yourself. Reserve the agent for tasks that take real time, and use that time for other things.
Chunk large tasks into reviewable steps
AI agents can produce a lot of output quickly — sometimes too quickly. A prompt like "refactor the entire API layer" might result in 40 files changed before you've had a chance to review the direction. The better approach is to break large tasks into steps you can review between each one.
A practical pattern: ask the agent to describe its plan first, without writing any code. "Outline the changes you'd make to refactor the API layer" takes 30 seconds and gives you a chance to correct course before the agent does anything irreversible. Once you're aligned on the approach, send the next prompt to execute the first step. Review, then continue.
This is especially important when working from a phone, where reviewing a large diff is harder than at a desktop. Smaller steps mean smaller outputs, easier review, and faster iteration.
Keep PM2 between you and process crashes
Nothing breaks the async workflow more reliably than an agent that has silently gone offline. The agent process can die for many reasons — a transient out-of-memory condition, an unexpected signal, a runner crash. Without PM2, you'd only notice when you send a prompt and hear nothing back.
PM2 restarts the process automatically and logs what happened. The setup is three commands:
npm install -g pm2
pm2 start remoteagent --name "my-project" -- start
pm2 save && pm2 startup

Run the command that pm2 startup prints. After that, your agent survives crashes, server reboots, and anything else the OS throws at it — without you having to intervene.
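Once the agent is running under PM2, a few standard PM2 commands cover the day-to-day checks (the process name is whatever you passed to --name):

```shell
pm2 status                      # is the agent online, and how many restarts?
pm2 logs my-project --lines 50  # tail the most recent output and errors
pm2 restart my-project          # manual restart after a config change
```

A climbing restart count in pm2 status is worth investigating even if the agent appears healthy — it usually means something is crashing and being quietly revived.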
Use offline alerts as your safety net
RemoteAgent.CHAT monitors whether your agent is reachable and sends you a Telegram alert when it goes offline. Enable this in your dashboard. It means that even if PM2 can't restart the process — because the server itself is down, or because something more fundamental has gone wrong — you'll know within minutes rather than hours.
Treat the offline alert as signal, not noise. When it fires, the first thing to check is pm2 logs my-project — the error that caused the crash is almost always there. Nine times out of ten it's a transient issue and the agent is already back online by the time you look. The tenth time, you'll be glad you knew about it before a deadline.
One API key per project when it matters
RemoteAgent.CHAT supports per-agent Anthropic API keys. The key you set during remoteagent init is stored locally in that agent's config file — never transmitted to our servers — and is completely independent from any other agent on the same machine.
This matters in a few scenarios. If you're working on projects for different clients and want billing to be separate, give each agent its own key. If you want to apply different usage limits or rate limits per project, configure them at the key level in your Anthropic console. The agent never mixes keys between projects.
Send screenshots when words aren't enough
RemoteAgent.CHAT accepts image attachments from Telegram. This is more useful than it sounds. If you're trying to describe a UI bug, a layout problem, or a graph that doesn't look right, a screenshot gives the agent exactly the context it needs — faster and more precisely than any written description.
The workflow is simple: take a screenshot on your phone, attach it to your message in Telegram, and add your prompt as the caption. The image and the prompt arrive together. The agent sees both.
Check the dashboard when something feels off
The RemoteAgent.CHAT dashboard shows your session history — every command you've sent, the agent it went to, its status, and a preview of the output. When a task seems to have run but you're not sure if it completed successfully, the dashboard is the first place to look. You'll see whether the session is still running, whether it completed cleanly, or whether it ended in an error.
It's also useful for auditing what the agent has been doing when you come back after a break. A quick scroll through the session list tells you what ran, in what order, and what the outcome was — without needing to grep through logs or open a terminal.
Keep the agent up to date
RemoteAgent.CHAT releases are frequent and the installer is idempotent — running it again just upgrades to the latest version without touching your config or agent state. Make a habit of updating whenever you hear about a new release:
curl -fsSL https://remoteagent.chat/install | bash

Runners evolve quickly too. Claude Code, Aider, and Gemini CLI all ship updates that improve output quality and stability. If you're using a specific runner and notice degraded behavior, checking for a runner update is usually the fastest path to improvement.
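As a sketch, runner updates are one command each. The package names below are the ones these projects are commonly published under, so verify them against each runner's own install docs before running:

```shell
# Package names assumed from each project's public install
# instructions -- confirm before running.
npm install -g @anthropic-ai/claude-code   # Claude Code
npm install -g @google/gemini-cli          # Gemini CLI
pip install --upgrade aider-chat           # Aider
```
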
Start with a 14-day free trial
Connect any AI coding agent to Telegram in under 5 minutes. No credit card required.