Codex is now in the ChatGPT mobile app

16 hours ago (openai.com)

(Someone deleted a comment about why you'd want a mobile Codex app. This is the answer I wrote.)

Once you've used these coding agents a lot, you develop a pretty intuitive feel for how they work, what they're capable of, what they're good at, and where their weaknesses are. Hopefully, you're already pretty familiar with the code base you're working on. Combining the two, this means you can get quite far essentially "vibe coding" (i.e. not looking at the actual code) on a new branch.

So if you have some idea or some issue you want to fix on the go, you just iterate with the agent for a bit (presumably no more than a couple hours) until the agent outputs an implementation. Here, I do claim there is some "skill" (which is a function of your codebase familiarity, general SWE ability, and facility with AI agents), and if you're good, this implementation will be halfway decent a high percentage of the time. Then when you're back at your desktop, you can review the changes carefully/do some proper testing/debugging etc. But you've saved a good chunk of time: an initial draft is already waiting for you.

  • For major new features on my SaaS this is exactly what I do on my phone/laptop, sometimes over days or weeks. I never look at the code until I get a feeling that it's far enough along, and then I hop into the actual code and start manually making changes, or use CC locally to make the changes iteratively over weeks until it's ready for release. In the early stages of a major new feature/product it can be counterproductive to closely monitor the AI. Of course, like you said in your comment, this requires very, very strong knowledge of the code base and a lot of experience with using the agents in the first place. But once you can do this sort of workflow it's very powerful, because you can do it in parallel with other work: just an hour or two per day over a week or two on your phone can get you to a really good first draft, even on a major new feature/product. And of course I'm not saying it's ready for production (that can still take weeks), but that's not really the point.

  • But what if the code is on my laptop, alongside the tools needed to work with it?

    Case in point: I have a Rust project whose target/ directory weighs about 10GB, and a compile from scratch takes about 10 minutes. (I do not love this.)

    With this mobile app I need to upload the code to the cloud, right? Or does OpenAI expect me to compile huge projects on my phone?

    • No, the phone connects to your local device. This isn't "codex web" on mobile. Basically you work through your desktop on your phone. So to be clear, there are security risks (you can wipe your entire desktop from your phone).

    • Not sure how it works with Codex now, but with Claude you can just start a Claude Code terminal session with your code checked out locally on your computer, and then enable remote control, which lets you control that session from your phone.

      So basically, it is like you are typing on your terminal on your computer from your phone.
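
      For anyone who hasn't tried it, the flow is roughly this (the repo path here is just an example):

          # on your computer, in the checked-out repo
          cd ~/src/myproject
          claude
          # then type /remote-control at the prompt and pair the session
          # from the phone app; the process itself stays on your machine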

    • I tried Codex web. It kinda sucks and OpenAI doesn't seem to be promoting it? Look elsewhere if you want a Linux VM in the cloud. (I quite like exe.dev and they do have good mobile support.)

    • The processes you're controlling are on your computer, similarly to Claude remote control.

  • I've been vibe-refactoring a fork of get-shit-done (a skill collection for coding agents) for about the past week. I've had to revisit the same ideas multiple times because the agent doesn't always get it right at first, but it's still so much faster than I could have been at the same work, and it's already mostly working (I've been dogfooding it for a day or two now). So far I've gotten by just bringing up issues I notice in the LLM's implementation comments, without actually inspecting the code even once.

    (The refactor's been to support Jujutsu VCS.)

  • > i.e. not looking at the actual code

    You must be kidding me.

    • > There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.

      https://x.com/karpathy/status/1886192184808149383

      Forgetting code exists is by definition not suitable for serious work. However, OP said in the following paragraph that this would be a first draft, and that the code would actually be reviewed and tested properly before being integrated.

      At which point it is by definition no longer vibe coding, because you do care about the code! It's just an AI-assisted workflow, but now we call all of those vibe coding for some reason. (Naming things is hard!)

      If vibe coding means not caring about the code, then a literal translation of the term would be "not caring about coding" coding.

    • What OP said works quite well for a lot of tasks, and if you've set up base instructions on coding style they (Codex, Claude) generate code accordingly.

      A key point is that after the "vibe" session you should also have a lot of tests written. So you can easily refactor the code afterwards if there are major aspects you don't like when you get back to your desktop.

    • I find the trend funny: software engineers shocked at the idea that someone would issue a set of instructions to a coder and not look at the code, or only glance at it.

      How do you think the world has worked for the past thirty years? AI has just caught up with human skill is all.

    • Unbelievable. This is the silent de-skilling of this industry.

      Imagine saying that you don't need to look at the road or keep your hands on the wheel whilst driving because someone else said that the car can 'drive' itself; therefore, no need for anyone (including taxi drivers) to learn how to drive.

      Just because a machine can generate plausible-looking code does not mean you don't need to look at it, or that you don't need to know how it works or why it doesn't work.

What's crazier is that Codex is free. I thought I had to pay to even try it out, but nope, you can use the desktop app or CLI for free; it's apparently included in the free plan. You just have to sign in to your ChatGPT account.

Of course I am aware that the caveat here is that all my interaction is part of training, but I’m fine with that. Even Qwen CLI discontinued the free plan.

  • How much better is it than Claude? I have both but Claude sucks up so many tokens.

    • I stopped trying to use Claude for anything with 4.7 because it sucks up so many tokens so quickly. I still use the 4.6 model, and have switched to Codex for larger tasks. Codex also works better than Claude at more complex coding tasks, like web apps that have Python backends and TypeScript front ends.

    • I switched some time after Anthropic bricked their models with adaptive thinking. It's a legit mystery to me how people are still using CC professionally.

      Codex is far less frustrating and manages context better. It's also costing me about 1/3rd as much as Opus 4.7 on CC.

    • 5.5 is absolutely comparable to opus 4.7 (both on highest effort), maybe even better. It generally seems less lazy, faster, and writes code closer to what I'd write. The only downside is that for very very long tasks, it can kind of lose track of the goal. For tasks under ten minutes I'll go with codex every time.

  • I was really unimpressed by the free Codex (for nodejs/react dev). I think it must be using a less powerful model or they’re limiting it in some other way.

    • Are you specifically pointing at a different experience between free + paid? Or just that the free version is unimpressive?

      I'm using paid on TypeScript and it's genuinely terrific. Subjectively I think it has the edge over Opus.

      I'd be surprised if OpenAI is hamstringing the free version. That would seem crazy from a GTM PoV. If anything the labs seem to throttle the heavy paid users.

    • The free version of ChatGPT is definitely worse as well. My SO uses the free version and I can tell it's a significant downgrade.

I’ve been using Codex from my phone for the past couple of months (through a tunnel, not this app).

I was initially quite excited, but I’ve found the results are less than great compared to being at a keyboard.

Something about the smaller screen size and/or lack of keyboard causes me to direct the agent less, which in turn creates more tech debt/code churn/etc.

Maybe I’m just showing my age, and I should practice voice dictation or something more, but my thoughts flow faster and more clearly on a keyboard (less ums).

  • I'm not sure I follow: you develop code on a remote machine by speaking to your phone, and are unimpressed by the result?

    • It's not that I'm unimpressed by the results, it's that I think I'm saving time by pushing the agent along remotely, but the reality is that my messages to the agent(s) end up being a lot shorter, which inevitably leaves more up for interpretation.

      Don't get me wrong, I still use Codex (and sometimes Claude Code) remotely every day, and am overall excited for this release, it's just that the benefit wasn't as high as I had initially hoped.

      Part of this is due to the models getting better (no need to prod along with "continue"), and part of this is the nature of how I use my phone (short bursts of attention).

      But again, maybe I'm just old and prefer big screens with a keyboard.

  • the ums are exactly the sign that you speak much faster than you type, so you need a pause for your thoughts to catch up

  • I've been trying voxtype (using whisper models) lately, and to my surprise all my ums are filtered out. It's really good now actually!

The ability to unblock or redirect longer-running work from a phone seems underrated. Curious how often people will actually manage active coding threads this way.

The right way to do this is Google Jules: the boundary is a git repository, the interface is a chat window you can open anywhere (yes, even simultaneously), the output is a diff you can choose to merge.

But, for whatever reason, no one uses Google Jules.

I don't want my phone to have the ability to execute things on my computer. Much less with a LLM in-between!

Dang, I thought this was going to be integration for Codex Cloud, not the (still not available for Linux) Codex App. Not even Codex CLI, alas. You can still access the Cloud option from a mobile browser well enough, but I prefer an app UI for poking at things on the go.

  • You can do this from the CLI - `codex remote-control` works on Linux (I have no affiliation, just something I noticed).

    They might just not have cut a new build yet today. It 'works' on master, but the mobile app thinks your build is outdated (v0.0.0) if you build from master without overriding the version, so it's probably easiest to wait until they cut a build if they haven't.
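
    If you'd rather not wait, building from master looks roughly like this (untested sketch; the codex-rs workspace path is per the openai/codex repo, and note the v0.0.0 caveat above):

        git clone https://github.com/openai/codex
        cd codex/codex-rs
        cargo build --release
        # a from-master binary may report v0.0.0 to the mobile app
        ./target/release/codex remote-control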

    • > You can do this from the CLI - `codex remote-control` works on Linux (I have no affiliation, just something I noticed).

      Woah, hadn't seen this before!

      Off-topic: what kind of compile times do people see for codex-rs in openai/codex? Even my very beefy computer takes like 30 minutes to compile it in release mode, which makes me wonder why it's so slow and how this TUI got so large. But then I remember: agents like to write a lot of code, and compilers get slower when they have to compile a lot of code :)

  • Codex Cloud has been in the ChatGPT app for quite some time now. If you click out of the new dialogs, you can access your cloud threads.

Is there a native way to work remotely with Claude/Codex on a local folder or git repo on your main machine without having to connect it to GitHub? For creating apps for personal use I’d rather just keep the files local.

Edit: Running into issues setting it up on Windows. There's no "/remote-control" command in the CLI, so I installed the Windows Codex app. Then I updated the iOS app, which now has the "Codex" feature in the sidebar and should allow remote access to the Windows machine's instance. Except it doesn't connect: the iOS app shows my desktop's hostname, so it knows there's an instance there, but refuses to connect. Issues like this would persuade a lot of folks to switch back to Claude.

  • This is what /remote-control does in Claude Code, once it's running on your main machine. You can open it up in the phone app.

    • It flakes out in less than 24 hrs. I tried leaving a session open in remote-control mode in a VM, but it inevitably stopped with some token auth error.

  • That’s this announcement.

    • I ask because I tried the other week to use /remote-control in Claude, and it prompted me to connect a GitHub repo with no local alternative. Things may have changed since then.

      My experience today with the new Codex remote control has been that it doesn't connect at all.

  • I tried apps that do this workflow (happy coder being one), but the workflow itself is rather clunky: you have to first start the session on the remote machine. I now only use ssh, so I can start or resume on whatever device suits at the time. The only downside is latency and connection drops; mosh solves that.
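
    Concretely it's something like this (tmux is my own detail for the resumable part; mosh is what survives the drops):

        # survives network drops and roaming between wifi and cellular
        mosh user@devbox -- tmux new -A -s agent
        # inside tmux, start the agent (or resume an earlier session)
        codex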

Just tried it and it doesn't work... it won't let me create a new task because the repo selection is disabled. It works fine on my laptop, on the other hand, where I've been using it for some time.

The best part is you don't have to stay at home waiting for Codex to think; you can go out and grab a coffee while staying connected to your Codex and asking it to work.

  • Surely you mean grab a coffee and sit back down at your desk in your corporate office, because working remotely while your agent also does so is just preposterous.

I've been trying out various mobile, ai-assisted coding workflows.

I pack a Linux mini-PC in my rucksack, connected to display glasses, with voice-to-text via handy. The voice-to-text output gets injected into a remote (Docker) codex session running a hot-reload web stack. I prompt it to implement various features in an existing code base, where codex understands the structure and requirements. If a feature is done, I take a moment to inspect the results on the display glasses, then move on to the next feature or keep iterating. It's not perfect, but I was able to implement a couple of not-too-complex features while walking in my local national park. The display glasses have a built-in 4-microphone array and solid speakers, so there's no need for a bulky headset or earbuds. The glasses also come with monochromatic dimming; you can easily switch between dimmed and see-through.
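
The injection piece is simpler than it sounds. Assuming the codex session runs under tmux inside the container (host and session names here are hypothetical), it is essentially:

    # push the transcribed prompt into the running agent session
    ssh minipc 'tmux send-keys -t codex "add offline support to the notes view" Enter'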

If this comes with Linux integration, I will certainly give it a try.

This is good not just because I could work on code on the go: Codex excels not only at that but also at crunching through text. It's nice that now I can get an agent on my phone that understands my notes in Logseq. It's like my journals can now talk back to me.

I wish they'd done this in a separate Codex app. On desktop I greatly prefer having Codex separate from ChatGPT, as compared to Claude, which is growing so fast and adding features so quickly that it seems bolted together (I get why they do it, integrations/MCP-wise).

This specific feature is more akin to Remote Control in Claude. You could already kick off Codex Cloud tasks (although it's just a little more fiddly to do so).

If you can move to Codex Cloud (or "Claude Code for the Web"), I think it's the superior approach. Start it there, and just pick it up from the PR if necessary.

  • OpenAI wants non-devs to start coding as well; it's just going to confuse users when there are two apps.

A while ago I created a Telegram bridge for AWS Kiro CLI, which allows me to talk to the agent running on my server from anywhere. Remote access to any of these agents is a massive game changer: it means you don't need to hover in front of the PC while it works away at the problems. It changes your workflow, but I do find you need to force yourself to "turn off". It's easy to do that with the PC, e.g. just walk away, but when you can "get the agent to do one more change" while waiting to pick the kids up or taking the dog for a walk, it can get difficult to stop.
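
As for the bridge itself, it's surprisingly little code. A stripped-down sketch of its shape (bot token and chat id are placeholders, and agent-cli stands in for however you invoke the agent non-interactively):

    #!/usr/bin/env bash
    TOKEN="123456:ABC..."   # placeholder bot token from @BotFather
    CHAT_ID="987654321"     # placeholder chat id
    OFFSET=0
    while true; do
      # long-poll Telegram for the next update
      UPDATE=$(curl -s "https://api.telegram.org/bot${TOKEN}/getUpdates?offset=${OFFSET}&timeout=30")
      UPDATE_ID=$(echo "$UPDATE" | jq -r '.result[0].update_id // empty')
      [ -z "$UPDATE_ID" ] && continue
      OFFSET=$((UPDATE_ID + 1))
      TEXT=$(echo "$UPDATE" | jq -r '.result[0].message.text // empty')
      [ -z "$TEXT" ] && continue
      # run the prompt through the agent and send the output back
      REPLY=$(agent-cli "$TEXT" 2>&1 | tail -c 4000)  # Telegram caps messages at 4096 chars
      curl -s "https://api.telegram.org/bot${TOKEN}/sendMessage" \
        --data-urlencode "chat_id=${CHAT_ID}" --data-urlencode "text=${REPLY}"
    done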

This is exactly what I've been wanting -- I had previously thought about using one of the hackish apps that try to deliver this experience, or spinning up something for this myself, but integrating it directly is definitely the right way to provide the best system and product experience -- and this seems to work out of the box exactly as I would want!

Wondering if it's only me who vibe coded a PWA mobile IDE and a remote agent hosted on the laptop, which uses claude -p and local code to allow coding from mobile?
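
(For context: claude -p is the non-interactive one-shot mode, which is what makes a thin bridge like that feasible. The PWA just sends the prompt to the laptop, and the server runs something like the following from the repo directory and returns stdout.)

    # one-shot: prints the result and exits
    claude -p "add input validation to the signup form"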

> Stay connected to active work from anywhere

And here I thought AI was gonna automate the world and we were gonna work less.

Turns out you’re gonna work 24/7 no matter where you are!

  • This is a very myopic and unnecessarily cynical sentiment. It's not about you: agents just need to run without your computer being on all the time. Coding is a background task that needs to run unattended now.

    • This is absolutely not true. I run dozens of Opus agents all day and they need so much constant attention and babysitting (lest everything turn to sh*t) that I would not qualify it anywhere close to “background”. And I’m sure as hell not wrangling these things from my *phone*.

This is the EXACT evolution of this product that I've wanted. For simple tasks on some of my desktop machines, I don't want to mess with SSH or remoting into them, I just want to tell an AI agent what I need, let it build a plan that I press "Approve" on, and let it rip. This is the ClawdBot killer!

I have been using Omnara for some months now, on desktop and mobile. It's a web/mobile remote for Claude and Codex.

I can do some tasks on mobile, especially if they are follow-up and steering only, greatly increasing productivity since you can keep working while in transit, etc.

Nice. Next step is giving Codex/Claude Code local device control... The problem is that the current iOS/Android are so locked down that agents can't do much. But the space is so ripe for disruption that I bet we'll see AI-native devices coming out within the next few years that allow agents to interact with everything. I would be nervous if I were Apple right now.

I use Termius on my phone to remote in and make the agent do stuff while I chill or am on the road. This seems useful too.

When the winner of the everything app is revealed, I foresee this feature integrated into it. One platform to manage all agents from any provider.

Somewhat related: some of these AI remote coding apps are iOS-only. What are people using for Android? It looks like some people are using terminal emulators to ssh into their machine and use the LLM CLIs, but that seems clunky.

Oh no, I am just adding Codex integration to my app with in-app Tailscale networking, communicating with the Codex app server via WebSocket over Tailscale.

But I will still consider releasing it anyway.

So Codex is also heading towards 'portability'. I can see that here, but I bet it will take time before it's cleanly optimized for heavy mobile use.

This feature could well be the reason OpenAI hired Peter Steinberger (OpenClaw).

  • That was my thought too. Claude Code and Codex are very close to Claw already (general purpose computer use) and moving increasingly in that direction (mobile integrations, built in memory features etc.)

    The main issue is reliability, so I think the corporations are going to take a much more gradual, piecemeal approach, and probably end up with something like Claw within a year.

This is neat! Now I'm curious: what's left to innovate in the coding agent space? Sure, there are the usual suspects like maintenance, security, reliability, and other scalability improvements, and it looks like they will be addressed in the next year or two.

  • The entire UI/UX? We went back in time and basically have a text streamer in a '70s-style terminal, or an existing editor-like situation. If you want to read and (hand)write code, sure, you might be done and happy with the new variant of what you had decades ago.

    A UI like Jira/Trello to stage features and see (agentic) team status. A Figma-like UX to actually build out the app/interface/features. A system that aids human review. There are tons of paradigms to explore and improve upon.

  • there is something "wrong" with the ux that is hard to pin down. these things generate even text summaries more rapidly than i can read them. i need a better method for dumping info into my brain + dynamic control (if necessary)

    • Tell it to create HTML summaries with diagrams and a sidebar for navigation.

      Or ask Codex to create an image that explains xyz.

    • When I take time to read all of the output, I often find that it's mostly noise. I don't like noise so I usually don't bother.

      But a person can use subagents, if they want, to filter that down. This burns tokens in a big hurry, but I think subagents can be arbitrary local commands (eg, a local LLM).

      Or, you know: Just slow down. :) It doesn't always have to be a race, does it?

  • Agent farms. Have agents make tons of random high-fidelity variations of the same app or feature around the clock from some vague ideas; then you use each of them to see which one you like best and can productize, and you skip the need for iterative prompts.

  • Is texting your Coding Agent really the final form? Something that watches your interactions or process execution to surface improvements, or whips up prototypes while you brainstorm, seems like the next step.

    • Not sure why this was flagged, this makes sense but only if inference gets sufficiently cheap. It would be awesome to see a bunch of interactive prototypes and iterate on the UX before ever building the full app. Historically that's been somewhat difficult even with UX designers.

I don't understand OpenAI's product strategy.

  • It seems pretty simple:

    1) Keep getting investors to give them money.

    2) Convince the right people that OpenAI is "critical to national security" so that when 1 runs out, they can get bailed out by the government.

    Everything else is just set dressing.

This sucks. Codex was already in the mobile app. And Codex in the browser or in the app sucks because it's not the same as local Codex (VS Code or CLI). And you can't pick the model. Sucks a$$.

  • Isn’t what you described exactly what they just released? Now it connects to local Codex and you can pick the model?

    • What if you don't start on your laptop or workstation? Also, does the UI shown in the video reveal a model toggle? It doesn't look like it.

      Edit: actually it gets worse -- you can't start any tasks on your mobile any longer; you are required to sign in from your desktop/workstation. You can't "sign in" from your CLI.

      Whoever made this, paper cuts I wish onto you. It sucks.

      Edit 2: actually you can ssh, so I presume that allows you to use the CLI -- and you can still do mobile-first tasks, but it's not intuitive at all. For mobile-first tasks you still can't pick the model, and I haven't tested the workstation connection one. That said: indeed, paper cuts.

i'm not sure if i'm hallucinating, but i swear i had codex in the chatGPT app from a long time ago (like the original codex on the web).

they added some new stuff, like remote control of wherever the desktop codex app is running, but these companies need to work much more on their press releases.

friends, you don’t have to always be productive. leave the agent on the computer and take care of yourself.

  • For many people, that's exactly why this is useful: less time on the computer, more time doing other things and occasionally checking in.

    In those scenarios, the goal is not "work at any time" but to "be anywhere at any time", or, rather, to "be able to work from anywhere, doing anything".

    Sort of....I guess.

    • I love how all these software engineers are willingly walking into a productivity trap where they will be exploited.

      I’m not a swe but damn, I’d hate to be one.

> Start investigating a bug while waiting for your coffee.

So the whole idea is not to make work more efficient. It's just to make you work more, all the time, while waiting for your coffee, while in your commute.

Ask yourselves: is that the society we want?

Codex has been great in the last 3-4 months I've been using it, almost exclusively to review existing GDScript code, and this was the feature I wanted most, because with gamedev you get the best ideas when you're out and about or in bed :)

Claude, on the other hand, has been jank all around, from the UX to the UI to the AI itself; it's baffling how it's more popular here on HN: https://i.imgur.com/jYawPDY.png

Sadly this remote control feature doesn't seem to work Mac to Mac yet? I love the MacBook Neo as a "thin client" for AI and keep the MacBook Pro at home/hotel, and it would be nice to share Codex desktop sessions (without SSH → resume link).

It's refreshing that unlike Anthropic's Remote Control, this actually... works.

Feels like a testament to the value in taking time and doing it properly.

Now if only codex got its 1M token context window back.

---

Edit: Hmmm. Maybe I spoke too soon. Sigh. Definitely _more_ reliable by far overall, but I still have queued messages with responses on my phone that don't show up on my computer, and responses that don't show up on my phone.

Edit 2: New threads created from my phone seem to have a little stall-out, but ones that are underway are behaving reasonably well.

  • Out of curiosity, what issues did you face with remote control on claude? I use it daily and it seems to work pretty well (bar the issues when my Mac would sleep and then the session would disconnect, but that's an issue on my end).

    • My own experience has been that it works for about five minutes before it just disconnects or hangs. I’ve never been able to use it successfully.

    • Myriad, to be honest. I find it to be constantly in a 'torn' state, the UI is very mushy on mobile with a lot of the affordances from desktop missing, and... it's distinctly less useful when you can't... edit, rewind, start a new thread, etc.

Can someone explain how the ChatGPT Codex Connector works in concert with GitHub access controls? I am not sure how to add it to my GitHub repositories, accounts, or organizations without potentially giving any OpenAI customer access.

Say what you want about OpenAI, but their software is actually pretty damn good, especially compared to Anthropic and Google. Anthropic is just sloppy, and Google just doesn't live on this planet.

Both of the Codex apps are very good.

I tried this out and it works significantly better than Claude's remote control. In fact, the first few times I tried Claude's remote control it didn't even work, and to this day it's very buggy.

  • I use remote-control every day and haven't had many issues with it, aside from the mobile app being pretty limited: e.g. there are no prompt suggestions like slash commands and skills; everything in the textbox is just a raw string. You also can't start a new session directly from the app (you have to SSH into the host manually to do that).

    Other than those limitations, the connection has been very stable for me, definitely more reliable than alternatives like happy.engineering or Omnara. What’s been buggy for you specifically?

I don't like this direction. From an accessibility standpoint, sure, it is good. But Codex is a coding product, and I am increasingly concerned about the lack of reviewing practice. I doubt that a mobile app is good for reviewing code changes.

> Stay connected to active work from anywhere

... (and anytime because it's on your phone). No thanks.

opencode behind an nginx proxy with a standard user/password is sufficiently powerful. You can also upgrade to https://docs.linuxserver.io/images/docker-code-server/ and run any vscode plugins; opencode's plugin is pretty rudimentary, but cline has been making a lot of strides.

You can run your local LLM and just connect the Docker containers. I'm paranoid about being disconnected from the LLM, so I never run any of this on the same machine; orchestrating a docker-compose file that provides the necessary services is important.

I'm still trying to find a good remote file system to loop into the setup for improved switching between cli and these web containers.
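
For anyone wanting to replicate the base setup, the code-server half is roughly the following, per the linuxserver docs linked above (paths and password are placeholders; put nginx with basic auth and TLS in front):

    docker run -d --name=code-server \
      -e PUID=1000 -e PGID=1000 \
      -e PASSWORD=changeme \
      -p 8443:8443 \
      -v ~/code-server/config:/config \
      lscr.io/linuxserver/code-server:latest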

They needed to announce something after the Anthropic slop rewrite of Bun.

In an ideal world they would allocate 50% of compute to finding errors in that rewrite and publish how bad Claude is, but that would undermine confidence in slop in general, so that is not going to happen.

The best way I've found to work with LLMs is another OpenAI project, Symphony (which I implemented for Linear/GitHub and OpenCode[0]).

It integrates with your issue tracker and makes the tracker the UI for the LLM. It also clones the repo for every ticket and can set up fixtures, etc. I can work on multiple items at a time, which is fantastic, because otherwise you have to wait for the LLMs a lot.

[0] https://github.com/skorokithakis/symphony