Comment by bcherny

5 months ago

Hi everyone! Boris from the Claude Code team here. @eschluntz, @catherinewu, @wolffiex, @bdr and I will be around for the next hour or so and we'll do our best to answer your questions about the product.

One thing I would love to have fixed: I type in a prompt, the model produces 90% or even 100% of the answer, and then it shows an error that the system is at capacity and can't produce an answer. And then the response that has already been provided is removed! Please just make it so I can still access the response that has already been provided, even if it is incomplete.

The biggest complaint I (and several others) have is that we continuously hit the limit via the UI after even just a few intensive queries. Of course, we can use the console API, but then we lose the ability to have things like Projects, etc.

Do you foresee these limitations increasing anytime soon?

Quick Edit: Just wanted to also say thank you for all your hard work, Claude has been phenomenal.

  • We are definitely aware of this (and working on it for the web UI), and that's why Claude Code goes directly through the API!

    • I'm sure many of us would gladly pay more to get 3-5x the limit.

      And I'm also sure that you're working on it, but some kind of auto-summarization of facts to reduce the context in order to avoid penalizing long threads would be sweet.

      I don't know if your internal users are dogfooding the product with user limits, so you may not have had this feedback: it makes me irritable/stressed to know that I'm running up close to the limit without having gotten to the bottom of a bug. I don't think a stress response in your users is a desirable thing :).

    • The problem with the API is that, as the documentation says, it could cost $100/hr.

      I would pay $50/mo or something to be able to have reasonable use of Claude Code in a limited way (though not as limited as the web UI), but all of these coding tools seem to work only with the API and are therefore either too expensive or too limited.

  • I paid for it for a while, but I kept running into the usage limits right in the middle of work every day. I'd end up pasting the context into ChatGPT to continue. It was so frustrating, especially because I really liked it and used it a lot.

    It became such an anti-pattern that I stopped paying. Now, when people ask me which one to use, I always say I like Claude more than others, but I don’t recommend using it in a professional setting.

  • If you are open to alternatives, try https://glama.ai/gateway

    We currently serve ~10bn tokens per day (across all models). OpenAI-compatible API. No rate limits. Built-in logging and tracing.

    I work with LLMs every day, so I am always on top of adding models. 3.7 is also already available.

    https://glama.ai/models/claude-3-7-sonnet-20250219

    The gateway is integrated directly into our chat (https://glama.ai/chat). So you can use most of the things that you are used to having with Claude. And if anything is missing, just let me know and I will prioritize it. If you check our Discord, I have a decent track record of being receptive to feedback and quickly turning around features.

    Long term, Glama's focus is predominantly on MCPs, but chat, the gateway, and LLM routing are integral to the greater vision.

    I would love feedback if you are going to give it a try: frank@glama.ai

    • The issue isn't API limits, but web UI limits. We can always get around the web interface's limits by using the Claude API directly, but then you need some other interface...

    • Just tried it. Is there a reason why the web UI is so slow?

      Try deleting (closing) the panel on the right in a side-by-side view. It took a good second to actually close. Creating one isn't much faster.

      This is unbearably slow, to be blunt.

    • Who is glama.ai though? I could not find company info on the site, and the "Frank" writing the blog posts seems to be an alias for Popeye the sailor. Am I missing something there? How can a user vet the company?

  • This is also my problem. I've only used the UI with the $20 subscription; can I use the same subscription to use the CLI? I'm afraid it's like AWS API billing, where there is no limit to how much I can use and then I get a surprise bill.

Claude is my go-to LLM for everything. It sounds corny, but it's literally expanding the circle of what I can reasonably learn, many times over. Right now I'm attempting to read old philosophical texts (without any background in similar disciplines), and without Claude's help explaining the dense language in simpler terms, discussing its ideas, giving me historical context, explaining why it was written this or that way, and comparing it against newer ideas, I would've given up many times.

At work I use it many times daily in development. Its concise mode is a breath of fresh air compared to any other LLM I've tried. It has helped me find bugs in foreign code bases, explain the tech stack to me, and write bash scripts, saving me dozens of hours of work & many nerves. It generally gets me to places I wouldn't otherwise reach due to time constraints & nerves.

The only nitpick is that service reliability is a bit worse than other providers', sometimes forcing me to switch to them. This is probably a hard question to answer, but are there plans to improve that?

I'm in the middle of a particularly nasty refactor of some legacy React component code (hasn't been touched in 6 years, old class-based pattern, tons of methods, why, oh, why did we do XYZ) at work and have been using Aider for the last few days and have been hitting a wall. I've been digging through Aider's source code on GitHub to pull out prompts and try to write my own little helper script.

So, perfect timing on this release for me! I decided to install Claude Code and it is making short work of this. I love the interface. I love the personality ("Ruminating", "Schlepping", etc).

Just an all around fantastic job!

(This makes me especially bummed that I really messed up my OA a while back for you guys. I'll try again in a few months!)

Keep on doing great work. Thank you!

Just started playing with the command-line tool. First reaction (after using it for 5 minutes): I've been using `aider` as a daily driver, with Claude 3.5, for a while now. One of the things I appreciate about aider is that it tells you how much each query cost, and what your total cost is this session. This makes it low-key easy to keep tabs on the cost of what I'm doing. Any chance you could add that to claude-code?
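
Concretely, the kind of running tally I mean, built on the token counts the API already returns with each response. The per-million-token prices below are placeholders I picked for illustration, not the real rates:

    # Rough sketch of an aider-style cost tally. Prices are assumed values --
    # substitute whatever the current per-million-token rates actually are.
    INPUT_PER_MTOK = 3.00
    OUTPUT_PER_MTOK = 15.00

    session_total = 0.0

    def record_query(input_tokens: int, output_tokens: int) -> float:
        """Add one query's cost to the session total and print both."""
        global session_total
        cost = (input_tokens * INPUT_PER_MTOK + output_tokens * OUTPUT_PER_MTOK) / 1_000_000
        session_total += cost
        print(f"query: ${cost:.4f}   session total: ${session_total:.4f}")
        return cost

    # e.g. record_query(12_000, 1_800) after each response's usage block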

I'd also love to have it in a language that can be compiled, like Go or Rust, but I recognize a rewrite might be more effort than it's worth. (Although maybe less with Claude Code to help you?)

EDIT: OK, 10 minutes in, and it seems to have major issues doing basic patches to my Golang code; the most recent thing it did was add a line with incorrect indentation, then try three times to update it with the correct indentation, getting "String to replace not found in file" each time. Aider with Claude 3.5 does this really well -- not sure what the confounding issue is here, but it might be worth taking a look at their prompt & patch format to see how they do it.

One of the silver bullets of Claude, in the context of coding, is that it does NOT use RAG when you use it via the web interface. Sure, you burn your tokens, but the model sees everything, and this lets it reply in a much better way. Is Claude Code doing the same and just doing document-level RAG, so that if a document is relevant and it fits, the whole document is put inside the context window? I really hope so! Also, this means that splitting large code bases into manageable file sizes will make more and more sense. Another Q: is the context size of Sonnet 3.7 the same as 3.5's? Btw, thank you so much for Claude Sonnet; in recent months it has changed the way I work and I'm able to do a lot more now.

  • Right -- Claude Code doesn't use RAG currently. In our testing we found that agentic search out-performed RAG for the kinds of things people use Code for.
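
    To picture the difference with a toy sketch (not the actual implementation; names below are made up): agentic search means the model iteratively issues grep/glob-style queries and whole matching files land in the context window, rather than retrieving pre-embedded chunks from a vector index.

        import re
        from pathlib import Path

        # Toy grep-style tool an agent could call repeatedly; matching files
        # go into context whole rather than as retrieved chunks.
        def grep_repo(root: str, pattern: str, max_files: int = 5) -> dict[str, str]:
            hits: dict[str, str] = {}
            for path in Path(root).rglob("*.py"):
                text = path.read_text(errors="ignore")
                if re.search(pattern, text):
                    hits[str(path)] = text
                    if len(hits) >= max_files:
                        break
            return hits

        # The agent loop feeds these results back to the model and lets it decide
        # what to search for next -- no embedding index involved.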

Been a long time casual — i.e. happy to fix my code by asking questions and copy/pasting individual snippets via the chat interface. Decided to give the `claude` terminal tool a run and have to admit it looks like a fantastic tool.

Haven't tried to build a modern JS web app in years — it took the claude tool just a few minutes of prompting to convert and refactor an old clunky tool into a proper project structure using Svelte, Vite, and Tailwind (which I haven't built with before). Trying to learn how to even scaffold a modern app has felt daunting and this eliminates 99% of that friction.

One funny quirk: I asked it to build a test suite (I know zilch about JS testing frameworks, so it picked vitest for me) for the newly refactored app. I noticed that 3 of the 20 tests failed and so I asked it to run vitest for itself and fix the failing things. 2 minutes later, and now 7 tests were failing...

Which is very funny to me, but also not a big deal. Again, it's such a chore to research test libs and then set things up to their conventions. That the claude tool built a very usable scaffold that I can then edit and iterate on is such a huge benefit by itself; I don't need (nor desire) the AI to be a complete turnkey solution.

Anthropic is back and cementing its place as the creator of the best coding models—bravo!

With Claude Code, the goal is clearly to take a slice of Cursor and its competitors' market share. I expected this to happen eventually.

The app layer has barely any moat, so any successful app with the potential to generate significant revenue will eventually be absorbed by foundation model companies in their quest for growth and profits.

  • I think an argument could be reasonably made that the app layer is the only moat. It’s more likely Anthropic eventually has to acquire Cursor to cement a position here than they out-compete it. Where, why, what brand and what product customers swipe their credit cards for matters — a lot.

  • I wonder if they will offer competitive request counts against Cursor. Right now, at least for me, the biggest downside to Claude is how fast I blow through the limits (Pro) and hit a wall.

    At least with Cursor, I can use all 500 "premium" completions and either buy more or be patient for throttled responses.

    • I reread the blog post, and I suspect Cursor will remain much more competitive on pricing! No specifics are given, but Claude Code will likely far exceed typical Cursor costs for a typical developer. Maybe it's worth it, though? Looking forward to trying it.

      >Claude Code consumes tokens for each interaction. Typical usage costs range from $5-10 per developer per day, but can exceed $100 per hour during intensive use.

  • hi! I've been using Claude Code in a very complementary way to my IDE, and one of the reasons we chose the terminal is because you can open it up inside whichever IDE you want!

Why not just open source Claude Code? People have tried to reverse engineer the minified version: https://gist.githubusercontent.com/1rgs/e4e13ac9aba301bcec28...

Hi Boris, love working with Claude! I do have a question—is there a plan to have Claude 3.5 Sonnet (or even 3.7!) made available on ca-central-1 for Amazon Bedrock anytime soon? My company is based in Canada and we deal with customer information that is required to stay within Canada, and the most recent model from Anthropic we have available to us is Claude 3.

A minor ChatGPT feature I miss with Claude is temporary chats. I use ChatGPT for a lot of random one-off questions and don’t want them filling up my chat history with so many conversations.

Hi and congrats on the launch!

Will check out Claude Code soon, but in the meantime one unrelated other feature request: Moving existing chats into a project. I have a number of old-ish but super-useful and valuable chats (that are superficially unrelated) that I would like to bring together in a project.

I really want to try your AI models, but "You must have a valid phone number to use Anthropic's services." is a show-stopper for me.

It's the only mainstream AI service that requests this information. After a string of security lapses by many of your competitors, I have zero faith in the ability of a "fast moving" AI-focused company to keep my PII data secure.

  • It's a phone number. It's probably been bought / sold a few times already. Unless you're on the level of Edward Snowden, I wouldn't worry about it. But maybe your sense of privacy is more valuable than the outcome you'd get from Claude. That's fine too.

    • It's my phone number... linked to my Google identity... linked to every submitted user prompt... linked to my source code.

      There's also been a spate of AI companies rushing to release products and having "oops" moments where they leaked customer chats or whatever.

      They're not run like a FAANG, they don't have the same security pedigree, and they generally don't have any real guarantee of privacy.

      So yes, my privacy is more valuable.

      Conversely: Why is my non-privacy so valuable to Anthropic? Do they plan on selling my data? Maybe not now... but when funding gets a bit tight? Do they plan on selling my information to the likes of Cambridge Analytica? Not just superficial metadata, but also an AI-summarised history of my questions?

      The best thing to do would be not to ask. But they are asking.

      Why?

      Why only them?

  • I pay for a number from voip.ms and use SMS forwarding. It's very cheap, and it works on Telegram as well, which seemed fairly strict at detecting most VoIP numbers.

Does the fact that it's so ungodly expensive and highly rate-limited kind of prove the modern point that AI actually uses tons of water and electricity per prompt? People are used to streaming YouTube while they sleep, and it's hard to think of other web technology this intensive. OpenAI is hostile to this subject. Does Claude have plans to tackle this?

  • > People are used to streaming YouTube while they sleep

    YouTube is used to showing them ads while they sleep

Hi! I’ve been using Claude for macOS and iOS coding for a while, and it’s mostly great, but it’s always using deprecated APIs, even if I instruct it not to. It will correct the mistake if I ask it to, but then in later iterations, it will sometimes switch back to using a deprecated API. It also produces a lot of code that just doesn’t compile, so a lot of time is spent fixing the made up or deprecated APIs.

Awesome to see a new Claude model - since 3.5 it's been my go-to for all code-related tasks.

I'd really like to use Claude Code in some of my projects vs. just sharing snippets via the UI, but I'm curious: how might running it from our source directory affect our IP, including NDAs, trade secret protections, prior disclosure rules on (future) patents, open source licensing restrictions re: redistribution, etc.?

Also hi Erik! - Rob

Hi Boris et al, can you comment on increased conversation lengths or limits through the UI? I didn't see that mentioned in the blog post, but it is a continued major concern for $20/month Claude.ai users. Is this an issue that should be fixed now, or is it still waiting on a larger deployment via Amazon or something? If not now, when can users expect the conversation length limits to be increased?

It would be great if we could upgrade API rate limits. I've tried "contacting sales" a few times and never received a response.

edit: note that my team mostly hits rate limits using things like aider and goose. 80k input tokens is not enough when in a flow, and I would love to experiment with a multi-agent workflow using Claude

Now that the world's gotten used to the existence of AI, any hope on removing the guardrails on Claude? I don't need it to answer "How do I make meth", but I would like to not have to social engineer my prompts. I'd like it to just write the code I asked for and not judge me on how ethical the code might be.

E.g., Claude will refuse to write code to wget a website and parse the HTML if you ask it to scrape your ex-girlfriend's Instagram profile, for ethical and ToS reasons, but if you phrase the request differently, it'll happily go off and generate code that does that exact thing.

Asking it to scrape my ex-girlfriend's Instagram profile is just a stand-in for other times I've hit a problem where I've had to social-engineer my way past those guardrails, but does having those guardrails really provide value on a professional level?

  • Not having headlines like "Claude Gives Stalker Instructions" has a significant value to their business I would wager.

    I'm very much in favour of removing the guardrails but I understand why they're in place. The problem is attribution. You can teach yourself how to engage in all manner of dark deeds with a library or wikipedia or a search engine and some time, but any resulting public outcry is usually diffuse or targeted at the sources rather than the service. When Claude or GPT or Stable Diffusion are used to generate something judged offensive, the outcry becomes an existential threat to the provider.

How is your largest customer, Cursor, taking the news that you'll be competing directly with them?

  • They probably aren't thrilled, but a lot of users will prefer a UI and I doubt Anthropic has the spare cycles to make a full Cursor competitor.

  • Unless Cursor had agreed to an exclusivity agreement with Anthropic, Anthropic was (and still is) at risk of Cursor moving to a different provider or using their middleman position to train/distill their own model that competes with Anthropic.

  • Honestly, is this something that Anthropic should be worried about? You could ask the same question about all the startups that were destroyed by OpenAI.

Do you think Claude Code is "better", in terms of capabilities and token efficiency, than other tools such as Cline, Cursor, or Aider?

  • Claude Code is a research preview -- it's more rough, lets you see model errors directly, etc. so it's not as polished as something like Cline. Personally I use all of the above. Engineers here at Anthropic also tend to use Claude Code alongside IDEs like Cursor.

Thanks for the product! Glad to hear the (so-called) "safety" is being walked back; previously Claude has felt a little like it was treating me as a child. Excited to try it out now.

In the console, the TPM limit for 3.7 is not shown (I'm tier 4). Does that mean there is no limit, or is it just pending and "variable" until you set it to some value?

  • We set the Claude Code rate limits to be usable as a daily driver. We expect hitting rate limits for synchronous usage to be uncommon. Since this is a research preview, we recommend you start small as you try the product though.

    • Sorry, I completely missed you're from the Code team. I was actually asking about the vanilla API. Any insights into those limits? It's still missing the TPM number in the console.

Your footnote 3 seems to imply that the low number for o1 and Grok3 is without parallelism, but I don't think it's publicly known whether they use internal parallelism? So perhaps the low number already uses parallelism, while the high number uses even more parallelism?

Also, curious if you have any intuition as to why the no-parallelism number for AIME with Claude (61.3%) is quite low (e.g., relative to R1 87.3% -- assuming it is an apples to apples comparison)?

Awesome work, Claude is amazingly good at writing code that is pretty much plug and play.

Could you speak at all about potential IDE integrations? An integration into Jetbrains IDEs would be super useful - I imagine being able to highlight a bit of code and having a plugin check the code graph to see dependencies, tests etc that might be affected by a change.

Copying and pasting code constantly is starting to seem a bit primitive.

Why gatekeep Claude Code, instead of releasing the code for it? It seems like a direct increase in revenue/API sales for your company.

  • I'm not affiliated with Anthropic, but it seems like doing this would commoditize Claude (the AIaaS). Hosted AI providers are doing all they can to move away from being interchangeable commodities; it's not good for Anthropic's revenue for users to be able to easily swap out the backend of Claude Code for a local Ollama backend or a cheaper hosted DeepSeek. Open sourcing Claude Code would make that option 1 or 2 forks/PRs away.

Is there a way to always accept certain commands across sessions? Specifically, for things like reading or updating files, I don't want to have to approve that each time I open a new REPL.

Also, is there a way to switch models between 3.5-sonnet and 3.5-sonnet-thinking? Got the initial impression that the thinking model is using an excessive amount of tokens on first use.

  • When you are prompted to accept a bash command, we should be giving you the option to not ask again. If you're not seeing that for a specific bash command, would you mind running /bug or filing an issue on GitHub? https://github.com/anthropics/claude-code/issues

    Thinking and not thinking is actually the same model! The model thinks automatically when you ask it to. If you don't explicitly ask it to think, it won't use thinking.

  • Right now no, but if you run in Docker, you can use `--dangerously-skip-permissions`

    Some commands could be totally fine in one context but bad in a different one, e.g. pushing to master

For the pokemon benchmark, what happened after the Lt Surge gym? Did the model stall or run out of context or something similar?

A bit off-topic, but I wanted to let you know that Anthropic is currently in violation of EU Directive 98/6/EC:

> The selling price and the unit price must be indicated in an unambiguous, easily identifiable and clearly legible manner for all products offered by traders to consumers (i.e. the final price should include value added tax and all other taxes).

I wanted to see what the annual plan would cost, as it was just displaying €170+VAT, and when I clicked the upgrade button to find out (I checked everywhere on the page), I was automatically subscribed without any confirmation and without ever seeing the final price before the transaction was completed.

  • You can stuff your EU directives up your nose, like your bottle caps when you try to drink from a European bottle

    • The bottle caps are a joke, but how can anyone in their right mind be against transparent pricing?

      You think it's acceptable that a company says the price is €170+VAT and then, after the transaction is complete, informs you that the actual price was €206.50?

Hi Boris! Thank you for your work on Claude! My one pet peeve with Claude specifically, if I may: I might be working on a Svelte codebase and Claude will happily ignore that context and provide React code. I understand why, but I’d love to see much less of a deep reliance on React for front-end code generation.

When I first started using Cursor, the default behavior was for Claude to make a suggestion in the chat, and if the user agreed with it, they could click apply or cut and paste the part they wanted to use in their larger project. Now it seems the default behavior is for Claude to start writing files to the current working directory without regard for app structure or context (e.g., Claude likes to create another copy of config files that are defined elsewhere). Why change the default to this? I could be wrong, but I would guess most devs would want to review changes to their repo first.

  • Cursor has two LLM interaction modes, chat and composer. The chat does what you described first and composer can create/edit/delete files directly. Have you checked which mode you're on? It should be a tab above your chat window.

> We’ve also improved the coding experience on Claude.ai. Our GitHub integration is now available on all Claude plans—enabling developers to connect their code repositories directly to Claude

Would love to learn a bit more about how the GitHub integration works. From https://support.anthropic.com/en/articles/10167454-using-the... it seems it’s read only.

Does Claude Code let me take a generated/edited artifact and commit it back as a PR?

  • The https://claude.io/ integration is read-only. Basically you OAuth with GitHub and now you can select a repository, then select files or directories within it to add to either a Claude Project or to an individual prompt.

    Claude Code can run commands including "git" commands, so it can create a branch, commit code to that branch, and push that branch to GitHub - at which point you can create a PR.
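
    For example, the kind of sequence a session can drive looks roughly like this (branch name and commit message are placeholders; the PR itself you still open on GitHub):

        import subprocess

        # Illustrative git sequence Claude Code can run on your behalf.
        for cmd in [
            ["git", "checkout", "-b", "claude/fix-inline-layout"],
            ["git", "add", "-A"],
            ["git", "commit", "-m", "Fix inline element layout"],
            ["git", "push", "-u", "origin", "claude/fix-inline-layout"],
        ]:
            subprocess.run(cmd, check=True)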

Hey guys! I was wondering why you chose to build Claude Code as a CLI when many popular choices like Cursor and Windsurf fork VS Code. Do you envision the future of Claude Code abstracting away the codebase entirely?

  • We wanted to bring the model to people where they are without having to commit to a specific tool or radically change their workflows. We also wanted to make a way that lets people experience the model’s coding abilities as directly as possible. This has tradeoffs: it uses a lot of tokens, and is rough (e.g. it shows you tool errors and model weirdness), but it also gives you a lot of power and feels pretty awesome to use.

    • I like this quite a bit, thank you! I prefer the Helix editor and I hate the idea of running VS Code just to access some random code assistant

I'm curious why there are no results for the "Claude 3.7 Extended Thinking" on SWE-Bench and Agentic tool use.

Are you finding that extended thinking helps a lot when the whole problem can be posed in the prompt, but that it isn't a major benefit for agentic tasks?

It would be a bit surprising, but it would also mirror my experiences, and the benchmarks which show Claude 3.5 being better at agentic tasks and SWE tasks than all other models, despite not being a reasoning model.

It would be amazing to be able to use an API key to submit prompts that use our Project Knowledge. That doesn't seem to be currently possible, right?

From the release you say: "[..] in developing our reasoning models, we’ve optimized somewhat less for math and computer science competition problems, and instead shifted focus towards real-world tasks that better reflect how businesses actually use LLMs."

Can you tell us more about the trade-offs here?

Also, are you using synthetic data for improving the responses here, or are you purely leveraging data from usage/partner's usage?

Thank you for the update!

I recently attempted to use the Google Drive integration but didn't follow through with connecting because Claude wanted access to my entire Google Drive. I understand this simplifies the user experience and reduces time to ship, but is there any way the team can add "reduce the access scope of the Google Drive integration" to your backlog? Thank you!

Also, I just caught the new GitHub integration. Awesome.

Small UX suggestion, but could you make submission of prompt via URL parameter work? It used to be possible via https://claude.ai/new?q={query}, but that stopped working. It works for ChatGPT, Grok, and DeepSeek. With Claude you have to go and manually click the submit button.
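
For reference, the flow I mean is just URL-encoding the prompt into that q parameter (the prompt below is made up):

    from urllib.parse import quote_plus

    # Build a prefilled-prompt URL of the form described above.
    prompt = "Explain this stack trace to me"
    print(f"https://claude.ai/new?q={quote_plus(prompt)}")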

Did you guys ever fix the issue where if UK users wanted to use the API they have to provide a VAT number?

Love the UI so far. The experience feels very inspired by Aider, which is my current choice. Thanks!

Serious question: What advice would you give to a Computer Science student in light of these tools?

  • Serious answer: learn to code.

    You still need to know what good code looks like to use these tools. If you go forward in your career trusting the output of LLMs without the skills to evaluate the correctness, style, functionality of that code then you will have problems.

    People still write low level machine code today, despite compilers having existed for 70+ (?) years.

    We'll always need full-stack humans who understand everything down to the electrons even in the age of insane automation that we're entering.

    • Could not agree more! I have 20+ years experience and use Cursor/Sonnet daily. It saves huge amounts of time.

      But I can’t imagine this tool in the hands of someone who does not have a solid understanding of programming.

      You need to understand when to push back and why. It’s like doing mini code reviews all the time. LLMs are very convincing and will happily generate garbage with the utmost authority.

      Don’t trust and absolutely verify.

    • +1 to this. There has never been a better time to learn to code - the learning curve is being shaved down by these new LLM-based tools, and the amount of value people with programming literacy can produce is going up by an order of magnitude.

      People who know both coding and LLMs will be a whole lot more attractive to hire to build software than people who just know LLMs for many years to come.

    • I will give a slightly more pessimistic answer. Someone studying CS right now probably expects to work in this profession for 30-40 years until retirement, expects it to keep paying much more than the average salary for most devs anywhere (not only elite devs or those in the US), and expects it to stay easy to find such a job or switch employers.

      I think the best period for software devs will be gone in a few years. Knowing how to code and fix things will still be important, but it will matter more to also be a jack-of-many-trades who provides extra value: know a little about SEO, have good design taste and be able to tweak a simple design, have good taste in how to organise code, and have better soft skills for managing or educating less tech-savvy staff.

      Another option is to specialise in some currently difficult subfield - robotics, ML, CUDA, Rust - and try to be that elite dev, with the expectation that you would have to move to SV or another such tech hub.

      The best general recommendation I would give right now (especially to someone who is not from the US and is currently studying) is to use the time you have now, with few responsibilities, to build some product that can provide you semi-passive income on a monthly basis ($5k-$10k) and drag yourself out of the rat race. Even if you don't succeed, or the revenue stream eventually runs out, you will learn the other skills that will matter more later if you want to be employed (SEO, code & design taste, marketing, soft skills).

      Most likely this window of opportunity will only last for the next few years, in a similar way to how the best window for mobile apps was the first ~2 years after the App Store started.

The thing I would like automated is highlighting a function in my code, then asking the AI to move it to a new module file and import that new module.

I would like this to happen easily like hitting a menu or button without having to write an elaborate "prompt" every time.

Is this possible?
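
To make it concrete, here is a toy sketch of the whole operation (file and function names are made up, and a real tool would parse the code with ast rather than slice strings):

    from pathlib import Path

    # Move one function out of app.py into a new helpers.py and import it back.
    FUNC = "def normalize_name"

    src = Path("app.py").read_text()
    start = src.index(FUNC)
    next_def = src.find("\ndef ", start + 1)
    end = next_def if next_def != -1 else len(src)

    Path("helpers.py").write_text(src[start:end].rstrip() + "\n")
    Path("app.py").write_text("from helpers import normalize_name\n\n" + src[:start] + src[end:])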

Hi there. There are lots of phrases/patterns that Claude always uses when writing and it was very frustrating with 3.5. I can see with 3.7 those persist. Is there any way for me to contact you and show those so you can hopefully address them?

Any chance there will be a way to copy and paste the responses into other text boxes (e.g., a new email) and not have to re-jig the formatting?

Lists, numbers, tabs, etc. are all a little time consuming... minor annoyance but thought I'd share.

Can you give some insight into how you chose the reply limit length? It seems to cut off many useful programs that are 80%-90% done and if the limit were just a little higher it would be a source of extraordinary benefit.

  • If you can reproduce that, would you mind reporting it with /bug?

    • Just tried it with Claude 3.7 Sonnet; here is the share: https://claude.ai/share/68db540d-a7ba-4e1f-882e-f10adf64be91 and it doesn't finish outputting the program. (It's missing the rest of the application function and the main function.)

      Here are steps to reproduce.

      Background/environment:

      ChatGPT helped me build this complete web browser in Python:

      https://taonexus.com/publicfiles/feb2025/71toy-browser-with-...

      It looks like this, versus the eventual goal: https://imgur.com/a/j8ZHrt1

      in 1055 lines. But eventually it couldn't improve on it anymore; ChatGPT couldn't modify it at my request so that inline elements would be on the same line.

      If you want to run it, just download it and rename it to .py. I like Anaconda as an environment; after reading the code, you can install the required libraries with:

      conda install -c conda-forge requests pillow urllib3

      then run the browser from the Anaconda prompt by just writing "python " followed by the name of the file.

      2.

      I tried to continue to improve the program with Claude, so that in-line elements would be on the same line.

      I performed these reproducible steps:

      1. Copied the code and pasted it into a Claude chat window with Ctrl-V. This keeps it in the chat as a paste.

      2. Gave it the prompt "This complete web browser works but doesn't lay out inline elements inline, it puts them all on a new line, can you fix it so inline elements are inline?"

      It spat out code until it hit section 8 of 9, which is about 70% of the way through, and gave the error message "Claude hit the max length for a message and has paused its response. You can write Continue to keep the chat going". Screenshot:

      https://imgur.com/a/oSeiA4M

      So I wrote "Continue", and it stopped when it was 90% of the way done.

      Again it got stuck at 90% of the way done, second screenshot in the above album.

      So I wrote "Continue" again.

      It just gave an answer, but it never finished the program. There's no app entry point in the program; it completely omitted the rest of the main class itself and the code to invoke it, which would be something like:

              def run(self):
                  self.root.mainloop()
          
          ###############################################################################
          # main
          ###############################################################################
          
          if __name__=="__main__":
              sys.setrecursionlimit(10**6)
              app=ToyBrowser()
              app.run()
      

      So it only output a half-finished program - even though it explained that it was finished.

      I tried telling it "you didn't finish the program, output the rest of it" but doing so just got it stuck rewriting it without finishing it. Again it said it ran into the limit, again I said Continue, and again it didn't finish it.

      The program itself is only 1055 lines, it should be able to output that much.

Congrats on the launch! You said it's an important tool for you (Claude Code); how does it fit in with Copilot, Cursor, etc.? Do you/your teammates rely only on Claude Code? What do you reach for for different tasks?

  • Claude Code is super popular internally at Anthropic. Most engineers like to use it together with an IDE like Cursor, Windsurf, VS Code, Zed, Xcode, etc. Personally I usually start most coding tasks in Code, then move to an IDE for finishing touches.

Are there plans to add a web search function over some core websites (SO, API docs)? Competitors have it, and in my experience this provides very good grounding for coding tasks (way fewer hallucinated API functions).

Does this actually have an 8k (or more) output context via the API?

3.5 did with a beta header but while 3.6 claimed to, it always cut its responses after 4k.

IIRC someone reported it on GH but had no reply.
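
The quickest check I know of is to ask for something long with a high max_tokens and look at what comes back; usage.output_tokens and stop_reason show whether the response was cut at the cap. The model ID below is the 3.7 one from the announcement, and the prompt is arbitrary:

    import anthropic

    # If stop_reason is "max_tokens", the response hit the limit
    # rather than finishing naturally.
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-7-sonnet-20250219",
        max_tokens=8192,
        messages=[{"role": "user", "content": "Write a very long, detailed story."}],
    )
    print(msg.usage.output_tokens, msg.stop_reason)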

Any way to parallelize tool use? When I go into a repo and ask "what's in here", I'm aiming for a summary that returns in 20 seconds.
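
For what it's worth, on the client side "parallel" would just mean running all the tool_use blocks from one model turn concurrently instead of one at a time; run_tool below is a stand-in for whatever executors the harness actually has:

    import asyncio

    # Stand-in executor: dispatch on block["name"] (read_file, list_dir, ...)
    # and return a tool_result for block["id"].
    async def run_tool(block: dict) -> dict:
        return {"type": "tool_result", "tool_use_id": block["id"], "content": "..."}

    async def run_all(tool_blocks: list[dict]) -> list[dict]:
        # Kick off every tool call at once and gather the results.
        return await asyncio.gather(*(run_tool(b) for b in tool_blocks))

    # usage: results = asyncio.run(run_all(blocks_from_one_assistant_turn))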

My key got killed months ago when I tested it on a PDF, and support never got back to me so I am waiting for OpenRouter support!

Hi, what are the privacy terms for Claude Code? Is it memorizing the codebase it's helping with? Asking from an enterprise standpoint.

Thank you to the team. Looks like a great release. Already switching existing prompts to Claude 3.7 to see the eval results :)

Thanks for this - exciting launch. Do you have examples of cool applications or demos that the HN crowd should check out?

Hi Boris,

Would it be possible to bring back sonnet 2024 June?

That model was the most attentive.

Because we lost that model, this release is a value loss for me personally.

I tried signing up to use Claude about 6 months ago and ran into an error on the signup page. For some reason this completely locked me out from signing up since a phone number was tied to the login. I have submitted requests to get removed from this blacklist and heard nothing. The times I have tried to reach out on Twitter were never responded to. Has the customer support improved in the last 6 months?

  • You can try using it through GitHub Copilot? Just as a different avenue for usage.

    • I don't want to use the product after having a bad experience. If they cannot create a sign-up page without it breaking for me, why would I want to use this service? Things happen and bugs can occur, but the amount of effort I have put in to resolve the issue outweighs just using the alternatives I have had no issues with.

Not a question but thank you for helping make awesome software that helps us make awesome software, too :)

Can you let the API team know that the /v1/models endpoint has been broken for hours? Thanks.

  • Hello! Member of the API team here. We're unable to find issues with the /v1/models endpoint—can you share more details about your request? Feel free to email me at suzanne@anthropic.com. Thank you!

    • It always returns a Not Found error for me. Using the curl command copied directly from the docs:

      $ curl https://api.anthropic.com/v1/models --header "x-api-key: $ANTHROPIC_API_KEY" --header "anthropic-version: 2023-06-01"

      {"type":"error","error":{"type":"not_found_error","message":"Not found"}}

      Edit: Tried creating a different API key and it works with that one. Weird.

With Claude Code, how does history work? I used it with my account, ran out of credit, then switched to a work account, but there was no chat history or other saved context of the work that had been done. I logged back in with my account to try to copy it, but it was gone.

When there are two commands in a prompt, for example:

do A and then do B.

the model completely ignores the second task, B.

Hi @eschluntz, @catherinewu, @wolffiex, @bdr. Glad that you are so plucky and upbeat!

How do you feel about raking in millions while attempting to make us all unemployed?

How do you feel about stealing open source code and stripping the copyright?

Have you seen https://mycoder.ai? Seems quite similar. It was my own invention and it seems that you guys are thinking along similar lines - incredibly similar lines.

Folks, let me tell you, AI is a big league player, it's a real winner, believe me. Nobody knows more about AI than I do, and I can tell you, it's going to be huge, just huge. The advancements we're seeing in AI are tremendous, the best, the greatest, the most fantastic. People are saying it's going to change the world, and I'm telling you, they're right, it's going to be yuge. AI is a game-changer, a real champion, and we're going to make America great again with the help of this incredible technology, mark my words.