I had my Anthropic account banned (presumably) because I was testing out the vision capabilities and took a photo of a Japanese kitchen knife and asked it to "translate the characters on the knife into English". This wasn't a Claude Pro account, but an API account, so it's extra weird: what if I had some product built on the API, and an end user asked/searched for something taboo... does my entire business get taken offline? Good thing this was just a test account with like $10 in credit on it. They haven't responded to my "account suspension appeal", which is just a Google form to enter your email address, not even a box to enter any details.
Anyways, Claude 3 Opus is pretty great for coding (I think better in most cases than the GPT-4 Turbo previews), but I'm a bit wary of Anthropic now.
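For context, the kind of vision request described above would look roughly like this with the official anthropic Python SDK — a minimal sketch, where the file name and prompt wording are illustrative assumptions, not the exact originals:

```python
# Minimal sketch of a Claude 3 vision request via the Messages API.
# The image path and prompt are illustrative, not the exact originals.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The Messages API expects images as base64-encoded content blocks.
with open("knife.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": image_data,
                },
            },
            {
                "type": "text",
                "text": "Translate the characters on the knife into English.",
            },
        ],
    }],
)
print(message.content[0].text)
```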
I just tried to make an account
1. Asks me to enter my phone number and sends me a code
2. Enter code
3. Asks me to enter email and get code
4. Enter code
5. Redirects to asking me to enter phone number, but my number is already used now
6. My account is automatically banned
Which country code?
> They haven't responded to my "account suspension appeal" which is just a google form to enter your email address, not even a box to enter any details.
The complete lack of customer service is going to get more and more dystopian as these AI companies become more interwoven with everyday life.
Considering the hype and high traffic, I would assume they are just overwhelmed and can't resolve all customers' issues fast enough.
Or maybe they decided to build a system for Claude to judge account suspension appeals and that's still in beta, and they won't throw humans at the task.
Were you still on the very first test account, e.g. before even adding any money?
I know indirectly that Anthropic has been the #1 target for a lot of ERP denizens for a while now, so they're probably extremely trigger-happy until you clear a hurdle or two.
I guess you can always use AI to detect inappropriate content from users... oh wait.
Seriously though, I understand that these companies mostly play to the enterprise market, where even a hint of anything remotely "unsafe" needs to be shut down and deleted, but why can't they let us turn off the strict filtering like Google does? Why can Google offer "unsafe" content (in a limited fashion, but it's FINE) while LLM providers can't?
Lack of competition?
It's not an LLM provider problem. It's an Anthropic/Google culture problem. OpenAI would very likely not have any problems with a request like that, but Claude has struggled with an absurdly misaligned sense of ethics from the start.
Note that Google is a big investor into Anthropic, and Anthropic was created because a bunch of OpenAI people thought OpenAI wasn't being woke enough and quit as a consequence. So it's not a surprise that it's a lot more extremist than other model vendors.
That's one reason why Aider doesn't recommend you use it, even though in some ways it's slightly better at coding. Claude Opus will routinely refuse ordinary coding requests due to its misalignment, whereas GPT-4 will not. That better reliability more than makes up for any difference in skill or speed.
Is there a good alternative available in the EU? Anthropic announced it was available in the EU last month, but it now seems they've changed their mind.
https://www.anthropic.com/claude-ai-locations
You can use it via API. https://openrouter.ai/ + https://www.typingmind.com/ is my favourite way.
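OpenRouter exposes an OpenAI-compatible endpoint, so something along these lines should work with the openai Python SDK — a sketch, where the model slug is an assumption (check OpenRouter's model list for the current one):

```python
# Sketch of reaching Claude 3 Opus through OpenRouter's
# OpenAI-compatible API from the EU (or anywhere else).
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter API key
)

resp = client.chat.completions.create(
    model="anthropic/claude-3-opus",  # assumed slug; verify on openrouter.ai
    messages=[{"role": "user", "content": "Summarize this function for me: ..."}],
)
print(resp.choices[0].message.content)
```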
API ftw. I just started playing around with big-AGI (https://github.com/enricoros/big-AGI) UI and it's really incredible.
Well, our team has been using Claude Opus for the past month, and we are now switching back to GPT-4. While the code it writes is better, it is hard to make it do further modifications to the given code. It scores low on reasoning in our experience.
And yet the UI for their consumer offering is hot garbage. I really don’t feel like it’s better than ChatGPT in capabilities and the UI is not as good. Not to mention there is no app to use on mobile.
Reading your profile page, you missed making a new account.
It's worthless until they open up the API for private use.
I’ve been using the Claude 3 API since the models were announced. I believe it’s generally available (though capacity constrained & rate limited at present).
You do have to give them the company name though (however inconsequential that is)