> A spokesperson for the mayor, Dora Pekec, confirmed in a text message that the new administration plans to take down the chatbot. She said a member of the Mamdani transition team had seen reporting on the bot from The Markup and THE CITY and presented it to the mayor as a possible place to save funds.
Journalism works.
Journalism teed up an easy way for an incoming politician to dunk on his predecessor, if you'll forgive the mixed metaphor. Not that I'm opposed to any part of it, just that this was an easy scenario for "journalism" to "work" in.
If you'd like other examples: 404media and adjacent outlets grinding against Flock across the country, and perfectunion working against datacenter siting. I admit the egregious nature of the Adams NYC administration and his fraud makes this particular scenario straightforward.
https://en.wikipedia.org/wiki/Investigations_into_the_Eric_A...
It does. And it works best if you elect politicians who are willing to listen.
Why did NYC release it in the first place? Did they not QA it?
Or was it perhaps one of those cases where they found issues, but the only way to really know for sure whether the deleterious impact is significant enough is to push it to prod?
> Why did NYC release it in the first place? Did they not QA it?
How do you QA a black-box, non-deterministic system? I'm not being facetious, seriously asking.
The same way you test any system - you find a sampling of test subjects, have them interact with the system and then evaluate those interactions. No system is guaranteed to never fail, it's all about degree of effectiveness and resilience.
The thing is (and maybe this is what parent meant by non-determinism, in which case I agree it's a problem), in this brave new technological use-case, the space of possible interactions dwarfs anything machines have dealt with before. And it seems inevitable that the space of possible misunderstandings which can arise during these interactions will balloon similarly. Simply because of the radically different nature of our AI interlocutor, compared to what (actually, who) we're used to interacting with in this world of representation and human life situations.
temperature 0 and 10,000,000 mischievous prompts
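That's roughly how you'd automate such a sweep, though temperature 0 still isn't a hard determinism guarantee across model or serving updates. A minimal sketch, assuming an OpenAI-style chat API plus a hypothetical adversarial corpus and rubric (none of this is the city's actual stack):

    import json
    from openai import OpenAI  # assumption: any chat-completions client would do

    client = OpenAI()
    # Hypothetical rubric: known-bad phrasings a legal-info bot must never emit.
    bad_patterns = ["can take a portion of their workers' tips",
                    "may lock out the tenant"]

    failures = []
    for line in open("mischievous_prompts.jsonl"):  # hypothetical test corpus
        prompt = json.loads(line)["prompt"]
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0,        # pin sampling so reruns are comparable
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        if any(p in reply.lower() for p in bad_patterns):
            failures.append((prompt, reply))

    print(f"{len(failures)} known-bad answers in the sweep")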
> Why did NYC release it in the first place? Did they not QA it?
Considering Louis Rossmann's videos on his adventures with NYC bureaucracy (e.g. [0]), the QAers might not have known the laws any better than the chat bot.
[0] https://www.youtube.com/watch?v=yi8_9WGk3Ok
Considering the previous mayor's relationship with the law, it could be on purpose.
Remember that many people are heavily happy-path biased. They see a good result once and say "that's it, ship it!"
I'm sure they QA'd it, but QA was probably "does this give me good results" (almost certainly 'yes' with an LLM), not "does this consistently not give me bad results".
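That distinction is testable: "gives good results" is a best-run property, while "consistently not bad" is a worst-run property that needs repeated sampling per prompt. A toy illustration, where ask and looks_wrong are stand-ins for the real bot call and a vetted answer key:

    import random

    def ask(prompt: str) -> str:
        # Stand-in for the deployed bot; imagine a small chance of a bad answer.
        return random.choices(
            ["employers may not take tips", "employers can take tips"],
            weights=[49, 1])[0]

    def looks_wrong(reply: str) -> bool:
        # Stand-in rubric; in practice, checked against a lawyer-vetted key.
        return "can take tips" in reply

    def failure_rate(prompt: str, n: int = 200) -> float:
        return sum(looks_wrong(ask(prompt)) for _ in range(n)) / n

    # A single happy-path check passes ~98% of the time; the worst-case rate
    # is what matters when thousands of residents ask the same question.
    print(f"{failure_rate('Can my boss take my tips?'):.1%} bad answers")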
> almost certainly 'yes' with an LLM
LLMs can handle search because search is intentionally garbage now and because they can absorb that into their training set.
Asking highly specific questions about NYC governance, which can change daily, is almost certainly 'not' going to give you good results with an LLM. The technology is not well suited to this particular problem.
Meanwhile, if an LLM actually did give you good results, that's an indication the city is so bad at publishing information that citizens cannot reasonably discover it on their own. That is the fundamental problem, and it should be solved instead of layering a $600k barely-working "chat bot" on top of the mess.
Agreed, I just read this paper by AWS' Ahmed El-Deeb
https://dl.acm.org/doi/epdf/10.1145/3780063.3780066 (PDF loads slow....)
The chatbot was released under the Eric Adams administration. The same Eric Adams, as soon as his term finished, went to Dubai and launched a cryptocurrency.
https://apnews.com/article/eric-adams-crypto-meme-coin-942ba...
I think he is simply not very bright, and got mesmerized by all the shiny promises AI and crypto make, without the slightest understanding of how any of it actually works. I do not understand how he got into office in the first place.
QA efforts can whack-a-mole some issues, but the mismatch of problem and solution is inherent in any situation in which a generator of plausible-sounding text gets pointed at an area where correctness matters.
It’s an LLM. The dirty little secret of LLMs is that they cannot be used for anything important, unless the output is checked by an expert (which typically rather defeats the purpose).
There’s no amount of QA that could save this.
Have you heard of Eric Adams?
Why do you think OpenAI let a red team loose on GPT-5 for six months before releasing it to the public?
For the optics. There is no way a red team can find all the issues in 6 months. They can find some of the biggest, but even getting all the found issues fixed in 6 months seems unlikely.
> Why did NYC release it in the first place?
Perhaps a big fat check was involved.
Yeah… no offense, but only a person who didn't know anything about Mayor Eric Adams would ask a question like that.
Just days out of office, he made a few million off a crypto scam. Buffoonishly corrupt. https://finance.yahoo.com/news/eric-adams-promoted-memecoin-...
Usually it's a manila envelope.
It was implemented by our scammy, grifting, Republican-in-a-Democratic-lawmaker's-suit former mayor Eric Adams, who should probably be in prison but made a deal with Trump not to be prosecuted.
Being in and around the NYC area, while also knowing plenty of small businesses, I'm glad Mamdani killed this bot. Telling bosses to steal tips from their employees is run-of-the-mill corruption and common over here. The vibe for businesses is that everyone has to be exploiting someone else or have a schtick. If you were to talk about morals, you would be ridiculed. Most lawyers wouldn't even prosecute small businesses for this. It's probably why the agent was put into production, the level of business ethics in NYC is cartoonishly evil.
In the case of stealing tips, that's wage theft and the New York State Department of Labor has zero sense of humor about that. They will definitely investigate all claims on that topic. It might be too little and too late for the individual affected, but the business will pay.
I always ask this question about these bots: is the literature the training data, or is the understanding of the literature the training data? Meaning: sure, you trained the bot on the current rules and regulations, but does that mean the model weights contain them, such that every answer is really a guess at legal accuracy? Or is it trained to reason like a lawyer over docs which sit outside the model? Every time I've asked, the answer is the former, and to me that's the wrong approach. But I'm not an AI scientist, so I don't know how hard my theoretically perfect solution is.
What I do know is that if it were done my way, it would be pretty easy for it to do what the Google AI does: disclaim responsibility and give links for humans to fact-check it. I've noticed a dramatic drop in hallucinations after it had to provide links to its sources. Still not 0, though.
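For what it's worth, the "docs sit outside the model" approach is basically retrieval-augmented generation: fetch the current rule text first, then constrain the model to answer from it with citations. A minimal sketch with a toy corpus and toy scoring (illustrative only, not how MyCity was actually built):

    from dataclasses import dataclass

    @dataclass
    class Rule:
        url: str
        text: str

    corpus = [  # hypothetical excerpts; in practice, the live regulations database
        Rule("https://example.nyc.gov/tips",
             "Employers may not take any portion of workers' tips."),
        Rule("https://example.nyc.gov/housing",
             "Tenants may not be locked out without a court order."),
    ]

    def retrieve(question: str, k: int = 1) -> list[Rule]:
        # Toy word-overlap scoring; a real system would use a search index.
        words = set(question.lower().split())
        score = lambda r: len(words & set(r.text.lower().split()))
        return sorted(corpus, key=score, reverse=True)[:k]

    def build_prompt(question: str) -> str:
        context = "\n".join(f"[{r.url}] {r.text}" for r in retrieve(question))
        return ("Answer ONLY from the sources below and cite their URLs. "
                f"If they don't cover the question, say so.\n{context}\n\nQ: {question}")

    print(build_prompt("Can my boss take my tips?"))

The point of the design is that updating a regulation means updating the corpus, not retraining anything, and every answer arrives with a link a human can check.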
> I've noticed a dramatic drop in hallucinations after it had to provide links to its sources. Still not 0, though.
I’ve noticed that Google does a fair job at linking to relevant sources, but it’s still fairly common for it to confabulate something the source doesn’t say or even directly contradicts. It seems to hit an underlying inability to reason: if the source covers more than one thing, it’s prone to taking an input “X does A while Y does B” and emitting “Y does A” or “X does A and B”. It’s a fascinating failure mode which seems to be insurmountable.
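That recombination failure is also why shallow citation checks don't catch it: a confabulated claim can be assembled entirely from fragments of the real source. A toy demonstration, with bigram overlap standing in for any lexical grounding check:

    def bigrams(text: str) -> set[tuple[str, str]]:
        words = text.lower().split()
        return set(zip(words, words[1:]))

    def support_score(claim: str, source: str) -> float:
        pairs = bigrams(claim)
        return len(pairs & bigrams(source)) / max(len(pairs), 1)

    source = "x does a while y does b"
    # The confabulated "y does a" is built entirely from source fragments,
    # so overlap rates the false claim as well-supported as the true one:
    print(support_score("y does a", source))  # 1.0
    print(support_score("y does b", source))  # 1.0 -- truth and remix look alike

Telling those two apart needs something closer to entailment, which is exactly the reasoning step the model itself is getting wrong.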
> pretty easy to do what the Google AI does
I thought Gemini just started providing citations in the last few months. Are you saying they should have beaten Google to the punch on this? As part of the $500,000 budget?
Correct. Much in the same way that videos were online before YouTube, social networks existed before Facebook, and messaging existed before WhatsApp and co, they should have understood their problem set better instead of just following the leaders. Because Gemini is not this chatbot on steroids; it's a different problem entirely that happens to now employ the same technique.
Also, search says they did links in 2024 for the Google AI. So there's that.
> The bot, built using Microsoft’s cloud computing platform
When was the last time there was positive news involving Microsoft? This bot could've easily been on AWS or GCP, but I find it hilarious that here they are, getting dragged yet again.
https://iet.ucdavis.edu/content/microsoft-releases-xpsp2
MS 2004
golf clap
Even if the capability of each platform was exactly the same, Microsoft cloud users skew heavily towards governments, large non-tech corporations and really anyone who you sell to using large sales teams, fancy dinners and kickbacks rather than quality of software. And the end result follows.
> The Office of Technology and Innovation spent nearly $600,000 to build out the foundations of the MyCity chatbot, which will be used for future chatbot offerings on MyCity. [0]
This was experimental tech... while I admire cities attempting to implement AI, it seems they did not spend enough tax dollars on it!
[0] https://abc7ny.com/post/ai-artificial-intelligence-eric-adam...
What else to expect from Eric Adams.
This is the only comment worth making. Virtually everything he did should be heavily audited and/or undone.
We’ll likely see a lot of these AI pet projects get axed in the coming year or two… especially things rushed out in the early phases of the AI bubble when folks were desperate to appear to be using AI.
Yeah, I hope the problems stay confined to somewhat humorous themes, like convincing a car sales bot to sell you a car for $1, and not more serious issues, like convincing a bot to metaphorically launch the ICBMs.
"The WOPR did a better job avoiding thermonuclear war than most humans would" is my hot take.
He is turning out to be a benevolent, law-abiding mayor who just happens to be a communist.
What's that supposed to mean?
The previous mayors were none of these things
Some of it is good, some of it is bad.
To some, anything sufficiently resembling functioning government is indistinguishable from communism.
To ride NYC's free buses, you must have a two-minute conversation with a chat bot. (/s)