I don’t disagree, but this is an issue of when and where it’s appropriate to use an LLM to interact with customers, and when it isn’t. If you present an LLM to the public, people who are prepared to manipulate it will do so in order to get it to do something it shouldn’t.
This also happens with human employees, but it’s generally harder, so it’s less common. This sort of behaviour is called social engineering, and fraudsters and scammers use it to get people to do what they want, typically handing over their bank details. The principle is the same: you’re manipulating someone (or something, in this case) into doing something they or it shouldn’t.
Just because we don’t like the fact that the business owner deployed an LLM in a manner they probably shouldn’t have doesn’t mean the customer isn’t the one in the wrong, or that they didn’t void whatever contract they had through their own actions. Whether it’s a human or an LLM on the other end of the chat makes no actual difference.
Again, I agree; we are in total agreement on the principle of whether a business should be held accountable for its LLM’s actions. We’re in “Fuck AI”!
I was merely pointing out in my original comment that there are absolutely grounds not to honour a contract when a customer has acted in bad faith to obtain an offer.