I find the arguments against buying AI work on quality grounds unconvincing. If folks are buying it, then obviously the quality was good enough for their purposes, and who am I to say they are wrong? To me, this is the same argument that has long been made about automated or mass-produced work, and the fact that everyone in this forum is participating via largely robot-produced machines tells me that none of us actually has significant reservations about the potential quality of such products. With apologies to those who are posting via entirely bespoke computers and chips.
So the qualitative argument is a red herring, IMO. Either the quality is sufficient, or it isn't, and as I posted earlier, the proof of the pudding will be in the eating.
So then there are ethical concerns. I find these more compelling, but they vary.
There is an argument that generative AI costs jobs. This is true. Automation costing people jobs is not new, though, so really this comes down to our fears that a new kind of job is at risk. These robots aren't coming for the assembly-line jobs (which had themselves already come for the cottage-industry jobs); now they are coming for the office jobs and the art-studio jobs.
In the past, automation has led to greater productivity and employment in the long run, but not without massive displacement and hardship, sometimes for entire regions. So this is a real problem, and it is already happening. If history is any guide, we can't stop it, because it turns out that people want the stuff automation gives us (see above). So instead of complaining about it, we should be actively engaged in doing better this time and preparing for it. We probably won't, because complaining is a lot easier and cheaper (in the short term) than changing our taxation and government policies on a wide scale, so likely we'll once again wait for the consequences to unfold, often tragically, and then belatedly react. People will suffer.
There are arguments about the ownership of ideas, and about how AI might challenge our existing notions and laws. Some of these get into areas that are difficult to assess because we don't fully understand how either humans or generative AIs produce what they produce. I find the argument that generative AI doesn't create but only replicates unconvincing, because I have used it, and while it builds on what has already been made, it definitely creates new things that haven't existed in that particular arrangement before (see our lengthy thread on Dall-E art). But... that's kind of what humans do, too.
I think we are in the middle of a technological revolution, and we are reacting with fear, which makes sense because the consequences are unknown but will likely be bad for some of us (possibly including me; I'm a teacher). I also think we are romanticizing the familiar, which is what humans always do when the status quo is threatened. In looking at the big picture, I think we will continue to progress. But people are going to be hurt, and we should focus on helping prevent as much of that as possible.