Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
Idrc about ai or whatever you want to call it. Make it all open source. Make everything an ai produces public domain. Instantly kill every billionaire who’s said the phrase “ai” and redistribute their wealth.
More regulation, supervised development, laws limiting training data to be consensual.
I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.
2 chicks at the same time.
TBH, it’s mostly the corporate control and misinformation/hype that’s the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people’s capacity for critical thinking.
ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights…
So yeah, uh… Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where it's only viable for non-trivial use cases.
I am largely concerned that the development and evolution of generative AI is driven by hype/consumer interests instead of academia. Companies will prioritize opportunities to profit from consumers enjoying the novelty and use the tech to increase vendor lock-in.
I would much rather see the field advanced by scientific and academic interests. Let’s focus on solving problems that help everyone instead of temporarily boosting profit margins.
I believe this is similar to how CPU R&D changed course dramatically in the 90s due to the sudden popularity of PCs. We could have enjoyed 64-bit processors and SMT a decade earlier.
Rename it to LLMs, because that's what it is. When the hype label is gone, it won't get shoved in everywhere for shits and giggles, and it can be used for the stuff it's actually useful for.
I want lawmakers to require proof that an AI is adhering to all laws, putting the burden of proof on the AI makers and users, and to require that all of an AI's actions can be analyzed on this question in court cases.
This would hopefully lead to the development of better AIs that are more transparent, and that are able to adhere to laws at all, because the current ones lack this ability.
Just mass public hangings of tech bros.
Shut it off until they figure out how to use a reasonable amount of energy and develop serious rules around it
Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, audio) for their models, or use material in the public domain. With that in mind, in return I'd love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.
I don’t love the environmental effects but I think the carbon output of OpenAI is probably less than TikTok, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I’m not sure it’s enough to really matter. I think we need bigger “what if” scenarios to handle that.
There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation to go into in a single post. Suffice it to say, I think AI is an existential threat to humanity, mainly because of who's controlling it and who's not.
We have no regulation on AI. We have no respect for artists, writers, musicians, actors, and workers in general coming from these AI-peddling companies. We only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies, and even worse, we get nothing in exchange except the promise of increased productivity and quality, and that promise is a lie. AI currently gives you the wrong answer, or some half-truth, or some abomination of someone else's artwork, really really fast… that is all it does, at least for the public sector currently.
For the private sector, at best it alienates people in the form of chatbots, and at worst it is being used to infer data for surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever wields them, and AI is one tool amongst many at the disposal of these tech giants.
AI is exacerbating a knowledge crisis that was already in full swing, as both educators and students become less curious about subjects that don't inherently relate to making profits or consolidating power. And because knowledge is seen solely as a way to gather more resources/power and survive in an increasingly hostile socioeconomic climate, people will always reach for the lowest-hanging fruit to get to that goal, rather than actually knowing how to solve a problem that hasn't been solved before, or inherently understanding a problem that has been solved before, or just knowing something relatively useless because it's interesting to them.
There are too many good reasons AI is fucking shit up, and in all honesty, what people in general tout about AI is definitely just a hype cycle that will not end well for the majority of us. At the very least, we should be upset and angry about it.
Here are further resources if you didn’t get enough ranting.
They have to pay for every piece of copyrighted material used in the entire model whenever the AI is queried.
They are only allowed to use data that people opt into providing.
There's no way that's even feasible. Instead, AI models trained on publicly available data should be considered part of the public domain. So any images that anyone can go and look at without a barrier in the way would be fair game, but the model would be owned by the public.
It's only not feasible because it would kill AIs.
Large models have to steal everything from everyone to be baseline viable
There’s no way that’s even feasible.
It’s totally feasible, just very expensive.
Either copyright doesn't exist in its current form or AI companies don't.
What about models folks run at home?
Careful, that might require a nuanced discussion that reveals the inherent evil of capitalism and neoliberalism. Better off just ensuring that wealthy corporations can monopolize the technology and abuse artists by paying them next-to-nothing for their stolen work rather than nothing at all.
Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.
If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.
Failing that, any models built on stolen work should be released to the public for free.
This is the best solution. Also, any use of AI should have to be stated and watermarked. If they used someone’s art, that artist has to be listed as a contributor and you have to get permission. Just like they do for every film, they have to give credit. This includes music, voice and visual art. I don’t care if they learned it from 10,000 people, list them.
If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.
I’m going to ask the tough question: Why?
Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.
Copyright law lets you download whatever TF you want. It isn’t until you distribute said copyrighted material that you violate copyright law.
Before generative AI, Google screwed around internally with all those copyrighted works in dozens of different ways. They never asked permission from any of those copyright holders.
Why is that OK but doing the same with generative AI is not? I mean, really think about it! I’m not being ridiculous here, this is a serious distinction.
If OpenAI did all the same downloading of copyrighted content as Google, and screwed around with it internally to train AI, but never released a service to the public, would that be different?
If I'm an artist who makes paintings and someone pays me to copy someone else's copyrighted work, that's on me to make sure I don't do that. It's not really the problem of the person that hired me unless they distribute the work.
However, if I use a copier to copy a book then start selling or giving away those copies, that's my problem: I would've violated copyright law. But is it Xerox's problem? Did they do anything wrong by making a device that can copy books?
If you believe that it’s not Xerox’s problem then you’re on the side of the AI companies. Because those companies that make LLMs available to the public aren’t actually distributing copyrighted works. They are, however, providing a tool that can do that (sort of). Just like a copier.
If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?
My argument is that there’s absolutely nothing illegal about it. They’re clearly not distributing copyrighted works. Not intentionally, anyway. That’s on the user. If someone constructs a prompt with the intention of copying something as closely as possible… To me, that is no different than walking up to a copier with a book. You’re using a general-purpose tool specifically to do something that’s potentially illegal.
So the real question is this: Do we treat generative AI like a copier or do we treat it like an artist?
If you’re just angry that AI is taking people’s jobs say that! Don’t beat around the bush with nonsense arguments about using works without permission… Because that’s how search engines (and many other things) work. When it comes to using copyrighted works, not everything requires consent.
Like the other comments say, LLMs (the thing you're calling AI) don't think. They aren't intelligent. If I steal other people's work, copy pieces of it, and distribute it as if I made it, that's wrong. That's all LLMs are doing. They aren't "being inspired" or anything like that; that requires thought. They are copying data and creating outputs based on weights that tell them how and where to put copied material.
I think the largest issue is people hearing the term "AI" and taking it at face value. There's no intelligence, only an algorithm. It's an algorithm convoluted enough that it's hard to tell what's going on just by looking at it, but it is an algorithm. There are no thoughts, only weights that are trained on data to generate predictable outputs based on given inputs. If I write an algorithm that steals art and reorganizes it into unique pieces, that's still stealing their art.
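To make that concrete, here's a toy sketch in Python (nothing like a real transformer; it's purely an illustration of "weights trained on data producing predictable outputs"). "Training" is just counting which word follows which, and "generation" replays those counts:

```python
# Toy illustration, NOT a real LLM: the "weights" here are just
# word-pair counts harvested from the training text.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the rat"

# "Train": count which word follows which.
weights = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    weights[current][nxt] += 1

# "Generate": repeatedly emit the most frequent successor seen
# in training. No reasoning happens anywhere in this loop.
def generate(start, length=5):
    out = [start]
    for _ in range(length):
        followers = weights[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat"
```

Real LLMs replace the word-pair table with billions of learned parameters, but the principle is the same one described above: statistics go in, recombined statistics come out.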
For a current example, the stuff going on with Marathon is pretty universally agreed to be bad and wrong. However, you're arguing that if it were an LLM that copied the artist's work into their product, it would be fine. That doesn't seem reasonable, does it?
If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?
I think this is intentionally missing the point.
LLMs don't actually think or produce original ideas. If a human artist produces a work that too closely resembles a copyrighted work, then they will be subject to those laws. LLMs are not capable of producing new works; by definition they are 100% derivative. But their methods intentionally obfuscate attribution and allow anyone to flood a space with works that require actual humans to identify the copyright violations.
Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.
No, they don't. They index the content of the page, score its relevance and reliability, and still provide the end user with the actual original information.
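The distinction is easy to see in code. Here's a minimal inverted-index sketch (toy Python with made-up URLs, nothing like a production engine): the index maps terms back to the pages that contain them, so a query returns ranked pointers to the originals rather than regenerated content:

```python
from collections import defaultdict

# Toy corpus; these URLs are made up for the example.
pages = {
    "https://example.com/a": "the quick brown fox",
    "https://example.com/b": "the lazy brown dog",
}

# Inverted index: each term maps back to the pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.split():
        index[term].add(url)

def search(query):
    # Crude relevance score: how many query terms each page contains.
    terms = query.split()
    scores = {url: sum(url in index[t] for t in terms) for url in pages}
    return sorted((u for u in scores if scores[u]), key=scores.get, reverse=True)

print(search("brown fox"))  # -> ['https://example.com/a', 'https://example.com/b']
```

What the engine hands back is the source URL itself; it never has to pass the page's text off as its own output, which is exactly the difference being drawn with LLMs.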
However, if I use a copier to copy a book then start selling or giving away those copies, that's my problem: I would've violated copyright law. But is it Xerox's problem? Did they do anything wrong by making a device that can copy books?
This is a false equivalence.
LLMs do not wholesale reproduce an original work in its original form; they make it easy to mass-produce a slightly altered form without any way to identify the original attribution.