Wouldn’t this kill their ad revenue? Which is like…most of their revenue?
Ad-supported articles are a dead industry, and Google realizes this better than anyone. People don't go to the source anymore to answer curiosities; why would you read a whole article to answer a simple question when AI gives you the answer directly?
I meant, why would people advertise on Google if it won't convert to clicks anymore?
They won't, and I'm saying Google knows that their advertising cash cow is running out of milk.
Context? Nuance? Verifying the AI slop?
The last thing I googled was how to measure dress shirt size. Do you need context and nuance for everything you Google?
Do you prefer to click on the SEO-optimized first-page results that are full of ads and read through a nonsense article about elegance in formal wear just to get to the instructions on where to place the measuring tape on your shoulder? I MUCH prefer the AI-summarized response.
Most of the Internet is NOT intellectual writing; it's blog spam to answer your daily curiosities and practical needs. A sufficiently trained model is a really good (and environmentally friendly) alternative.
If AI is answering, yes.
No, but that's not what I claimed, so you can have your strawman back.
Let me know when we get one. In the meantime, enjoy your thick, glue-riddled pizza sauce.
What? That's just stupid. I'm not remotely claiming they are intelligent, but dismissing their utility completely is just idiotic. How long do you think the plug-your-ears strategy will work?
Pick any model that has come out this year, ask it my example query or any similar daily curiosity you would Google, and show me how it gives you “thick, glue-riddled pizza sauce”. Show me a single GPT-3.5-comparable model that can't answer that query with sufficient accuracy.
You're being obtuse. You don't need nuance to figure out what size collar you should buy.
Not what I said at all. I simply stated that AI answers cannot be trusted without verifying them, which makes them a lot less useful.
You're moving the goalposts. You said you need nuance in how to measure a shirt size; you're arguing just to argue.
If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn’t be used for it. You would immediately notice this is a bad model when it tells you to measure your neck to get a sleeve length.
Am I making sense? If the model starts giving people bad answers, people will notice when reality hits them in the face.
So I’m making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.
I said I needed context to verify the AI was not giving me slop. If you want to trust AI blindly, go ahead; I'm not sure why you need me to validate your point.
And how would you notice, unless you either already know the correct answer (at least a ballpark) or verify what the AI is telling you?
What if it gives you an answer that does not sound so obviously wrong, like measuring the neck width instead of the circumference, or measuring from shoulder to wrist?
And once again I tell you that you can trust it blindly while I would not. I will add that I do not need another catalyst for the destruction of our planet just so I can get some trivia questions answered. Given the environmental cost of AI, I would expect a significant return, not just a trivia machine that may be wrong 25% of the time.