Let me know when we get one. In the meantime, enjoy your thick, glue-riddled pizza sauce.
What? That’s just stupid. I’m not remotely claiming they are intelligent, but to dismiss their utility completely is just idiotic. How long do you think the plug-your-ears strategy will work for?
Pick any model that has come out this year, ask it my example query or any similar daily curiosity you would Google, and show me how it gives you “thick, glue-riddled pizza sauce”. Show me a single GPT-3.5-comparable model that can’t answer that query with sufficient accuracy.
if AI is answering, yes.
You’re being obtuse. You don’t need nuance in trying to figure out what size collar you should buy.
You’re moving the goalposts. You said you need nuance in how to measure a shirt size; you’re arguing just to argue.
If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn’t be used for it. You would immediately notice this is a bad model when it tells you to measure your neck to get a sleeve length.
Am I making sense? If the model starts giving people bad answers, people will notice when reality hits them in the face.
So I’m making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.
You’re moving the goalposts. You said you need nuance in how to measure a shirt size; you’re arguing just to argue.
I said I needed context to verify AI was not giving me slop. If you want to trust AI blindly, go ahead; I’m not sure why you need me to validate your point.
If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn’t be used for it.
And how would you notice, unless you either already know the correct answer (at least a ballpark) or verify what the AI is telling you?
You would immediately notice this is a bad model when it tells you to measure your neck to get a sleeve length
What if it gives you an answer that does not sound so obviously wrong? Like measuring the neck width instead of the circumference? Or measuring shoulder to wrist?
So I’m making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.
And once again I tell you that you can trust it blindly, while I would not. I will add that I do not need another catalyst for the destruction of our planet just so I can get some trivia questions answered. Given the environmental cost of AI, I would expect a significant return, not just a trivia machine that may be wrong 25% of the time.
You’re being obtuse. You don’t need nuance in trying to figure out what size collar you should buy.
Not what I said at all. I simply stated that AI answers cannot be trusted without verifying them, which makes them a lot less useful.