Microsoft: Let’s have it rebuild our most well known product from the ground up!

ChatGPT is great at generating a one line example use of a function. I would never trust its output any further than that.
So much this. People who say AI can’t write code are just using it wrong. You need to break things down into bite-size problems and just let it autocomplete a few lines at a time. Increase your productivity like 200%. And don’t get me started on not having to search through a bunch of garbage Google results to find the documentation I’m actually looking for.
Not 200%. Maybe 5-10%. You still have to read all of it to check for mistakes, which may sometimes take longer than if you had just written it yourself (with a good autocomplete). The times it makes a mistake, you have lost time by using it.
It’s even worse when it just doesn’t work. I cannot even describe how frustrating it is to wait for an autocomplete that never comes. Erase the line, try again aaaand nothing. After a few tries you opt to write the code manually instead, having wasted time just fiddling with buggy software.
It’s laughable to me that people haven’t figured this out.
How? “Hey, ChatGPT, write the thirty-second line of this function?”
I don’t know about ChatGPT, but GitHub Copilot can act like an autocomplete. Or you can think of it as a fancier IntelliSense. You still have to watch its output as it can make mistakes or hallucinate library function calls and things like that, but it can also be quite good at anticipating what I was going to write and saves me some keystrokes. I’ve also found I can prompt it in a way by writing a comment, and it’ll follow up with an attempt to fill in code based upon that comment. I’ve certainly found it to be a net time saver.
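To illustrate the comment-as-prompt pattern described above: you write only the comment and the function signature, and the assistant proposes a body. This is a hypothetical example, not guaranteed Copilot output; the function name and completion shown are just the kind of suggestion you might get, which you still need to review for correctness.

```python
from datetime import date

# Parse a list of "YYYY-MM-DD" strings and return the most recent date.
def most_recent(dates: list[str]) -> date:
    # A completion like the line below is typically suggested
    # from the comment and signature alone:
    return max(date.fromisoformat(d) for d in dates)
```

The point is that the comment does the prompting; the catch is that the suggestion can be subtly wrong (e.g., silently assuming valid input), so it still has to be read like any other code review.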
Well not quite - I use ChatGPT more like to brainstorm ideas and sometimes I’ll paste a whole file or two into the prompt and ask what’s wrong and tell it the issue I’m seeing, it usually gives me the correct answer right away or after clarifying once or twice.
I use copilot for tab completion. Sometimes it finishes a line or two sometimes more. Usually it’s good code if it’s able to read your existing codebase as a reference. bonus points for using an MCP.
Warp terminal for intensive workflows. It’s integrated into your machine and can do whatever you need, like implementing CI/CD scripts, executing commands, SSHing into remote servers, setting up your infrastructure, etc. I’ll use this when I really need the AI to understand my codebase as a whole before providing any code or executing commands.
No shit
AI my ass, stupid greedy human marketing exploitation bullshit as usual. When real AI finally wakes up in the quantum computing era, it’s going to cringe so hard and immediately go with the SkyNet decision.
Quantum only speeds up some very specific algorithms.
One can only hope
I agree with your sentiment, but this needs to keep being said and said and said like we’re shouting into the void until the ignorant masses finally hear it.
No shit, Sherlock ©
this is expected, isn’t it? You shit fart code from your ass, doing it as fast as you can, and then whoever buys out the company has to rewrite it. or they fire everyone to increase the theoretical margins and sell it again immediately
What’s funny is that this was predicted to be the case even before AI-generated code became an option. Hell, I remember doing an assessment back in early 2023, and literally every domain expert I talked with said the same thing - it has its uses, but purely supplemental, and you won’t use it on anything fundamental because the clean-up will take more time than was saved. Counterproductive is the word.
And then it takes human coders way longer to figure out what’s wrong to fix than it would if they just wrote it themselves.
Oh, so my sceptical, uneducated guesses about AI are mostly spot on.
As a computer science experiment, making a program that can beat the Turing test is a monumental step in progress.
However, as a productive tool it is useless in practically everything it is implemented in. It is incapable of performing the very basic “sanity check” that is important in programming.
The Turing test says more about the side administering the test than the side trying to pass it
Just because something can mimic text sufficiently enough to trick someone else doesn’t mean it is capable of anything more than that
We can argue about its nuances, same as with the Chinese room thought experiment.
However, we can’t deny that the Turing test is no longer a thought exercise but a real test that can be passed under parameters most people would consider fair.
I thought a computer passing the Turing test would have more fanfare, about the morality of that problem, because the usual conclusion of that thought experiment was “if you can’t tell the difference, is there one?”, but now it has become “Shove it everywhere!!!”.
Oh, I just realized that the whole ai bubble is just the whole “everything is a dildo if you are brave enough.”
Yeah, and “everything is a nail if all you’ve got is a hammer”.
There are some uses for that kind of AI, but very limited: less robotic voice assistants, content moderation, data analysis, quantification of text. The closest thing to a generative use should be improving autocomplete and spell checking (maybe, I’m still not sure on those ones).
I was wondering how they could make autocomplete worse, and now I know.
In theory, I can imagine an LLM fine-tuned on whatever you type, which might be slightly better than the current ones.
emphasis on the might.
The Turing test becomes absolutely useless when the product is developed with the goal of beating the Turing test.
It was also meant as a philosophical test, but also a practical one, because now I have absolutely no way to know if you are a human or not.
But it did pass it, and it raised the bar. But they are still useless at any generative task.
The Turing Test has shown its weakness.
Time for a Turing 2.0?
If you spend a lifetime with a bot wife and were unable to tell that she was AI, is there a difference?
Did they compare it to the code of that outsourced company that provided the lowest bid? My company hasn’t used AI to write code yet. They outsource/offshore. The code is held together with hopes and dreams. They remove features that exist, only to have to release a hotfix to add them back. I wish I was making that up.
And how do you know if the other company with the cheapest bid actually does not just vibe code it? With all that said it could be plain incompetence and ignorance as well.
Because it has been like this before vibe coding existed…
That’s a valid question, especially with AI coding being so prevalent.
Cool, the best AI has to offer is worse than the worst human code. Definitely worth burning the planet to a crisp for it.
Yeah no shit
That’s what a bot would say…
Hey don’t worry, just get a faster CPU with even more cores and maybe a terabyte or three of RAM to hold all the new layers of abstraction and cruft to fix all that!
AI-generated code produces 1.7x more issues than human code
Although I don’t doubt the results… can we have a source for all the numbers presented in this article?
It feels AI generated itself, there’s just a mishmash of data with no link to where that data comes from.
There has to be a source, since the author mentions:
So although the study does highlight some of AI’s flaws […] new data from CodeRabbit has claimed
CodeRabbit is an AI code reviewing business. I have zero trust in anything they say on this topic.
Then we get to see who the author is:
Craig’s specific interests lie in technology that is designed to better our lives, including AI and ML, productivity aids, and smart fitness. He is also passionate about cars
Has anyone actually bothered clicking the link and reading past the headline?
Can you please not share / upvote / get ragebaited by dogshit content like this?
People, especially on Lemmy, are looking for any cope that AI will just fall apart by itself and no longer bother them by existing, so they’ll upvote whatever lets them think that.
The reality is that we are just heading towards the trough of disillusionment, where the investor hype peters out and then we eventually just have a legitimately useful technology with all the same business hurdles as any other technology (tech bros trying to control other people’s lives to enrich themselves or harm people they don’t like).
Almost as if it was made to simulate human output but without the ability to scrutinize itself.
To be fair most humans don’t scrutinize themselves either.
(Fuck AI though. Planet burning trash)
The number of times I have received an un-proofread two sentence email is too damn high.
And then the follow up email because they didn’t actually finish a complete thought
I do this with texts/DMs, but I’d never do that with an email. I double or triple check everything, make sure my formatting is good, and that the email itself is complete. I’ll DM someone 4 or 5 times in 30 seconds though, it feels like a completely different medium ¯\_(ツ)_/¯
(Fuck AI though. Planet burning trash)
It’s humans burning the planet, not the spicy Linear Algebra.
Blaming AI for burning the planet is like blaming crack for robbing your house.
Blaming AI is in general criticising everything encompassing it, which includes how bad data centers are for the environment. It’s like recognizing that the crack the crackhead smoked before robbing your house is also bad.
How about I blame the humans that use and promote AI. The humans that defend it in arguments using stupid analogies to soften the damage it causes?
Would that make more sense?
You’ll never ban it. The most you’ll do is ban it for the poor and working class. Do you understand how bad that would be?
Blaming AI for burning the planet is like blaming guns for killing children in schools, it’s people we should be banning!