

The other day I asked an LLM to create a partial number chart to help my son learn which numbers are next to each other. When I gave it very detailed instructions up front, it failed miserably every time. And even when I told it to correct specific things about its answer, it still basically ignored me. The only way I could get it to do what I wanted consistently was to break the instructions down into small steps and tell it to show me its progress.
I’d be very interested to learn its “thought process” in each of those scenarios.
Profits over productivity. If replacing people with AI, as impractical as it may be, leads to higher profits, then CEOs have an obligation to do so. Poverty, sickness, and homelessness are none of their fucking concern.