

I highly doubt that. For so many reasons. Here are just a few:
- What data would you train it on, the Constitution? The entirety of federal law? How would that work? Knowing how ridiculous textualism is even when done by humans, do you really think a non-thinking algorithm could understand the intention behind the words? Or even what laws, rules, or norms should be respected in each unique situation?
- We don’t know why LLMs return the responses they return. That opacity would make it hugely difficult to understand the reasoning behind its directives.
- If an LLM doesn’t know an answer, it will usually just make something up instead of saying so. Plenty of people do this too, but I’m not sure why we should trust an algorithm’s hallucinations over a human’s bullshit.
- How would you ensure the integrity of the prompt engineer’s prompts? Would there be oversight? Could the LLM’s “decisions” be reversed?
- How could you hold an LLM accountable for the inevitable harm it causes? People will undoubtedly die for one reason or another based on the LLM’s “decisions.” Would you delete the model? Retrain it? How would you prevent it from making the same mistake again?
I don’t mean this as an attack on you, but I think you place far more trust in LLMs than their current implementations deserve. These are unfinished products. They have some limited potential, but they should by no means have any power or control over our lives. Have they really shown you they should be trusted with this kind of power?
It isn’t just you and me. Not even the people who designed these models fully understand why they give the responses they give. It’s a well-known problem; our understanding is improving over time, but we still can’t fully explain their behavior.
Here’s the latest exploration of this topic I could find.