I have a couple of preliminary thoughts:
- Prioritize human readability: With LLMs progressively taking over more of the coding, and developers relegated to reviewing it, we might as well make life easier. Verbose syntax is fine since it costs nothing (to LLMs, anyway). Information-dense code is fun to write but not so much to read. Perhaps something close to pseudocode.
- Ability to self-verify: Reduce the burden on the developer of validating that the generated code is correct and not riddled with hallucinations. Code that compiles is a good baseline. Next comes static analysis of the kind Rust's compiler does. Beyond that is formal verification: being able to prove that the logic meets the spec.
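The self-verification ladder in the second bullet can be sketched with a toy example (the enum and function names here are my own, purely illustrative, not from the thread): even the lowest rung, compilation, catches a class of hallucinated code before it runs, because Rust's exhaustiveness checking refuses to compile a `match` that forgets a case.

```rust
// Toy illustration of "the compiler as verifier": removing any arm
// from the match below is a compile error, not a runtime surprise,
// so a hallucinated or missing case is caught before execution.
enum Status {
    Ok,
    Retry,
    Fatal,
}

fn describe(s: &Status) -> &'static str {
    // Exhaustive match: the compiler statically proves every
    // variant of Status is handled.
    match s {
        Status::Ok => "done",
        Status::Retry => "try again",
        Status::Fatal => "give up",
    }
}

fn main() {
    assert_eq!(describe(&Status::Ok), "done");
    assert_eq!(describe(&Status::Retry), "try again");
    assert_eq!(describe(&Status::Fatal), "give up");
}
```

This is only the baseline rung; the borrow checker and, further up, formal-verification tools extend the same idea to memory safety and full functional correctness.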
I’m also curious how well LLMs would write code in such an ideal language, given that they haven’t been trained on one.
- Python - AI/ML development
- TypeScript - web-based LLM apps
- Rust - high-performance inference
- Clojure - functional orchestration
- Julia - scientific AI
It's a bit surprising to see Clojure on this list; everything else is the usual suspects.
The LLMs will obviously decide on Python.