Assume mainstream adoption means being used by around 7% of all GitHub projects.
Personally, I’d like to see Nim get that growth.
If we’re saying 7% is the bar for mainstream, then Rust is my vote.
C# is not even mainstream by that standard.
I’d also like to see Julia used more.
Haskell. I think that more people being familiar with Haskell concepts would be good for programming culture, and it would increase the odds of me being able to write Haskell professionally, which is something I enjoy a lot when writing hobby code, at least. Having more access to tooling and a bigger ecosystem would be nice as well.
I’m not 100% sure about my answer though. For one, I might grow to resent Haskell if I had to use it at work, and there’s also a risk that it would be harder to do cool innovative stuff with the language when more big companies depend on it.
Futhark: a functional language that can be compiled to run in parallel on CPU or GPU. (No need to write CUDA directly.) https://futhark-lang.org
Esperanto.
Sorry to say, but once I realised how Euro-centric, and to my ear/eye Latin-centric, Esperanto is, I completely lost interest.
I don’t know if anyone has tried, but something which similarly draws influences from the languages that the vast majority of the world speaks would be wonderful.
You made me think of that xkcd about standards.
Anyway, the eurocentrism argument, while perhaps true due to the Latin root, seems like a bit of a savior complex, don’t you think? China itself pushed for Esperanto to be used as a business language internally late last century, as I recall.
savior complex
I don’t see that at all.
It’s about making a language that the maximum number of cultures can see themselves in, can have at least some familiarity with, and can feel acknowledged in the making of a global language … all of which is intended to get maximum buy-in around the world to establish a truly international language rather than a lingua franca derived from hegemony.
Maybe China was interested in Esperanto for a bit, but I’m betting that, like most such stories, it’s heavily exaggerated or outright bogus.
Someone already said that a created language either takes from too few source languages, alienating speakers of languages with no common characteristics, or takes from every language family and becomes a horrible mess that’s hard for everyone to speak.
So if a world language is a bad idea no matter what languages you use as a source, why not have Esperanto or something similar for Europe and the English-speaking world, then a different language for Asia, and another one for Africa? You’ve reduced the number of translators needed and left most people with a language close to their mother tongue. You could also break the suggested regions into smaller sections, e.g. give Germanic Europe a common Germanic language, west/south Europe gets Esperanto, and east Europe gets a common Slavic language. You still get languages that don’t neatly fit, like Hungarian, but it’s better for most language learners than the last example.
Personally, I’d not propose universal languages as a utopian idea, and would instead promote indigenous languages such as Catalan, Breton, and Irish, and promote learning many languages in a post-work society.
Yeah we can invent yet another language, and go through the motions of including everyone. But by god make sure you don’t forget anyone. Let’s throw in Chamicuro, Warlpiri, Liki, Tanema, Ongota, and Dumi, just to make sure. Don’t want to upset anyone….
Or we could stop inventing new ways to accuse things of not being inclusive enough. It’s getting bonkers… Not saying Esperanto is the best language, and it has its flaws as others have so vehemently stated, but if inclusivity is the primary motive when designing a language, then I can almost certainly guarantee that new language will be much worse.
I mean, English is basically the world language. It’s used by pilots, scientists, global finance, and diplomatic efforts. I’m gonna assume that almost no one would classify English as inclusive in its vocabulary. Unless you’re German, Dutch, or French, of course. Esperanto is at least more accessible and easier to learn, and carries Latin roots… shared with lots of languages. And it was invented by a member of a repressed minority in the old Russian Empire. What’s not to love?
My problem is not with inclusivity but with promoting uptake. If you are familiar with the grammar or phonetic sounds or some of the vocab, you are more likely to find that language easier to learn.
Both English and Esperanto share the same problems of universal languages that I mentioned. English does have the advantage in number of speakers, but it is a mess of a language for people to have to learn.
Again, to reiterate my counter to universal languages: why not learn, and potentially help revive, your local indigenous languages? In a world where universal translation exists on our phones, everybody being able to speak the same language matters less.
R
I know it’s not considered a “proper” programming language by some, but I’ve been working with it for years for scientific data analysis and I love it.
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T (Text)

correctAnswer :: T.Text
correctAnswer = "Haskell"
Malbolge
Some LISP going mainstream would be great!
Zig hasn’t been mentioned yet, so I’m just going to drop that here.
I personally have enjoyed the meta-programming and the ease of integrating with C libraries, and I like that it’s pretty straightforward to compile.
Elixir… please I want an Elixir job
The most beautiful language. Why doesn’t every language have pipes?
What could be the “killer app” for Crystal is an equivalent of Rails, since its syntax attempts to be very similar to Ruby’s. Even supposing it maintains all of Rails’ inefficiencies, if it “just works” and has a very small learning curve for RoR veterans, adoption could grow steadily.
I mentioned it in a reply but it deserves its own top-level answer.
Rust! Memory-leak-free code would make our world a better place!
Rust doesn’t guarantee the absence of memory leaks any more than Java/C++ do, so sadly I’m not sure it would help here. :)
Help me understand your point of view. How does Rust not prevent memory leaks?
There are built-in functions to leak memory that are perfectly safe. You can also do it really trivially by making a reference-count cycle. https://doc.rust-lang.org/book/ch15-06-reference-cycles.html
Rust only prevents memory unsafety, and memory leaks are perfectly safe. It’s use-after-frees, double frees, etc. that it prevents.
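For the curious, here’s a minimal sketch of the cycle that chapter describes (the Node type is my own illustration, not from the book); std::mem::forget and Box::leak are examples of the safe built-in leaks mentioned above:

use std::cell::RefCell;
use std::rc::Rc;

// A node that may point at another node.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    // Close the cycle: a -> b -> a. Both strong counts are now 2.
    *a.next.borrow_mut() = Some(Rc::clone(&b));
} // a and b go out of scope, each count only drops to 1, and
  // neither Node is ever freed: a leak, with no unsafe code.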
You are absolutely correct that Rust’s safety features don’t extend to memory leaks, but it’s still better than most garbage-collected languages unless you abuse Rc or something, and it does give you quite fine-grained control over lifetimes, copying, and heap allocations, which in practice means that Rust is fairly good about memory leaks compared to most languages.
Reference counting is a GC though?
It’s a bad one, sure, and it will leak memory in the case of a cycle, which most tracing GCs are able to collect.
Its main advantage is that there are no GC pauses.
I think you know what I mean when I contrast Rust with GC’d languages, we can call it opt-in garbage collection if we’re being pedantic.
How would Rust fare any better than a tracing GC? Realistically I’d expect GC’d languages to use more memory and have worse determinism in memory management, but I fail to really see a case where Rust would prevent memory leaks and GC languages wouldn’t.
If you just Rc everything (which I’d count as “abusing Rc”), Rust is significantly worse than a language with a good GC. The good thing about Rust is that it forces you to acknowledge and consider the lifetimes of objects. By default things are allocated on the stack, but if you make something global or dynamically managed (e.g. through Rc), you have to do so explicitly. The compiler has more compile-time information about when things can be freed, which means less runtime overhead for checks, and if you want to minimize the number of potentially long-lived objects, you can more easily see how long objects might live by reading the code, as well as get help from the compiler to determine whether a lifetime-based refactoring is sound.
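To make that concrete, a small sketch (the variable names are my own, not from the thread) contrasting the default scoped ownership with explicitly opting into shared, reference-counted data:

use std::rc::Rc;

fn main() {
    // Plain ownership: dropped deterministically at the closing
    // brace, with no runtime bookkeeping.
    let local = String::from("freed at end of scope");

    // Shared, dynamically managed data is opted into explicitly;
    // only these values carry a runtime reference count.
    let shared = Rc::new(String::from("freed when the last Rc drops"));
    let alias = Rc::clone(&shared);

    println!("{local}, {shared}, {alias}");
} // local and both Rc handles are dropped here; the count hits zero.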
Many here will hate this, but I’m looking forward to just using the English language to program and letting AI handle all the minutiae.
I don’t know why you’re being downvoted, but this could truly be the future of programming languages. We don’t have to manually compile everything to assembly today, do we? Imagine simply using English for pseudocode, with an AI compiler that writes the most performant code… How much would that speed up development time? No one would need to know different languages… The learning curve for programming relatively basic shit would be low.
I dunno, but I’ve seen a lot of unnecessary hate for AI in left-leaning communities…
Syntax has never really been an issue. The closest thing to plain-English programming is legal documents and contracts. As you can see, they are horrible to understand, but that’s the only way to correctly specify exactly what you want. And code is much better at it. Another data point is visual languages like Lego Mindstorms or LabVIEW: it’s quite easy to do basic things, but they don’t scale at all.
Syntax has never really been an issue.
But it has tho… For example, I do not know Rust. I want to add the notifications functionality to Lemmy. Lemmy is in Rust. To implement this relatively simple API, I need to learn Rust to a degree. Then I need to look at Lemmy’s file structure to understand the project further, to actually do what I want to do. What if this all could be abstracted away by me simply saying “post xyz to the expo-notifications server whenever someone messages someone”? An AI English-to-Rust interpreter could easily do this.
The closest thing to plain-English programming is legal documents and contracts. As you can see, they are horrible to understand, but that’s the only way to correctly specify exactly what you want.
This is what would define the smartness of the AI, wouldn’t it? Your project manager doesn’t tell you exactly what they want. You have the brains to interpret what they mean and do stuff accordingly, correct?
This requires many assumptions that you, or any computational system, have no formal reason to make. An interpreter that just guesstimates exactly how you want the program structured is going to run into problems when you, say, want to extend the program.
“Hi computer! Write me a program that makes money. I just have to run it and I become rich.”
Computer makes a blockchain. “Here you go! Keep running it in the background then just sell the mined coins!”