This is a total tangent, but I've always thought it might be interesting if an IDE pulled data from source control (and presented it inline somehow).
How "hot" a function is - has it received a lot of edits in the recent past? Or is it cold and old (and therefore, you'd imagine, reasonably bug-free for most use cases)?
I think the number of unique people editing a portion of code would also be interesting to know, since you might expect multiple authors to have different beliefs about the point of the code, which probably makes the code muddier and less clear.
Forget the IDE - the interpreter itself could be interactive. That is, a language could require an online connection to a service, let you type things that are ambiguous, take a guess (or ask you), record your intent interactively, and send back to the server what you ended up meaning.
Then you could finally type something like "double k" and have it confirm that you meant "set k equal to k times two" (yes, of course). But maybe one person means "append k to k as a string" - that one person in a hundred can pick that instead.
Today, thousands of people a day go through Google as part of this process. Why should Google be part of my actual workflow experience without being part of the IDE experience or, well, the language - or without using people's feedback in any way? It doesn't make sense.
Yes, I think something like this is the future of programming. It goes even further than what you're suggesting. The codebase will no longer need to be unambiguous, among other things.
I guess it's a bit more like Wolfram Alpha's disambiguation (though that's not interactive) or Google's "did you mean..." suggestions, but more of a dialog than a take-it-or-leave-it suggestion.
On Kite... I'm not familiar with Kite, but yes, "the language" should be much broader - way too broad to include in a static download. Take my example: every English-speaking programmer knows what it means to double something, but no programming language I know includes a function called double() that means "set this equal to itself * 2" (because including such a thing in a downloadable language would be a stupid idea). On the other hand, it's not stupid to do so interactively in a connected "language as a service", having it offer the option and keep track of the fact that people usually (90% of the time) end up wanting you to set it to the value * 2.
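To make the ambiguity concrete, here are the two readings spelled out in Rust (just an illustrative sketch, not anything an existing tool produces):

```rust
fn main() {
    // Reading 1 (what most people mean by "double k"): set k to k * 2.
    let mut k = 21;
    k *= 2;
    assert_eq!(k, 42);

    // Reading 2 (the one-in-a-hundred case): append k to k as a string.
    let k = "ab".to_string();
    let k = format!("{}{}", k, k);
    assert_eq!(k, "abab");
}
```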
You can test that this would work. Write some pseudocode to accomplish something.
Next, Google each line you've written, along with the name of a language. See? It works. You can turn the pseudocode into code in a real language interactively.
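As a concrete illustration of the kind of translation I mean (the pseudocode line, the file name, and the Rust it resolves to are made up for this example):

```rust
// Pseudocode line: "read the whole file into a string"
// Googling that line plus "rust" points pretty directly at:
use std::fs;

fn main() -> std::io::Result<()> {
    let contents = fs::read_to_string("input.txt")?;
    println!("{}", contents);
    Ok(())
}
```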
For choices the programmer has to make, the environment could ask them then and there.
This only works when a language is "live" or online and keeps track of what people end up meaning. Maybe I shouldn't call it a "language" but something else... Yet at the end of the day, if you're writing code, answering some questions, and moving on, it's close to a language, isn't it?
You have to weigh the ease-of-access vs. IDE clutter though.
For example, I see people using linters that update automatically as you type (I think on Atom there isn't even an option to disable this if the packages are enabled), which is frankly quite awful. When I'm writing code, I care a lot more about my thought process than about being distracted by style complaints; there's a limited amount of context you can keep in mind at once, and subsidiary things shouldn't be actively wasting your time.
At least for me, `git log -L<start>,<end>:<file>` is usually preferable.
That said, it shouldn't be a hard tool to build for yourself if this is what you need.
This is something that I'm actually writing for my own programming language, coincidentally also written in Rust.
My idea was to label each expression with these flags - i.e., is the expression constant? tail recursive? etc. - and then make that information available to the text editor and other tooling, so the user can instantly see certain things about their program, along with the kinds of optimizations the compiler will do for them.
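Roughly the shape I have in mind - a minimal sketch with made-up names, not the actual implementation:

```rust
// Hypothetical per-expression flags, attached during analysis and
// exposed to editor tooling.
#[derive(Debug, Default, Clone, Copy)]
struct ExprFlags {
    is_constant: bool,  // value is known at compile time
    is_tail_call: bool, // eligible for tail-call elimination
    may_panic: bool,    // evaluation could abort at runtime
}

// Each analyzed expression carries its flags alongside its source span,
// so a text editor plugin can surface them inline.
struct AnalyzedExpr {
    span: (usize, usize), // byte range in the source file
    flags: ExprFlags,
}

fn main() {
    let expr = AnalyzedExpr {
        span: (0, 5),
        flags: ExprFlags { is_constant: true, ..ExprFlags::default() },
    };
    println!("{:?} at {:?}", expr.flags, expr.span);
}
```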
Sounds like it could be a basis for a security-oriented scanner. Rust has less to worry about regarding memory safety, but performing taint analysis for logical security flaws could be mighty interesting.
I'm personally very interested in "non-optimizing optimizers": essentially lint passes which point out places where you could optimize something, without actually making that optimization. The advantage is that you get access to a lot of optimizations which would otherwise require making a lot of things undefined behavior in order to work.
An example of this is the escape analysis lint in rust-clippy, which can detect if an allocation via Box/Vec/String is unnecessary (though right now it only supports one of those and I need to work on it more) and tell you how to remove it.
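As an illustration of the general idea (a made-up example, not clippy's actual output): the allocation below never escapes the function, so it can simply be removed.

```rust
// The Box never leaves this function, so the heap allocation buys nothing.
fn sum_boxed(values: &[i32]) -> i32 {
    let total = Box::new(values.iter().sum::<i32>());
    *total
}

// What the lint would suggest instead: same behavior, no allocation.
fn sum_plain(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let v = [1, 2, 3];
    assert_eq!(sum_boxed(&v), sum_plain(&v));
}
```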
By the way, I recently tried to get rid of some `vec!`s in the benchmarksgame entries (well, to see if I could), but got into trouble with IntoIterator implementations, so an extension of EscapeAnalysis would be great!
I suspect that the escape analysis lint will never be as powerful as a human looking in a reasonably small region of code to remove specific allocations, so it wouldn't be able to help here. Probably. Its power is in that it can find unnecessary allocations in a large codebase.
It's half a lint. It collects metadata about the code, making it easier to do whole-program analysis. Most of the lints we have so far do at most whole-function analysis. The other half, to be written, would use the metadata to lint things. "Can never panic" is one example of a lint that could be written with this.
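A rough sketch of what such a restriction might accept and reject (a hypothetical example, since that half is still to be written):

```rust
// A "can never panic" restriction would accept this: the empty case is handled.
fn first_or_zero(xs: &[i32]) -> i32 {
    xs.first().cloned().unwrap_or(0)
}

// ...and flag this: indexing panics on an empty slice.
fn first(xs: &[i32]) -> i32 {
    xs[0]
}

fn main() {
    assert_eq!(first_or_zero(&[]), 0);
    assert_eq!(first(&[7]), 7);
}
```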
As an optimization pass, it would have to actually change the generated code. It doesn't affect that (nor does Rust's plugin lint functionality, which it uses, let you do that).
This sounds like a component of the type of tool normally called a static analysis engine. "Lint" to me implies more of a lightweight tool that complains about tabs vs spaces and things like that.
I just call it a lint because it uses the rustc::lint API. And I posited that such analysis belongs in the compiler.
However, lints no longer need to be lightweight; I've seen full-program analysis passes called lints. And in Rust, lints usually don't complain about tabs/spaces (though we could write one), but rather about certain idioms which hurt readability, flexibility, performance, or any combination of the above.
The general term lint refers to things that catch possible bugs. http://github.com/manishearth/rust-clippy/ contains tons of lints which are much more powerful than just tabs vs spaces. Many catch code style things, but some of them also catch possible bugs.
The example I gave of "does not panic" isn't really a lint, more of a restriction -- a static analysis pass that helps catch things which might not usually be a problem, but are a problem for your specific use case (so off by default). But there are lints as well which could benefit from the nsa lint backend.
Things that complain about tabs vs spaces and other code layout concerns are style checkers. Linters complain about the actual content of the program, and enforce certain behavior. For example, warning you about unused functions, parameters, and variables, warning on duplicate fields in JavaScript objects, complaining about using parameter lists in C, etc.
It's not about the optimising compiler, but rather about suggestions for more advanced refactorings and things like that. It's a pass for the linter which gathers the information that can be used to produce optimisation lints.