Doctors' unions have also strictly limited the number of seats in medical schools in order to limit the number of new doctors and keep their pay extremely high.
Hey @rscho
I saw a comment of yours from a few months ago on this site and I want to get hold of you; I didn't know a better way than replying to your latest comment.
Can you message me on discord? username: tag_durden
If you have a better contact method, let me know. I just didn't want to put my email out here.
That time would only be enough to read the overview, go through the examples, click on the "uniquely customizable" link hoping to see the answer and then be flooded with the garbage digital tokens of the scientific world - 100+ references...
True. If you are new to Lisps it will require substantially more than 5 minutes to learn about the different types of macros. For Racket in general you might start here.[0][1] For Rhombus probably here.[2][3]
I'm not clear what you're asking about regarding the ellipses, though.
I don't think they care about 'takeoff' as much as you think. It's mostly a research project in programming pedagogy. They're trying to make Lisp features more accessible to students in universities. A takeoff would surely be appreciated, but the immediate goal is not a takeover of the industry.
The author (M Flatt) is an incredibly gifted and productive programmer, btw.
Well, this is clearly an attempt at abstracting the kind of low-level stuff you describe. Perhaps it doesn't work (yet), but that shouldn't prevent people from trying? Involving an SMT solver suggests that the solver is doing the heavy lifting, not Python. PhDs often produce inapplicable stuff, but they are also the basis for industry/application R&D, such as what your org is doing... PhDs are the slaves of science. They make stuff happen for peanuts in return and deserve our respect for that, even if what happens is oftentimes a dead end. It's really sad seeing people shitting on PhDs.
Unfortunately, the comment you are responding to is more correct on this than I think you are. The Python thing was stupid, though - a lot of high-performance code gets written in libraries like numpy (which calls into C 99.99% of the time) or pytorch (JIT-compiled before executing) that keep Python out of the critical path.
The problem with this research is that many similar ideas have been tried before in the contexts of supercomputers or CPU compilers. They ultimately all fail because they end up (1) being more work to program in, and (2) not being any faster because real life happens. Networks drop packets, clock frequencies jitter, and all sorts of non-determinism happens when you have large scale. A static scheduler forces you to stall the whole program for any of these faults. All the gain you got by painstakingly optimizing things goes away.
PhD theses, in a best-case scenario, are the basis for new applications. Most of them amount to nothing. This one belongs on that pile. The sad part about that is that it isn't the student's fault that the professor sent them down a research direction that is guaranteed to amount to nothing of use. This one is on the professor.
I don't think the paper is about statically scheduled architectures. In fact they mention it's for modern accelerators. These switch between threads in a dynamic way rather than stalling. The scheduling being referred to seemed to mean the order in which instructions should be fed to a potentially dynamic scheduler to enable efficient usage of caches etc.
So I'm not sure you can dismiss it as a thesis which will amount to nothing on the basis that static scheduling is a bad idea!
I could easily have missed something, though. It's not a particularly clear or succinct write-up and I have only read some of it. If it does say somewhere that it only works for strictly deterministic in-order architectures, can you please point out where?
It's me who hadn't read all of the materials. But in fact I think we both agree about what the tool does, which is scheduling an instruction stream ahead of time.
I'm confused because that approach seems identical to all mainstream compilers I know of. GCC/Clang also schedule instructions statically and the right schedule will improve performance. Why won't it work here? What kind of dynamic scheduling do you think it needs in order to be useful? Like a tracing JIT? Or they need to implement the reordering in hardware and reschedule as instructions execute?
The issue is that it takes manual programmer effort for no gain over what a compiler gives you.
The selling point of most of these tools is that you can be smarter than the compiler, and this one (despite the example) is selling static scheduling of compute kernels, too. That would normally be up to the OS/driver with some flexibility.
Most CUDA kernels are hand tuned anyway so it's not clear this is a lot more effort for the programmer. Most compilers can't perform these kind of loop transformation and tiling optimizations for you. The CUDA compiler certainly doesn't do this. So it is actually both possible and very worthwhile to try and be smarter than the compiler in this case.
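For readers unfamiliar with the transformation being discussed, here is a minimal sketch in plain Python (a toy illustration of loop tiling in general, not code from the project or the CUDA toolchain) showing the kind of loop restructuring that compilers typically won't do for you:

```python
# Naive triple-loop matrix multiply: for large n, each pass over a row of B
# streams through memory with poor cache reuse.
def matmul_naive(A, B, n):
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Tiled version: process t x t blocks so each block of A and B is reused
# many times while it is still "hot" (in cache on a CPU, or staged into
# shared memory on a GPU). The arithmetic is identical; only the loop
# structure - and therefore the memory access pattern - changes.
def matmul_tiled(A, B, n, t=2):
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, t):
        for jj in range(0, n, t):
            for kk in range(0, n, t):
                for i in range(ii, min(ii + t, n)):
                    for j in range(jj, min(jj + t, n)):
                        for k in range(kk, min(kk + t, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```

On a GPU the tile would be staged into shared memory and the tile size tuned to the hardware; choosing that size and loop structure well is exactly the hand-tuning work being described above.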
In terms of scheduling compute kernels, the project doesn't remove the ability of the OS/driver to schedule execution. It's only affecting the instruction sequence within the kernel, which is something OS/drivers don't typically control. They retain full control over when the kernel is executed by the hardware.
(PTX is sometimes compiled to SASS by the Nvidia driver rather than ahead of time. That does allow the driver to reschedule instructions, but using this project doesn't prevent it from doing that. The driver will compile PTX emitted from this project in the same way as any other.)
Yes, micro-optimization with manual instruction scheduling is usually a great idea. No, macro-optimization with manual instruction scheduling is usually a bad idea. That it "doesn't remove" something isn't an argument that this is a good idea when the selling point is that it enables removal.
The "novel" idea here is the macro-optimization, with some overtures about making the micro-optimization easier - as you likely know, this is not true since the complexity here is more about understanding how to do better, not about what language features you use to make those changes.
"Oh, I emptied your bank account here, let me change this."
For AI to really replace most workers like some people would like to see, there are plenty of situations where hallucinations are a complete no-go and need fixing.
Yes, but honestly if you can use specialized software instead of writing your own, do so. Scheduling problems become very hairy quite quickly. For ASP, there is also s(CASP), which is the current flagship application of SWI-Prolog.
The big advantage of Prolog over solvers is the logic programming aspect. If you can express parts of your program in a logically pure way, the Prolog code will be much more concise than the Z3 equivalent, although probably slower. Another advantage is that Prolog outputs and reads Prolog terms, which makes it very good for metaprogramming. It's like Lisp: incredible if you're prototyping solo, not so much if you're part of a team of corporate cogs.
Yes.
> which removes the financial incentive for surgeons to do better and prevents bad surgeons from being weeded out of the market
This is absolutely not why the 'doctors unions' fight against performance statistics. Can you people think of nothing other than money?