I've started using Emacs as a database explorer.

So, Emacs has a built-in interactive SQL mode (M-x sql-mysql / sql-postgres / sql-sqlite, and so on). This mode opens a SQL shell similar to what you would see in a terminal. From there, you can do your selects, inserts, updates, etc.

You can also send strings from a different buffer to your SQL shell buffer.

Now in Emacs you can very easily evaluate Lisp code to define functions, redefine functions, and execute arbitrary expressions. You can also wrap your SQL expressions inside of Lisp code. By doing so, you can take advantage of Emacs's built-in Lisp evaluation tools to interact with your SQL database.

So instead of opening a shell in your terminal, selecting a database, and writing select statements to inspect your DB, you can instead...

In Emacs, create a file called something like "sql-notebook.el". Inside that file you write Lisp expressions that execute SQL queries. To execute one of those queries, you move the cursor over it and run the command `eval-last-sexp` (the default binding is Ctrl-x Ctrl-e; I have this bound to Ctrl-c Ctrl-c). The results of the evaluated expressions appear in your interactive SQL buffer.
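
For a rough idea of what such a file can contain, here is a minimal sketch (assuming an interactive SQL buffer has already been started with M-x sql-postgres or similar; the helper name `my/query` is made up, and `sql-send-string` is the stock sql.el function that pushes a string to that buffer):

    ;; sql-notebook.el -- a minimal sketch, not a complete setup
    (defun my/query (sql)
      "Send SQL to the current interactive SQL buffer."
      (sql-send-string sql))

    ;; Put the cursor after a form like this and run eval-last-sexp;
    ;; the results show up in the SQL shell buffer.
    (my/query "SELECT id, email FROM users ORDER BY created_at DESC LIMIT 10;")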

The obvious advantage of this is that you end up creating a library of often-used queries which are very convenient to execute simply by moving the cursor over the query and hitting Ctrl-c twice.

You also retain a history of these queries by virtue of them existing in a plain file, as opposed to ephemeral shell history.


hey that's pretty cool, I'll try to see how that works, I didn't know about it. But it sounds like a lot of work to set up. I wanted to make something faster.


It's not a lot of work, but it takes a little bit of existing Elisp knowledge. You can still evaluate plain SQL code in a buffer (sql-mode) and get most of the benefits I described, you just won't get the convenience of evaluating Lisp forms.


Kernel version 6.8 broke suspend ($ systemctl suspend) for me. I run two machines with near-identical setups; I upgrade my "preview" machine first to test for defects before updating my primary machine.

If I boot from kernel version 6.5, suspend works fine. Hold Shift while your machine is booting and the GRUB menu will allow you to select a different kernel version.


Lisp languages seem well-suited for building games. The ability to evaluate code interactively without recompilation is a huge deal for feature building, incremental development, and bug-fixing. Retaining application state between code changes seems like it would be incredibly useful. Common Lisp also appears to be a much faster language than I would have blindly assumed.

The main downside for me (in general, not just for game programming) is the clunkiness in using data structures - maps especially. But the tradeoff seems worth it.


One of the downsides is that implementations like SBCL are deeply integrated with the host platform and need things like a well-performing GC implementation; getting this running on specialized game hardware is challenging. The article describes that. Getting over the hurdle of the low-level integration is difficult. The reward comes when one reaches the point where the rapid incremental development cycle of Common Lisp, even with connected devices, kicks in.

For the old historic Naughty Dog use case, it was a development system written in Common Lisp on an SGI and a C++ runtime with low-level Scheme code on the Playstation.

> Common Lisp also appears to be a much faster language than I would have blindly assumed.

There are two modes:

1) fast, optimized code, which allows some low-level stuff to stay in Common Lisp

2) unoptimized but natively compiled code, which enables safe (the runtime does not crash) interactive and incremental development. This is the mode much of the software can run in nowadays, and it is still "fast enough" for many use cases (sketched below).
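
The sketch below is my own example, using standard optimize declarations; the function names are made up:

    ;; Mode 1: optimized -- type declarations plus (speed 3) let the
    ;; compiler emit tight machine code for the low-level parts.
    (defun dot2-fast (ax ay bx by)
      (declare (optimize (speed 3) (safety 0))
               (type double-float ax ay bx by))
      (+ (* ax bx) (* ay by)))

    ;; Mode 2: default/safe -- still natively compiled, but with full
    ;; checks and debug info, friendly to interactive redefinition.
    (defun dot2 (ax ay bx by)
      (declare (optimize (safety 3) (debug 3)))
      (+ (* ax bx) (* ay by)))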


Except for occasionally using a small embedded Scheme in C++ when I worked at Angel Studios, I haven’t much experience using Lisp languages for games.

That said I have a question: is it a common pattern when using Lisp languages for games to use a flyweight object reuse pattern? This would minimize the need for GC.


If that's your main downside, that's pretty good, since clunkiness is in many ways fixable. Personally with standard CL I like to use property lists with keywords, so a "map literal" is just (list :a 3 :b 'other). It's fine when the map is small. The getter is just getf, setting is the usual setf around the getter. There's a cute way to loop by #'cddr for a key-and-value loop, though Alexandria (a very common utility library) has some useful utils for looping/removing/converting plists as well.
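
A tiny sketch of that idiom (standard CL; the keys and values here are arbitrary):

    (let ((m (list :a 3 :b 'other)))
      (getf m :a)             ; read a value => 3
      (setf (getf m :c) 10)   ; add or update a key
      ;; iterate key/value pairs two at a time
      (loop for (k v) on m by #'cddr
            do (format t "~a = ~a~%" k v)))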

If typing out "(list ...)" is annoying, it's a few lines of code to let you type {:a 3 :b 4} instead, like Clojure. And the result of that can be a plist, or a hash table, or again like Clojure one of the handful of immutable map structures available. You can also easily make the native hash tables print themselves out with the curly bracket syntax.
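
For the curious, one possible shape of such a reader macro (my own rough sketch, not any particular library's implementation; note it expands into code that evaluates the elements at runtime via (list ...)):

    ;; Read {...} as code that builds a hash table at runtime.
    (defun read-brace-map (stream char)
      (declare (ignore char))
      (let ((forms (read-delimited-list #\} stream t)))
        `(let ((h (make-hash-table :test #'equal)))
           (loop for (k v) on (list ,@forms) by #'cddr
                 do (setf (gethash k h) v))
           h)))

    (set-macro-character #\{ #'read-brace-map)
    (set-macro-character #\} (get-macro-character #\) nil))

    ;; After this, {:a 3 :b (+ 2 2)} reads as code that returns a hash table.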

(On the speed front, you might be amused by https://renato.athaydes.com/posts/how-to-write-slow-rust-cod... But separately, when you want to speed up Lisp (with SBCL) even more than the default, it's rather fun to be able to run disassemble on your function and see what it's doing at the assembly level. You can turn up optimization hints and have the compiler start telling you (even at the individual function level) where it has to use e.g. generic addition instead of a faster assembly instruction because it can't prove type info, so you'll have to tell it or fix your code. It can tell you about dead code it removed. You can declare stack allocation if needed. Simple benchmarking that also includes processor cycles and memory allocated is available immediately with the built-in time macro...)
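
A compressed example of that workflow (assuming SBCL; the function is made up):

    (defun sum-to (n)
      (declare (optimize (speed 3))
               (type fixnum n))
      ;; With (speed 3), SBCL prints efficiency notes here about e.g.
      ;; generic arithmetic it couldn't open-code until types are pinned down.
      (loop for i of-type fixnum from 1 to n sum i))

    (disassemble #'sum-to)    ; inspect the generated machine code
    (time (sum-to 1000000))   ; TIME reports run time, cycles, and bytes consed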


The cost of a macro is not measured in lines of code. It's measured in things like adoption, clarity, and debuggability.


Things have costs; what's your underlying point? That one shouldn't create such a macro, even if it's a one-liner, because of unquantified costs or concerns...?

Singling out individual macros for "cost" analysis this way is very weird to me. I disagree entirely. Everything has costs, not just macros, and if you're doing an analysis you need to include the costs of not having the thing (i.e. the benefits of having it).

Anyway, whether it's a reader macro, compiler macro, or normal function, lines of code is actually a great proxy measure for all sorts of things, even if it can be an abused measure. When compared to more complex metrics like McCabe's cyclomatic complexity, or Halstead's Software Science metrics (which use redundancy of variable names to try to quantify something like clarity and debuggability), the correlations with simple lines of code are high. (See for instance https://www.oreilly.com/library/view/making-software/9780596... which you can find a full pdf of in the usual places.) But the correlations aren't 1, and indeed there's an important caveat against making programs too short.

Though a value you didn't mention, which I think can factor into cost, is "power": shorter programs (and languages that enable them) are generally seen as more powerful, at least for that particular area of expression. Shorter programs are one of the benefits of higher-level languages. And besides power, I do think fewer lines of code most often corresponds to superior clarity and debuggability (and of course fewer bugs overall, as other studies will tell you), even if code golfing can take it too far.

I wouldn't put much value in any cost due to a lack of adoption, because as soon as you do that, you've given yourself a nice argument to drop Lisp entirely and switch to Java or another top-5 language. Maybe if you can quantify this cost, I'll give it more thought. It also seems rather unfair in the context of CL, because the way adoption of, say, new language features often happens in other ecosystems is by force, but Lisp has a static standard, so adoption otherwise means adoption of libraries or frameworks where incidentally some macros come along for the ride.

e.g. I think easy-routes' defroute is widely adopted among users of hunchentoot, but will never be for CL users in general because it's only relevant for webdev. And fare's favorite macro, nest, is part of uiop and so basically part of every CL out there out of the box -- how's that for availability if not adoption -- but I think its adoption is and will remain rather small, because the problem it solves can be solved in multiple ways (my favorite: just use more functions) and the most egregious cases of attacking the right margin don't come up all that often. Incidentally, it's another case in point on lines of code: the CL implementation is a one-liner and easy to understand (and like all macros rather easy to test/verify with macroexpand) but the Scheme implementation is a bit more sophisticated: https://fare.livejournal.com/189741.html
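
For readers who haven't seen it: nest simply nests each form into the tail of the previous one, which flattens deeply indented code. A small sketch using the uiop-exported version:

    ;; (uiop:nest (a b) (c d) e) expands to (a b (c d e))
    (uiop:nest
     (let ((path #p"data.txt")))
     (with-open-file (in path))
     (loop for line = (read-line in nil)
           while line
           collect line))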

What's your cost estimate on a simple version of the {} macro shown in https://news.ycombinator.com/item?id=1611453 ? One could write it differently, but it's actually pretty robust to things like duplicate keys or leaving keys out, it's clear, and the use of a helper function aids debuggability (popularized most in call-with-* macro expansions). However, I would not use it as-is with that implementation, because it suffers from the same flaw as Lisp's quote-lists '(1 2 3) and array reader macro #(1 2 3) that keep me from using either of those most of the time as well. (For passerby readers, the flaw is that if you have an element like "(1+ 3)", that unevaluated list itself is the value, rather than the computation it's expressing. It's ugly to quasiquote and unquote what are meant to be data structure literals, so I just use the list/vector functions. That macro can be fixed on this though by changing the "hash `,(read-..." text to "hash (list ,@(read-...)". I'd also change the hash table key test.)
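
Concretely, the trip-up looks like this at the REPL:

    CL-USER > '(1 2 (1+ 3))
    (1 2 (1+ 3))          ; the unevaluated form is the third element
    CL-USER > (list 1 2 (1+ 3))
    (1 2 4)               ; the computation actually happens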

A basically identical version at the top most level is here https://github.com/mikelevins/folio2/blob/master/src/maps-sy... that turns the map into an fset immutable map instead, minor changes would let you avoid needing to use folio2's "as" function.


Please try to respond to my argument without 1) straw-manning it, 2) or reading a bunch into it that isn't there.

You made a point about the macro only costing a few lines of code. That is not a useful way to look at macros, as I can attest, having written any number of short macros that I in retrospect probably shouldn't have written, and one or two ill-conceived attempts at DSLs.

Sometimes fewer lines of code is not better. Code golfing is not, in and of itself, a worthy engineering goal. The most important aims of abstraction are clarity and facility, and if you do not keep those in mind as you're shoving things into macros and subroutines and code-sharing between different parts of the codebase that should not be coupled, you are only going to lead yourself and your teammates to grief.

Things have costs. Recognize what the costs are. Use macros judiciously.


I started with my two questions not to strawman, but to find out if there was some underlying point or argument you had in mind that prompted you to make such a short reply in the first place. All I could read in it was not an argument, but a high level assertion, and not any sort of call to action. That's fine, I normally would have ignored it, but I felt like riffing on my disagreement with that assertion. To reiterate, I think you can reasonably measure cost through lines of code, even if that shouldn't be the only or primary metric, and I provided some outside-my-experience justifications, including one that suggests that an easy to measure metric like lines of code correlates with notoriously harder to measure metrics like the three things you stated. (If cost is to be measured by clarity -- how do you even measure clarity? Halstead provides one method, it's not the only one, but if we're going to use the word "measure", I prefer concrete and independently repeatable ways to get the same measurement value. Sometimes the measurement is just a senior person on a team saying something is unclear, often if you get another senior's opinion they'll say the same thing, but it'd be nice if we could do better.)

Now you've expanded yourself, thanks. I mostly agree. My quibble is with size being "not a useful way" to look at it: a larger macro is more likely to be complex, difficult to understand, buggy, and harder to maintain, so it had better enable a correspondingly large amount of utility. But it doesn't necessarily have to be complex; it could just be large while wrapping a lot of trivial boilerplate. DSL-enabling macros are often large, but I don't think they justify themselves much of the time. And I've also regretted some one-line macros. Length can't be the only thing to look at, but it has a place. I'd much rather be on the hook for dealing with a short macro than a large one. Independent of size, I rather dislike how macros in general can break interactive development. What's true for macros is that they're not something to spray around willy-nilly; it's a lot less true to say the same about functions.

If you'd asked, I don't think I'd have answered that those two things are the most important aims of abstraction, but they're quite important for sure, and as you say the same problems can come with ill-made subroutines, not just macros. I agree overall with your last two paragraphs, and the call to action about recognizing costs and using macros judiciously. (Of course newbies will ask "how to be judicious?" but that's another discussion.)


> simple version of the {} macro shown in https://news.ycombinator.com/item?id=1611453

That's not implementing a literal (an object that can be read), but a shorthand notation for constructor code. The idea of a literal is that it is an object created at read time and not at runtime.

In Common Lisp, every literal notation returns an object when read, i.e. at read time. The {} example does not, because the read macro creates code rather than a literal object of type hash-table. The code then needs to be executed to create an object, which happens at runtime.

The ANSI CL glossary says:

https://www.lispworks.com/documentation/HyperSpec/Body/26_gl...

> literal adj. (of an object) referenced directly in a program rather than being computed by the program; that is, appearing as data in a quote form, or, if the object is a self-evaluating object, appearing as unquoted data. ``In the form (cons "one" '("two")), the expressions "one", ("two"), and "two" are literal objects.''

    CL-USER 4 > (read-from-string "1")
    1
    1

    CL-USER 5 > (read-from-string "(1 2 3)")   ; -> which needs quoting in code, since the list itself doubles in Lisp as an operator call
    (1 2 3)
    7

    CL-USER 6 > (read-from-string "1/2")
    1/2
    3

    CL-USER 7 > (read-from-string "\"123\"")
    "123"
    5

    CL-USER 8 > (read-from-string "#(1 2 3)")
    #(1 2 3)
    8
But the {} notation does not describe a literal: when read, it creates code, not an object of type hash-table.

    CL-USER 9 > (read-from-string "{:foo bar}")
    (LET ((HASH (MAKE-HASH-TABLE))) (SET-HASH-VALUES HASH (QUOTE (:FOO BAR))) HASH)
    10
This also means that (quote {:a 1}) generates a list and not a hash-table when evaluated. A literal can be quoted. The QUOTE operator prevents the object from being evaluated.

    CL-USER 13 > (quote {:a 1}) 
    (LET ((HASH (MAKE-HASH-TABLE))) (SET-HASH-VALUES HASH (QUOTE (:A 1))) HASH)

    CL-USER 14 > '(defun foo () "ab cd")
    (DEFUN FOO NIL "ab cd")
In the above example, the string is a literal object in the code.

    CL-USER 15 > '(defun foo () {:foo bar})
    (DEFUN FOO NIL (LET ((HASH (MAKE-HASH-TABLE))) (SET-HASH-VALUES HASH (QUOTE (:FOO BAR))) HASH))
In the above example there is no hash-table embedded in the code. Instead, each call to FOO will create a fresh new hash-table at runtime. That's not the meaning of a literal in Common Lisp.


Thanks for the clarification on the meaning of "literal" in Common Lisp, I'll try to keep that in mind in the future. My meaning was more in the sense of literals being some textual equivalent representation for a value. Whether or not computation behind the scenes happens at some particular time (read/compile/run) isn't too relevant. For example in Python, one could write:

    a = list()
    a.append(1)
    a.append(2)
    a.append(1+3)
You can call repr(a) to get the canonical string representation of the object. This is "[1, 2, 4]". Python's doc on repr says that for many object types, including most builtins, eval(repr(obj)) == obj. Indeed eval("[1, 2, 4]") == a. But what's more, Python supports a "literal" syntax, where you can type in source code, instead of those 4 lines:

    b = [1, 2, 1+3]
And b == a, despite this source not being exactly equal at the string-diff level to the repr() of either a or b. The fact that there was some computation of 1+3 that took place at some point, or in a's case that there were a few method calls, is irrelevant to the fact that the final (runtime) value of both a and b is [1, 2, 4]. That little bit of computation of the elements is usually expected in other languages that have this sort of way to specify structured values, too; Lisp's behavior trips up newcomers (and Clojure's as well for simple lists, but not for vectors or maps).

Do you have any suggestions on how to talk about this "literal syntax" in another way that won't step on or cause confusion with the CL spec's definition?


> Whether or not computation behind the scenes happens at some particular time (read/compile/run) isn't too relevant.

Actually it is relevant: is the object mutable? Are new objects created? What optimizations can a compiler do? Is it an object which is a part of the source code?

If we allow [1, 2, (+ 1 a)] in a function as a list notation, then we have two choices:

1) every invocation of [1, 2, (+ 1 a)] returns a new list.

2) every invocation of [1, 2, (+ 1 a)] returns a single list object, but modifies the last slot of the list. -> then the list needs to be mutable.

    (defun foo (a)
      [1, 2, (+ 1 a)])
Common Lisp in general assumes that in

    (defun foo (a)
     '(1 2 3))
it is undefined what exact effects attempts to modify the quoted list (1 2 3) have. Additionally, the elements are not evaluated. We have to assume that the quoted list (1 2 3) is a literal constant.

Thus FOO

* returns ONE object. It does not cons new lists at runtime.

* modifying the list may not be possible. A compiler might allocate such an object in a read-only memory segment (that would be a rare feature, but it might happen on architectures like iOS where machine code is by default not mutable).

* attempts to modify the list may be detected.

SBCL:

    * (let ((a '(1 2 3))) (setf (car a) 4) a)
    ; in: LET ((A '(1 2 3)))
    ;     (SETF (CAR A) 4)
    ; 
    ; caught WARNING:
    ;   Destructive function SB-KERNEL:%RPLACA called on constant data: (1 2 3)
    ;   See also:
    ;     The ANSI Standard, Special Operator QUOTE
    ;     The ANSI Standard, Section 3.7.1
    ; 
    ; compilation unit finished
    ;   caught 1 WARNING condition
    (4 2 3)
* attempts to modify literal constants may modify coalesced lists

for example

    (defun foo ()
      (let ((a '(1 2 3))
            (b '(1 2 3)))
        (setf (car a) 10)
        (eql (car a) (car b))))
In the above function, a file compiler might detect that similar lists are used and allocate only one object for both variables.

The value of (foo) can then be T or NIL, a warning might be signalled, or an error might be detected.

So Common Lisp really pushes the idea that in source code these literals should be treated as immutable constant objects, which are a part of the source code.

Even for structures: (defun bar () #S(PERSON :NAME "Joe" :AGE a)) -> A is not evaluated, and BAR always returns the same object.

> Do you have any suggestions on how to talk about this "literal syntax" in another way that won't step on or cause confusion with the CL spec's definition?

Actually I was under the impression that "literal" in a programming language often means "constant object".

See for example string literals in C:

https://wiki.sei.cmu.edu/confluence/display/c/STR30-C.+Do+no...

Though it's not surprising that languages may assume different, more dynamic semantics for compound objects like lists, vectors, hash tables or OOP objects, especially languages which are focused more on developer convenience than on compiler optimizations. Common Lisp does not provide an object notation with default component evaluation there, but assumes that one uses functions for object creation in this case.


Yeah, again, I meant irrelevant to those who share the broader ("dynamic" is a fun turn of phrase) definition of "literal" I was using; it's very relevant to CL. I thought of mentioning the CL undefined behavior around modification you brought up explicitly in the first comment as yet another reason I try to avoid using #() and quoted lists, but it seemed like too much of an aside in an already long aside. ;) But while in aside-mode: I really think this behavior is quite a bad kludge of the language, and possibly the best thing Clojure got right was its insistence on non-place-oriented values. But it is what it is.

Bringing up C is useful because I know a similar "literal" syntax has existed since C99 for structs, and is one of the footguns available to bring up if people start forgetting that C is not a subset of C++. Looks like they call it "compound literals": https://en.cppreference.com/w/c/language/compound_literal (And of course you can type expressions like y=1+4 that result in the struct having y=5.) And it also notes about possible string literal sharing. One of the best things Java got right was making strings immutable...


> The ability to evaluate code interactively without recompilation

SBCL and other implementations compile code to machine code and then execute it. That is to say, when a form is submitted to the REPL, the form is not interpreted, but first compiled and then executed. The reason execution finishes quickly is that compilation finishes quickly.

There are some implementations, like CCL, with a special interpreter mode exclusively for REPL-usage.[1] However, at least SBCL and ECL will compile code, not interpret.
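
One quick way to see this at an SBCL prompt (sketch; SQUARE is just an arbitrary example):

    * (defun square (x) (* x x))
    SQUARE
    * (compiled-function-p #'square)   ; defined interactively, already compiled
    T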

[1] https://github.com/Clozure/ccl/blob/v1.13/level-1/l1-readloo...


I specifically talk about the fast evaluator for SBCL. But even without that contrib, SBCL does have another evaluator as well that's used in very specific circumstances.


I think a lot of this is confusion between online versus batch compilation? Most of us have only ever seen/used batch compilation. To that end, many people assume that JIT in an interpreter is how online compilation is done.

I probably am more guilty of that than I should be.


> online compilation

? incremental compilation


I confess I wasn't positive what the correct term would be. "Online" is common for some uses of it. And I "knew" that what we call compilation for most programs used to be called "batch compilation." Searching the term was obnoxious, though, such that I gave up. :(


Do either CCL or SBCL have any kind of partial evaluation or tracing compilation?


> the form is not interpreted, but first compiled then executed

That's TempleOS technology right there.


Other way around.


There are 1980s papers about Lisp compilers competing with Fortran compilers; unfortunately, with the AI Winter and the high costs of such systems, people lost sight of it.


Well, I imagine at the time they had some LISP implementations that were very well tuned for specific high end machines, which essentially duplicated Fortran functionality. This is difficult to do for general purpose Lisps like SBCL. It was also probably very expensive.


What is difficult is having the compiler-team budgets of Apple, Google, IBM, Microsoft, Intel, NVidia, AMD, ...


As well as high end machines built for Lisp.


There are some libraries that make maps and the like usable with a cleaner syntax. You could also write some macros of your own for the same purpose, if syntax is the concern.


https://mmabetsharp.com

A website to help make informed bets on the UFC. It presents a lot of relevant data if you're serious about MMA sports betting.

I made this during the pandemic and tried to promote it through Reddit and Twitter, but it mostly fell flat and ran out of steam. I only scratched the surface of what I intended here. The data on the site is a bit outdated since neither I nor anyone else has used it in a while.

A bit bummed that it never caught on within the MMA capping community, but I've felt I could always come back to it if the potential presented itself.


This is rad. I'd love to see it updated with current data. I've tried to bet on MMA with mixed results, but ultimately tried to find a data driven approach to place wagers. My approach would net me tons of open browser tabs trying to track down stats about each fighter :(

I really like what you've done.


Thanks, I really appreciate that. MMA (UFC specifically) is still wide open to advantage betting because lines are often highly narrative-based. Data about MMA is often misleading, though, for many reasons (e.g., a low quality of opponents presenting a skewed perception of fighter ability). However, looking at specific things is often quite useful.

The submission charts section of the site was particularly useful to me. You find that some fighters are skilled at one specific submission and that sometimes their opponents are highly vulnerable to that particular submission.

A big issue I found when trying to market the site was that the majority of bettors have no interest in looking at data, reviewing fights, or putting any time towards making informed bets. Most people prefer to place bets naively, primarily for the sake of entertainment.


I think an interesting observation here is that the very thing that makes it difficult to get adoption of this product/data is the thing that makes the product/data valuable. Without the naive/narrative/entertainment bettors it would be more difficult to find an edge.

Presumably there is a very small, niche population that would find this information very valuable. Funnily enough, those that do find it valuable are unlikely to share it - if they're acting in their own self interest they don't want other sophisticated bettors to enter the market and inadvertently help the bookies set the line more accurately.


I've done (by my own definition) quite a lot of work scraping and analyzing NCAAF and NFL games in order to identify trends for sports betting. I'm somewhere between your average sports bettor and a data scientist, probably closer to the former.

Let me know if you'd ever like to connect. It would be cool to expand my network of people in this area.


I'm far from a data scientist. I'm just someone who has an interest in building UIs and saw the potential to look at data in a way I hadn't seen before. I'm not actively working on this right now, but if you'd like to drop a line for whatever reason, I still check my Twitter for messages when they come up.

https://twitter.com/mmabetsharp


Could see this applying to the upcoming market in chess betting. Good luck!


Where do you collect the data from? Accurate sports data is generally hard to come by unless you pay for it.


It's freely available on Wikipedia, UFCstats, sherdog, and tapology. You have to scrape it, but you can also find large collections of structured data if you look around.


I was really interested in BV's work for a while, but some of the major roadblocks you quickly run into once you start thinking about how to implement his ideas are that:

1. Whenever you zoom out of the code-level, you lose granularity and thus flexibility and power.

2. In order to gain expressiveness, you can constrain the domain, but again you lose flexibility to implement what you want and how you want.

3. It's difficult to avoid losing the ability to express things in general ways whenever you switch to visual or physical representation of code.

4. A lot of the ideas you might have end up being more simply represented by code, and more easily manipulated by way of text and keyboard.

5. A lot of things end up just being superficial wrappers over code. Superficial in the sense that they only hide surface-level complexity (e.g., reducing the visual volume of large code blocks).

6. Catering interfaces to novices often hampers experts.

There seem to be a lot of trade-offs. I don't know if these are laws per se, but they seem difficult to break.

What interests me particularly are new ways to create general-purpose programs using methods that are more efficient and more intuitive, but it seems like a really difficult task bound by near-inescapable trade-offs.


I cannot for the life of me understand why Tailwind gets so much exposure and generates so much discussion on HN. It is only barely interesting. As far as I know, all it does is wrap CSS rules into a form that can be used in the style of inlined CSS.

It's not new, interesting, difficult, or complex. I don't have much opinion on whether it's actually good or not (I happily use modular CSS), but the subject keeps popping up over and over.


Next.js recommends static generation by default and is geared towards that.

> We recommend using Static Generation (with and without data) whenever possible because your page can be built once and served by CDN, which makes it much faster than having a server render the page on every request.

If you need dynamic content, they recommend server-side rendering, and lastly client-side rendering only if the page is unaffected by SEO and requires a lot of in-page updates.

https://nextjs.org/learn/basics/data-fetching/two-forms


Shopify has really strict guidelines and a strict review process for official themes. If you can get accepted it's probably lucrative, but I wouldn't put most of my eggs in that basket. Whether or not you're accepted seems to be somewhat arbitrary, or at least too subjective.

You can release a theme on ThemeForest without all the hubbub, but you're competing with a lot of developers, and there's tons of mediocre stuff, so you'll have to stand out in some way.


So far what I've been doing is keeping my webpack config as simple as possible (< 40 lines) and just relying on default behavior. I know I'm losing out by not bothering with some of the more intricate configuration properties like caching, etc.

But one of my big pet peeves is when you clone a project and the installation instructions say all you need to do is run make install or npm i, but in reality it requires 20 Google searches and an hour of banging your head to get the project running, and even then you end up with 50 cryptic warning messages in your terminal, so you don't even know if you did it correctly.

With a very simple webpack config you might lose out on optimizations, but at least you can get just about anyone running a project locally, and if things go awry you can generally pinpoint the problem to a specific line of configuration.


I've found that a slightly longer (~100 lines) webpack config allows me to use all the caching and fancy tricks I want and is still just as debuggable. I think the main thing is setting it up yourself so you know what all the plugins/settings do.


The key is don’t abstract it. No function builders, one big object with inline conditional spreads.


Caching was a major disappointment for me. DllPlugin is a nightmare to set up, and other caching plugins I tried don't really speed things up. What eventually worked for me is HardSourceWebpackPlugin [1] - two lines added and I went from 2min to 15s (with a tradeoff of a slightly longer initial build).

[1] - https://github.com/mzgoddard/hard-source-webpack-plugin


Not sure on the specifics, but my understanding is that the major improvements in Webpack 5 are related to caching.


I got the same message. Still, having faster builds with one additional plugin without configuration needs on projects locked on 4 or waiting out the transition is a plus in my book :).


I just use laravel mix, as a wrapper around webpack, and then I can extend that if I need to for certain things, like maybe aliases, or something.


Why not just use npm at that point?


Colleges are one of the few structures where the normal relationship between a provider and a customer is inverted such that it would appear that the person paying for the service is under the obligation to perform for the person receiving the payment.

To give an example: the student is the customer, paying 30k a semester, yet if they go to the bursar's office to get some record corrected, they are treated as if they are the hindrance in the system, by way of long lines and apathetic clerks.

Another example: a student comes into class 10 minutes late each day because the class overlaps with his job. The professor or TA reprimands the student for essentially putting their needs first, despite the fact that the student is almost directly paying the professor's salary.

Another example: student falls asleep in class. Professor stops the class to embarrass the student.

In all these cases it's as if the professor is paying the student to be a part of the school and the obligation of performance is on the student.

In reality, the institutional responsibility of providing value should be on the professor and university. The individual responsibility of performance should be on the student. And yet the roles are the same as if the college were a public school, as if the professor and the school do not have a strong and direct responsibility to their customers/patrons, as if it were the student's responsibility to validate the school and teacher.

Schools really are a "unique" form of business. The virus has uncovered what a lot of us knew all along: namely, that they don't provide value anywhere close to what their price tag suggests. They perpetuate because their customers are essentially still children, people being routed through society by society, who have not had the experience of bearing any real, consequential responsibility in the world. People who don't need to perform any cost-benefit analysis because the decision is already made for them and the cost is usually hoisted onto someone else. Otherwise, few sane and independent people would willingly enter into a contract where you pay so much to be treated like an agent-less underling.

Like a lot of large organizations, universities exist so that they continue to exist. It's foolish to think that they have the interests of individuals in mind.

Or at least, that has been my regretful experience: excitedly waiting to be spit out of high school to learn the depths of programming in a place that sincerely respects learning, and instead spending 4 years in yet another purgatory before the real world.

At least I never have to hear the words "is this going to be on the final?" again.

