Why would anyone need JavaScript generator functions? (jrsinclair.com)
247 points by lewisjoe on Nov 7, 2022 | 177 comments



You can use One Weird Trick with generator functions to make your code "generic" over synchronicity. I use this technique to avoid needing to implement both sync and async versions of some functions in my quickjs-emscripten library.

The great part about this technique as a library author is that unlike choosing to use a Promise return type, this technique is invisible in my public API. I can write a function like `export function coolAlgorithm(getData: (request: I) => O | Promise<O>): R | Promise<R>`, and we get automatic performance improvement if the caller's `getData` function happens to return synchronously, without mystery generator stuff showing up in the function signature.

Helper to make a function that can be either sync or async: https://github.com/justjake/quickjs-emscripten/blob/ff211447...

Uses: https://cs.github.com/justjake/quickjs-emscripten?q=yield*+l...
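
A minimal sketch of the idea (not the actual quickjs-emscripten helper; `runMaybeAsync` and the sample callback are hypothetical names): drive a generator whose yielded values may or may not be Promises, and only go async when a Promise actually shows up.

  function runMaybeAsync(generator) {
    function step(input) {
      const { value, done } = generator.next(input);
      if (done) return value;
      // Only pay the Promise cost when the yielded value is actually a Promise.
      if (value instanceof Promise) return value.then(step);
      return step(value);
    }
    return step(undefined);
  }

  // The generator yields whatever getData returns; callers with a synchronous
  // getData get a synchronous result, others get a Promise.
  function coolAlgorithm(getData) {
    return runMaybeAsync(function* () {
      const data = yield getData({ id: 1 });
      return data;
    }());
  }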


I used a very similar pattern in a web app, in which the user creates objects by clicking in different places. There is state built up by each click, and it is sometimes convenient to go back one or several steps. So you have something like

  const input1 = yield
  // arbitrary side effects, UI updates etc.
  const input2 = yield
  // ...
Each time the user makes an input, it is accumulated in a list, and sent back to the generator function. When the user decides to cancel the last step, a generator is recreated and rerun, with each yield sending back the user input that was stored in the list, except the last one. This requires writing the generator function in a particular way (you have to avoid setting "external" state), but it works and is more flexible than automata, I think.
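
A rough sketch of that replay step (the names here are hypothetical, not the app's actual code):

  // Recreate the generator and feed back every recorded input except the last,
  // which implements "go back one step".
  function undoLastStep(createTool, recordedInputs) {
    recordedInputs.pop();
    const gen = createTool();
    gen.next();                          // advance to the first yield
    for (const input of recordedInputs) {
      gen.next(input);                   // side effects re-run deterministically
    }
    return gen;                          // now paused, waiting for the next user input
  }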


Babel does this, using a library called gensync. I have a writeup which goes into more detail: https://writing.bakkot.com/gensync

gensync instead creates two functions, one sync and one async, which is probably a more familiar API for consumers of your code.


This seems a lot more complicated than something like this:

    if (!(value instanceof Promise)) {
        value = Promise.resolve(value)
    }
And then you write the rest of your code as if it is async.


It's very easy to make sync code async. Just add an `async` to the function declaration. You don't need the `instanceof` check or `Promise.resolve` - you can `await` any type of value; it'll get unwrapped as a Promise if it has a `then` method, or otherwise just give you back the value. See [MDN for await].

If that's okay for your code, then go for it. However, downgrading a sync function to be async has serious performance implications - turning a trivial sync function call in a tight loop into an awaited async function call will make your loop 90%+ slower [benchmark]. You're also going to allocate much more memory for Promise objects. The code I linked above is from a library implementing a sandboxed Javascript VM. If we forced all calls into the guest sandbox, or from the guest sandbox back to host functions like `console.log`, to be async, the VM would be unusable for many applications.

[MDN for await]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[benchmark]: https://jsbench.me/y0la7auape/2


I see what you’re saying. For your benchmark code, it doesn’t look like you’re using a generator? How does the await compare to a generator? I only ask because the non-async example looks like there are a million ways the runtime could optimize it that might not apply in practice. (I’m on my phone now, otherwise I would try it.)


OP's technique is useful for a consumer of another function, letting it consume that function in a synchronicity-agnostic manner. The usage of OP's technique lives in the implementation of the consumer.

In other words, this technique allows a library's implementation and interface to be synchronicity-agnostic, but it says nothing about the library user. If the library user likewise makes use of OP's technique, the user's code also stays synchronicity-agnostic; otherwise it ends up tied to being either synchronous or asynchronous (parametrised to the synchronicity of the library call).


Not as if. You’re making a synchronous call async. And so it keeps spreading through your code. OP's trick is certainly more complicated, but it looks like a smart way to support async cases while still letting synchronous calls stay synchronous.


Hmm I see. So the value is in creating a kind of blocking promise. I can see how that might be useful in certain circumstances, but using blocking functionality isn't something you'd want to rely on in general.


No, the promise doesn't block, it just stays a promise. However, for synchronous calls, the Promise overhead is obviated.

In other words, the performance gap between synchronous and asynchronous calls is the gap between a plain function call and the Promise machinery (i.e. the event loop). A function call costs a stack frame allocation, which is non-zero (recall the days when assembly programmers dismissed languages like C as 'too slow' for allocating on every call). An awaited Promise, on the other hand, means pushing a closure onto the event loop, exiting the current turn, waiting for the next tick, popping the closure back off the event stack, and only then creating the function call's stack frame.

For inner-loops, the difference can be 20ms vs 20s.


I don't get it; the signature still returns `Type | Promise<Type>`, so how does that promise not spread through your code?

Still, the other comment about using await on plain values seems like a better option than the parent comment's.


c-baby’s suggestion wraps a non-promise in a promise, so I don’t see how that’s still able to return a non-promise.

Awaiting plain values can only be done inside an async function, which means it returns a promise, which means you have to wait for the event loop to get the value out of there.

Generally I’m not aware of any other (reasonably ergonomic) way to write a single code path that can work with both sync and async input without itself always giving async output.


The problem with this is that it breaks some tooling (async stack traces in Chrome dev tools for example).


Generators are a fairly natural way to write recursive descent parsers. The alternatives are either to parse everything and return it in one big structure (which can be awkward for large documents) or to supply Visitors (which works, but often doesn't match your mental model).

It's nice to be able to write "a lexer just yields a stream of tokens" and "expr ::= term op expr" as:

  function* expr() {
    const x = term();
    const operator = op();
    const y = expr();
    yield apply(operator, x, y);
  }
Backtracking takes a little setup, but overall it's a very elegant way to write code.

Not many people really need to write parsers, but even if you're using a black box from somebody else, it can be fairly elegant to use if it supplies it as an AST generator or result generator.
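
For the "a lexer just yields a stream of tokens" half, a minimal sketch (with a made-up token set of numbers and operators) could look like:

  function* lex(input) {
    const token = /\s*(\d+|[+*()-])/y;  // sticky regex: match from the current position
    let m;
    while ((m = token.exec(input)) !== null) {
      yield m[1];
    }
  }

  console.log([...lex("1 + 2 * 3")]); // ["1", "+", "2", "*", "3"]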


+1 There is a lot of code that is easier to write in "push style", where the code runs blocking and "pushes" results (either to a queue, a callback, or a result array that is returned at the end), but it is better for the consumer if it is "pull style", where they can process items as they receive them and can do streaming/incremental parsing. Language-supported generator functions/coroutines make it very easy to write in a "push style" while being possible to call in the "pull style".


An example of what you’re describing in a different domain is how we are using channels between tasks in FreeRTOS in our embedded firmware. Other tasks just push commands or data into a channel that the consumer can pull at its leisure (modulo the size of the channel, of course; we have fun constraints!)


In my own lexer in JS, I considered using a generator but instead used a class with an index property. In your experience, what advantages does a generator have over that approach?


The big advantage (and possibly disadvantage) is that the class properties (i.e. the internal state of the iterator/lexer) are implicit, which means you don't need to worry about setting them and restoring them each time. For example, you could write something like this:

    let idx = 0;
    while (true) {
        switch (input[idx]) {
            case '"': {
                const [token, length] = lexString(input.slice(idx));
                yield token;
                idx += length;
                break;
            }
            case '...': {
                // other token types...
            }
        }
    }
Here, we don't need to worry about saving the state of the index, because it's implicitly "saved" at the yield, and we know we'll end up back on the line after the yield once control is returned to the generator.

In this case, where the only relevant state is the index, that's not that much of an advantage, but it could be more of an advantage if you were writing a generator function with more internal state - for example, a parser using a state machine to determine which tokens are valid next.


Not just for parsers, you can walk any tree structure with a generator. For example turning a binary tree into a sorted iterable.
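
For instance, an in-order walk of a binary search tree as a generator yields its values in sorted order (a sketch, assuming nodes shaped like `{ value, left, right }`):

  function* inOrder(node) {
    if (node === null) return;
    yield* inOrder(node.left);   // everything smaller
    yield node.value;
    yield* inOrder(node.right);  // everything larger
  }

  const tree = {
    value: 2,
    left: { value: 1, left: null, right: null },
    right: { value: 3, left: null, right: null },
  };
  console.log([...inOrder(tree)]); // [1, 2, 3]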


A bunch of Notion's text editing algorithms need to walk up and down our tree of blocks in a particular order. We implement those walks as generators.


That's really cool. Do you have any further more detailed examples of writing such a parser using generators? It's very coincidental because I've been learning about this topic recently.


Not sure about the timtam example. Couldn't this:

  const flow = (...fns) => x0 => fns.reduce(
    (x, f) => f(x),
    x0
  );

  function slamTimTamsUsingFlow(timtams) {
    return timtams
        .slice(0, MAX_TIMTAMS)
        .map(flow(
            biteArbitraryCorner,
            biteOppositeCorner,
            insertInBeverage,
            drawLiquid,
            insertIntoMouth));
  }
Just be written as:

  const slamTimTams = timtams => 
     timtams
        .slice(0, MAX_TIMTAMS)
        .map(timtam => timtam.biteArbitraryCorner()
            .biteOppositeCorner()
            .insertInBeverage()
            .drawLiquid()
            .insertIntoMouth());
Feels much more concise and readable than the flow example


IMO even more readable if we ditch the functional looking stuff:

  function slamTimTam(timtam) {
    timtam.biteArbitraryCorner();
    timtam.biteOppositeCorner();
    timtam.insertInBeverage();
    timtam.drawLiquid();
    timtam.insertIntoMouth();
  }

  function slamTimTams(timtams) {
    const slammedTimTams = timtams.slice(0, MAX_TIMTAMS);

    for (const timtam of slammedTimTams) {
      slamTimTam(timtam);
    }

    return slammedTimTams;
  }


> functional looking stuff

That doesn't look "functional" at all. The "slamTimTam" function is modifying an object it's been given. Modifying an object that you've taken as an argument should almost never happen anywhere in any codebase, much less a functional one.

Although it's not totally your fault: TFA seems completely confused about what "map" actually does, itself. I can't really tell whether they know what immutability is or not.

In fact one of my biggest pet peeves is "fluent" style code that modifies the object it's being called on. D3 in JavaScript is a perfect example of this abominable perversion of that "functional" style.


> That doesn't look "functional" at all.

Yes... because he ditched it.

And further - there isn't really anything wrong with modifying an object. Functional programs are nice because they remove lots of extraneous state, but there are many situations where you still have to track state somewhere - doing it in a contained object that is changed is fine.

The issue (particularly in JS) is when you allow assignment by reference, and not by copy - in that case modifying an object can be dangerous because other parts of the code may be holding onto the same reference, and you've accidentally introduced subtle shared state.

Ideally - all users would be aware, and would make sure that assignment only happens with a call to something like copy()/clone() but the language doesn't really have the tooling to enforce this (unlike many other languages ex: c++/Rust/others)


Yes. Furthermore I am not really even sure what the motivation was in the first place. I have found generators to be most useful in my career in exactly one circumstance--when I want to abstract some complicated batching, filtering, or exit condition logic out of my main processing loop. As a specific example, generators are great for taking an input stream and returning the first 10000 packets matching '^foo.*' in batches of 10.

In the article, it seems like we are just looping over a set of 11 cookies and doing exactly the same thing to all of them. Not sure what I'm missing. The stuff after that regarding infinite sequences is pretty good though!
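
A sketch of that kind of helper (hypothetical names), which keeps the filter/limit/batch bookkeeping out of the main loop:

  function* batchedMatches(packets, pattern, limit, batchSize) {
    let batch = [];
    let taken = 0;
    for (const packet of packets) {
      if (taken >= limit) break;
      if (!pattern.test(packet)) continue;
      batch.push(packet);
      taken++;
      if (batch.length === batchSize) {
        yield batch;
        batch = [];
      }
    }
    if (batch.length > 0) yield batch;  // flush any partial final batch
  }

  // for (const batch of batchedMatches(stream, /^foo/, 10000, 10)) { process(batch); }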


> In the article, it seems like we are just looping over a set of 11 cookies and doing exactly the same thing to all of them. Not sure what I'm missing.

You're missing that we're not just looping over a set of 11 cookies and doing exactly the same thing to all of them. We're streaming over a lazy sequence of cookies and doing the same thing to them until we stop (in the example when they hit 5 cookies eaten instead of eating them all and, presumably, becoming ill).

Generators act like streams, and are useful in any place you might see this pattern:

  value, state = next_value(state)
  -- alternatively if we can mutate the state directly --
  value = next_value(state)
But they permit you a greater deal of flexibility about the way "state" mutates than a typical state structure might (or at least greater ease of use).


Do js generators resume from a yield point? Reading this thread I see no mentions of the fact that generators usually(?) resume from whatever branching-looping state they were at yield, which is hard to simulate in general. A simple for(i){yield} - yes, just store an index somewhere and simulate to “resume” at it in your stream-like closure. A complex stateful algorithm with lots of cached lexical context - well, manageable with a corresponding data structure, but more expensive in terms of mapping it to a state table.

The whole point of a generator is its stack frame and its “instruction pointer”. You can implement a state machine right in regular code via cheap primitives. Otherwise the difference is just syntactical, afaiu.


I mean, you could look at the examples in the article and see that, yes, they resume from a yield point. But here's a simple illustrative example:

  function* bar() {
    yield 3;
    console.log("Resumed after the 3")
    yield 4;
    console.log("Resumed after the 4")
  }
  function* foo() {
    yield 1;
    console.log("Resumed after the 1")
    yield 2;
    console.log("Resumed after the 2")
    yield* bar();
  }

  for(let n of foo()) {
    console.log(n);
  }
Output:

  1
  Resumed after the 1
  2
  Resumed after the 2
  3
  Resumed after the 3
  4
  Resumed after the 4


It is a weird example. But that transform does not work generally. Consider if you want to stop when insertIntoMouth() returns a particular value. Or if there is some filter() step, so that you don't know that `n` input items will map to `n` output items.

The difference here is push vs pull (also called eager vs lazy). It is often easier to write pull computations that only do the work that is actually needed.


Yup. I found this article difficult to read.


The functional approach should tend to be more composable though.


Redux-sagas[0] makes great use of generators. I found it a fantastic tool if you're already in the redux ecosystem and have an application that sprawls enough to benefit. It's great when you have to manage multiple process lifecycles. A single "saga" can summarize an entire service lifespan in just a few lines of code. Good candidates include a socket connection, a user session, analytics, etc. I desperately want to use it with my current project (streaming video) but our code's too mature to introduce such an architectural change.

The downsides are:

- a quirky syntax that needs learning, and is of the "loose with semantics" style - like Rails-esque REST's play with HTTP methods

- it's hard to test (despite what the documentation claims). It's highly declarative, and such code seems hard to test.

[0] http://redux-saga.js.org/


> it's hard to test (despite what the documentation claims). It's highly declarative, and such code seems hard to test.

redux-saga member here. If we concede that mocks are inherently difficult to get right and maintain, often requiring libraries or testing frameworks to do a lot of the heavy lifting, then I would argue testing sagas is a breeze in comparison.

The real magic behind redux-saga and its use of generators is that the end-user functions do not have side-effects at all; they merely describe what those side-effects ought to look like. The library activates the side-effects in its runtime; the end-user just yields plain JSON-like objects.

So in order to test sagas, all you need is to run the generator and make sure it yields the things you expect it to yield.

https://bower.sh/simplify-testing-async-io-javascript
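
A minimal sketch of what such a test looks like (the saga, action types, and `api` module here are made up for illustration):

  import assert from "node:assert";
  import { call, put } from "redux-saga/effects";

  const api = { fetchUser: (id) => fetch(`/users/${id}`).then((r) => r.json()) };

  function* fetchUserSaga(action) {
    const user = yield call(api.fetchUser, action.id);
    yield put({ type: "USER_LOADED", user });
  }

  // The test never performs the network call; it only compares the yielded
  // effect descriptions (plain objects) against what we expect.
  const gen = fetchUserSaga({ id: 42 });
  assert.deepEqual(gen.next().value, call(api.fetchUser, 42));
  assert.deepEqual(
    gen.next({ name: "Ada" }).value,
    put({ type: "USER_LOADED", user: { name: "Ada" } })
  );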

Having said all that, because the react world has heavily shifted towards hooks -- which explicitly make the view layer impure and hard to unit test -- a bunch of tooling has come out to prefer integration testing. In particular, testing user flows and evaluating what the user sees. In this paradigm, testing individual sagas or even react components is not nearly as valuable.


> redux-saga member here.

Thanks for responding!

> So in order to test sagas, all you need is to run the generator and make sure it yields the things you expect it to yield.

This was the crux of what I found frustrating, actually. A simple saga with one or two steps was ok. Longer ones were painful. My test scripts ended up with these patterns:

- looking 80% like my application code

- needing to mock almost every step (every yield's output and input) - which bifurcates quickly if your sagas branch

- requiring digging into the emitted call/take/etc payloads (which seems the opposite of what you want when using a utility - I'd prefer to "let the utility just do its job" in the tests)

- brittle tests - when I changed a saga, I usually needed to change the test, rather than being able to leave the tests alone to prove my refactor works.

None of this put me off using redux-sagas. The benefits for the application code were worth it, IMO.


Hey thanks for the thoughtful response.

I agree with everything you mentioned here. I'd love to continue to chat with you about how to make testing sagas better.

If you'd like, it would be great if we could move this convo to https://github.com/redux-saga/redux-saga/discussions/2337


First of all, sagas are an amazing concept that I still cherish even if I don’t use them anymore (hype not there?).

Regarding the hooks comment I’m not sure I follow. Surely components are also idempotent (if you skip useEffect) and should be replayable - that’s at least how I think react refresh can manage to keep state intact during file changes in development.

Anyway I still very much value the concepts behind sagas so thanks for great work.


> Regarding the hooks comment I’m not sure I follow. Surely components are also idempotent (if you skip useEffect) and should be replayable - that’s at least how I think react refresh can manage to keep state intact during file changes in development.

We used to have the concept of smart and dumb components, with dumb components we could test if we provide X props we could determine Y result and write tests expecting Y to be true. Now with the advent of hooks, side-effects are much more likely to be in all of our components. This makes react components impure and difficult to test as individual units. This movement started primarily with the shift to hooks.

> Anyway I still very much value the concepts behind sagas so thanks for great work.

Thanks!


I had to rescue a saga based project that went badly off the rails and it was some of the hardest code I’ve ever had to debug in any language. Code flow was very difficult to reason about and forget about trying to use stack traces.


Yeah I had to rescue a buggy React Native app where the devs put every backend call into sagas.

It was a pain to debug and I’ve since almost entirely ripped sagas out. It’ll be a good day when I can delete it as a dependency.


Be careful. You will summon acemarke with this comment.


HAH! and ironically I read this literally 10 minutes after you posted it :)

dons maintainer hat

<ObligatoryResponse>

We've been generally recommending against use of sagas for years - they're a power tool, and very few apps need that. They're also a bad fit for basic data fetching.

Today, Redux Toolkit's "RTK Query" API solves the data fetching and caching use case, and the RTK "listener" middleware solves the reactive logic use case with a simpler API, smaller bundle size, and better TS support.

Resources:

- https://redux.js.org/tutorials/essentials/part-7-rtk-query-b...

- https://redux-toolkit.js.org/rtk-query/overview

- https://redux-toolkit.js.org/api/createListenerMiddleware

- https://blog.isquaredsoftware.com/2022/06/presentations-mode...

</ObligatoryResponse>


I'll use your nice response for an actual question/remark, gotcha! =)

Recently I had a look at the kubeshop-dashboard repo[1] and their use of the RTK Query API[2]. When I write the boilerplate for any SPA nowadays, I usually like to merge the fetching logic with the lib-specific notification/toast methods, so that by default the user sees something whenever an ongoing fetch hits a warning or error timeout. Meaning:

- every new fetch would start a timer

- after 10secs a warning-notification is shown "a fetch takes longer than expected..."

- and after 30secs the AbortController signals the cancelling of the ongoing fetch and an error-notification is shown "fetch took too long. click to try again."

The implementation of react-query, its "hook-yfied" nature, makes it super easy to wrap it and merge it with the component-lib to create such a thing. I just need to wrap its provided hooks (useQuery, useMutation) with "hook-creators" (I usually call them createQueryHook and createMutationHook) and don't need to dive into any of its implementation specific details. But createApi, as provided by RTK Query API, makes this quite a bit harder, or so it seems to me at least. How would you wrap createApi to provide such a functionality for every fetch by default?

[1]: https://github.com/kubeshop/testkube-dashboard

[2]: https://github.com/kubeshop/testkube-dashboard/tree/main/src...


Hmm. Lenz ( @phryneas in most places) could speak more to some of this, but I don't think RTKQ really supports the idea of "canceling" a fetch in general. Can you give some specifics about what that means in your case?

As for the timers and toasts go, I can think of two possible approaches off the top of my head.

First, you could provide a custom `baseQuery` function [0] that wraps around the built-in `fetchBaseQuery`, does a `Promise.race()` or similar, and then triggers the toasts as needed.

Another could be to use the RTK "listener" middleware [1] to listen for the API's `/pending` actions being dispatched. Each pending action would kick off a listener instance that does similar timer logic, waits for the corresponding `/fulfilled` or `/rejected` action, and shows the toast if necessary.

If you could drop by the `#redux` channel in the Reactiflux Discord [2], or open up a discussion thread in the RTK repo, we could probably try to chat through use cases and offer some more specific suggestions.

[0] https://redux-toolkit.js.org/rtk-query/usage/customizing-que...

[1] https://redux-toolkit.js.org/api/createListenerMiddleware

[2] https://www.reactiflux.com


> Can you give some specifics about what that means in your case?

react-query has a default behavior to cancel a fetch when the component unmounts (AFAIK), eg. the user changes to another view and the data of the previous view aren't needed anymore. I prefer to only have those fetches pending which are actually needed and seem likely to succeed, as otherwise my SPAs would just add unnecessary load on the API gateway. I specifically had such a case when the backend team was in transition to a microservice architecture, hence the timeouts.

But thanks, will join the discord then after I created a repo to play around.


What's weird is that I still firmly believe that sagas is one of the sanest ways of organizing an application. I built a sort of boilerplate project that shows how I use it[1] but the TL;DR is that I can wrap all of my functionality into nice little sagas and manage the state very easily with lenses. Handling data fetching isn't too complicated either [2] but I'm also not doing any sort of fancy caching in this example.

[1]: https://github.com/MCluck90/foal-ts-monorepo/blob/main/app/c...

[2]: https://github.com/MCluck90/foal-ts-monorepo/blob/main/app/c...


I'm 1000% with you on this. If you're dealing with a bunch of operations that need to happen in a very specific order, there's really nothing else out there that comes close. I'm able to look at saga code I've written months (or years!) ago and figure out what's going on in a short amount of time without having to jump around.

I used sagas pretty heavily in an app I built to transfer data between Clockify and Toggl, which required that data be fetched/loaded into state in a very specific order[1]. You can't beat sagas for clarity.

[1]: https://github.com/mikerourke/transfermyti.me/blob/main/src/...


Yeah, sagas can be useful - not saying they aren't.

But on the other hand, I've seen plenty of codebases and talked to lots of Redux users where sagas turned into an impenetrable spaghetti mess of actions and events going everywhere, and it was impossible to trace what was going on.

There's also a lot of additional boilerplate you need to write to use sagas. Redux's early reputation for "boilerplate" was deserved, and there's a lot of reasons why that happened - patterns shown in the docs, things like action creators and string constants, immutable updates with spread operators, etc. While they weren't _required_, sagas were definitely _a_ contributing factor to that reputation.

We've pushed to erase the "boilerplate" concerns and fix that reputation with Redux Toolkit, and so a part of that has been encouraging people to _not_ use sagas unless absolutely necessary. I wrote up a post a while back on reasons why we opted to focus on thunks instead of sagas in RTK [0], and the "Evolution of Async Logic" talk [1] (which I need to turn into a docs page) covers our recommendations today.

If sagas do work well for you, that's great! But we really do think they _aren't_ the right choice for most Redux apps and users.

[0] https://blog.isquaredsoftware.com/2020/02/blogged-answers-wh...

[1] https://blog.isquaredsoftware.com/2022/05/presentations-evol...


Agreed that sagas can turn into spaghetti and probably aren't a great choice for most Redux apps. Just like everything else in this industry, sometimes you should use stuff, sometimes you shouldn't, it depends. I did want to mention that I've been using Redux for over 6 years now and I really appreciate the improvements you and the rest of the contributors have made. Keep up the good work and thanks for being awesome!


The only off-putting thing in redux-saga for me was the overuse of verbs in the API: `put, call, take, takeEvery, all`. That makes me somehow a little tense. I don't agree with the testing part though; we used helpers like the `runSaga` method to get this done easily, and a value can also be passed into the generator using `generator.next(value)`, if I remember correctly.


Async generators similarly open up different ways to program complex / multi step user interactions. Instead of creating “state machine” objects that mutate on every input, you can have async functions aka coroutines that iterate over user inputs, making the flow of an interaction much more explicit.
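
A rough sketch of the shape this takes (hypothetical names, browser DOM assumed): turn events into an async iterator, then write the interaction as straight-line code.

  async function* clicks(element) {
    while (true) {
      // Listener attaches on demand; a production version would buffer events.
      yield await new Promise((resolve) =>
        element.addEventListener("click", resolve, { once: true })
      );
    }
  }

  // A two-step interaction reads top to bottom instead of as a state machine.
  async function drawRectangle(canvas) {
    const events = clicks(canvas);
    const { value: start } = await events.next(); // first corner
    const { value: end } = await events.next();   // opposite corner
    return {
      x: start.offsetX,
      y: start.offsetY,
      w: end.offsetX - start.offsetX,
      h: end.offsetY - start.offsetY,
    };
  }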


I'm a bit surprised that database query pagination isn't directly mentioned as one of the use cases. The (async) generator wraps the paginated calls to the DB and yields pages or individual documents / rows. It's about as vanilla-CRUD-API scenario as I can think of.
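
For example, a sketch along these lines (the `db.fetchPage` API is made up):

  async function* queryAll(db, sql) {
    let cursor = null;
    do {
      const page = await db.fetchPage(sql, cursor); // returns { rows, nextCursor }
      yield* page.rows;
      cursor = page.nextCursor;
    } while (cursor !== null);
  }

  // for await (const row of queryAll(db, "SELECT * FROM users")) { ... }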


Agreed! We do this in Notion's public API library:

    for await (const block of iteratePaginatedAPI(notion.blocks.children.list, {
      block_id: parentBlockId,
    })) {
      // Do something with block.
    }
Implemented here: https://github.com/makenotion/notion-sdk-js/blob/90418939a90...


We've made every node stream async iterable, and we also support all the iterator helpers. If whatever you're using is a Node stream - this works and it exposes an async iterable stream :)
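
For example, any Readable stream can be consumed with `for await` in modern Node (assuming a local data.txt exists):

  import { createReadStream } from "node:fs";

  for await (const chunk of createReadStream("data.txt")) {
    console.log(chunk.length); // each chunk is a Buffer
  }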


My favorite use case for generators (I was using C# at the time, but it applies equally well to any language with yield) was for implementing a turn-based game loop:

http://journal.stuffwithstuff.com/2008/11/17/using-an-iterat...



I initially liked the idea of generators, but after years of trying to find ways to apply them, I just haven't found a use case where they were more sensible than using existing logic and looping constructs. They could be useful where they would provide less overhead than constructing arrays for the same purpose, but that doesn't mean the same thing can't be achieved without either of those things. It's good in theory, but hasn't been useful to me in practice.


Imagine you have function that returns elements. In order to return each element, you need to do some time-consuming calculation. You don't know how many elements the users of your function will need. Some may need 13 items. Some the first 500. Others might be interested in knowing only the first item.

Let's say that your sequence has a maximum size of 1000. If you were to return an array, you'd need to construct the full array each time, even if the code that calls your function only needs 1 item.

Using a generator, you can write the code once, and it is performant across different use cases.
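
A sketch of that setup (`expensiveCalculation` is a stand-in for whatever the real work is):

  function* expensiveSequence() {
    for (let i = 0; i < 1000; i++) {
      yield expensiveCalculation(i); // hypothetical costly function
    }
  }

  // Take just the first n items; the remaining items are never computed.
  function take(iterable, n) {
    const out = [];
    if (n <= 0) return out;
    for (const item of iterable) {
      out.push(item);
      if (out.length >= n) break;
    }
    return out;
  }

  const firstThirteen = take(expensiveSequence(), 13);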


I could write this against a few other replies but I will write it here. Moreover, I don't think I'm going to say anything that hasn't already been covered by other people, but I am going to attempt to distil down the arguments.

- The benefits of generators are, in a large part, the benefits of using iterators

- What are the benefits of using iterators? As you say, one benefit is that calling `next` on an iterator performs a single unit of work to get the next value. This lets you avoid e.g. allocating intermediate arrays, lets you do infinite streams, etc. Compare that to calling `map` on a list... you have to consume the entire list.

- A second benefit of iterators is that of scope. When I call `next` on an iterator I get the next value in the scope of the caller. This is particularly useful in a language with function colouring, because use of the `next` value can match what is required of the caller. E.g. the caller may want to await a function using some field of the `next` value and this is totally fine. Compare that to calling `map` with a lambda and wanting to `await` inside the lambda...the problem is you are now in the wrong scope and can't `await`.

- So where do generators come in? Well they are just syntactic sugar that will generate the state machine that you would otherwise have to implement by hand in your iterator. In that sense you don't need generators at all...

- BUT, with generators you can do things that would technically be possible with iterators but would be so clumsy to implement (I'm thinking here of coroutine libraries) that having them as a distinct feature makes sense.


There's two places I've used them in the past year where it was a natural fit:

* Querying solr for a massive amount of data using cursors, where the api and cursor use is hidden so I only have to process the result as a simple loop.

* Pulling csv data from disk and grouping by one of the columns, so the main loop is simply looping over groups without having to do that logic itself.

One of the key points in both versions is that the data can be too big to pre-process, and the system would run out of memory if I tried to load it all in at once.


Generators are great for building and consuming lazy iterators, and for building pipelines of lazy iterators. Lazy evaluation in pipelines can interweave work and can be more efficient than immediate evaluation.
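
For example, a small lazy pipeline built from generators, with no intermediate arrays allocated between the stages:

  function* map(iterable, fn) {
    for (const x of iterable) yield fn(x);
  }
  function* filter(iterable, pred) {
    for (const x of iterable) if (pred(x)) yield x;
  }

  const evenSquares = filter(map([1, 2, 3, 4, 5], (x) => x * x), (x) => x % 2 === 0);
  console.log([...evenSquares]); // [4, 16]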


Because it's an awesome and easy way to create iterables.

    class FreakyCollection {
      [Symbol.iterator]() {
        return function*() {
          // for (...) { ... }
          // iterate over your freaky collection however you please,
          // yielding elements one by one
        }();
      }
    }


This can be condensed slightly, you don't need the inner function literal. In Typescript:

    class MyCollection<T> {
     *[Symbol.iterator](): IterableIterator<MyCollectionEntry<T>> {
      for (const thingy of this.thingies) {
       yield { ... }
      }
     }
    }


... also custom and parametrized iterators:

    class FreakyCollection {
       // ...
       *even() {
          // for ... iterate, skipping the odd ones and yielding the rest one by one
        }
       *having(quality) {
          // for ... iterate and yield just the items that have some quality
        }
    }
and usage then is as simple as:

    for (let item of freakyCollection.having(42)) {
      // ...
    }


Python is where I developed my (naturally flawed but possibly useful) mental model for generators: you are "pulling" data out of them.

Instead of pushing data into a processing grinder, watching the sausage links pour out the other side, whether you're prepared or not, you're pulling each sausage on-demand, causing the machine to move as a consequence of wanting an output.

I'm sure smarter people appreciate generators more than I do. They're useful for co-routines and other wizardry. But I personally just find the mental model more useful in some cases, especially knowing it keeps memory and CPU usage more in-line with my needs. Doubly especially if my generator may not have an upper bound.


I mean that's fine, but the language decision that makes my small brain break is "why call it a function, instead of giving it a completely different name to represent the completely different thing that it is?"


I never thought of that, and now I'm asking the same question.

Perhaps there's history or mathematical explanations to this. But, yeah, "A function that can temporarily pause itself and be returned to later" is a possibly confusing overloading of the concept of a function.


I honestly think the opposite.

When I learned generators from Python, the understanding of “a function that yields instead of returns” was very easy to grasp. And considering I already knew how functions worked and what their syntax was, this was just an extra step (rather than starting from scratch). ECMAScript has taken a similar path to Python here.

This could also be because my mental model is they are just functions that have the ability to return to the stack at a later point in time, rather than “running to completion”.


Yup, I can see that! One issue I have is that my model of the stack is that "it's a literal _stack_ of frames. You pop them one by one, executing them."

What happens to a frame that you pause? Does it get set aside? Is it still on the stack?


This article[0] and a clarifying comment[1] answer the question for JS. I think it's the same for Python. Apparently your model (which was mine, too) is out of date!

[0] https://hacks.mozilla.org/2015/05/es6-in-depth-generators/

[1] https://hacks.mozilla.org/2015/05/es6-in-depth-generators/#c...


This kind of thing shows up around several other related PL features, so much so that it has a name: https://wiki.c2.com/?SpaghettiStack


Well maybe it's my small brain but I remember in math class it was very important that "a function has only one return value", which probably is part of why I was confused.


Technically generators still only have one return value: an Iterable. That Iterable represents an object with a "next()" callback and you can entirely write by hand Iterables as simple dumb objects if you like. (The generator function syntax makes it a lot easier to build complex state machines, but most languages aren't going to stop you if you prefer to write your state machine by hand.)
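
For comparison, a hand-written iterator object that does the same job as a tiny generator (a sketch):

  // Equivalent to: function* count(n) { for (let i = 0; i < n; i++) yield i; }
  function count(n) {
    let i = 0;
    return {
      next() {
        return i < n ? { value: i++, done: false } : { value: undefined, done: true };
      },
      [Symbol.iterator]() {
        return this; // makes the object usable in for-of and spread
      },
    };
  }

  console.log([...count(3)]); // [0, 1, 2]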


The mathematical definition of function also rejects all side-effects, so probably not the best thing to use around programming.

Call it a routine instead?


We had a descriptive yet very generic term for function-like stuff: "subprogram".


Oh man that brings memories: gosub in basic (though I think it was subroutine). Also Pascal had procedures vs functions.


It makes sense in a low-level language like Zig, because an async function there truly is a function frame (in the sense of setting up a stack and saving your registers, to be popped off at the end), but in high-level languages - and even Rust, if my understanding of how async works in Rust is right - it's different, and calling it a function seems incorrect.


Why do you care what it "truly" is under the hood? The observable behavior is what matters, and in many languages, this is made explicit by some kind of "as-if rule".


For a low level language having a proper mental model is important. Suppose you need to inspect the generated assembly.


Anyone care to explain their downvotes?


FWIW I'm really appreciating your comments. These are all models. I really enjoy hearing how others perceive things.


Yep. The only way I've ever been able to understand generator/coroutine-based code is by desugaring it into iteratees in my head. (Before I learned about iteratees, I simply couldn't understand them at all.)


This was killing me when back when I was a Python dev. A "generator" is just a callable object made less reasonable.


Even callable objects are kinda brain breaking. I used to joke that the duality of callable objects (and getter/setters, function prototypes in js) were as mysterious as the wave/particle duality in quantum mechanics.


I mean, if we're talking python here, functions are callable objects.


Good point.


Functions are objects too.


I used generators to increase the performance of bitstream parsing by several orders of magnitude over using a promise based system

https://github.com/astronautlabs/bitstream#generators


They were introduced when JS went through its "let's copy python" phase.

They're kind of crippled because generators only really become useful if you have the equivalent of itertools to build a sort of iterator algebra. I love generator-based code in python but it's hard to replicate that without having the library too.


That's called out in the article, too. There's a Stage 2 proposal before TC-39 to add a bunch of useful library functions to the language out of the box. In the meantime there's an itertools.js "port" and also IxJS, which mirrors RxJS (and .NET LINQ).

(I've used IxJS to good effect in JS apps.)


I've used JS generator functions at work. Part of it is redux-saga, which has already been mentioned, but part of it is another use.

We use generators to implement functions that can make requests to the caller for more information. So the caller f calls the generator function g, creating an iterator, and then calls iter.next(...) in a loop. If the result is not done then the return value is a request from g to f for more information, which f supplies. If the result is done then g has returned a value and f can use it.

The reason we do this is actually an architectural one. In this case, the extra information supplied by f to g comes from an external network, and g lives in an internal library that we don't want making any network requests or having any networking dependencies. My boss came up with this solution, and so far it's worked out pretty well for us!

Note that before we split things up like this, g did make network requests directly, so g was async and its code was full of awaits. After switching it to this new structure, the awaits just became yield*s. :) (And the actual network requests themselves became yields.)
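
A sketch of the calling convention being described (the request shapes and names are hypothetical, not our actual code):

  function* g() {
    // Ask the caller to fetch something on our behalf; g itself never touches the network.
    const user = yield { type: "fetch", url: "/api/user" };
    const posts = yield { type: "fetch", url: `/api/posts?user=${user.id}` };
    return posts.length;
  }

  async function f() {
    const iter = g();
    let result = iter.next();
    while (!result.done) {
      const request = result.value;                          // a request from g to f
      const data = await fetch(request.url).then((r) => r.json());
      result = iter.next(data);                              // supply the information g asked for
    }
    return result.value;                                     // g's return value
  }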


That sounds really interesting. I've been doing a bunch of things where the flow is irregular - sort of weird state machines. Putting it into a generator seems worth a try.


All you did was turn 15 lines of reasonably readable functional code (that, as another comment pointed out, can be done in 8) into 31 lines that were much harder to read.


The price of using simple examples is measured in pages of pedantry


I've found generators to be a great way to keep separate concerns separate. Now you can have a function whose job is to figure out what the next thing is, then consume it eslewhere as a simple sequence.

In the past I'd have a function accept a `visit(x)` callback, but then I had to invent a protocol for error handling and early cancellation between the host and callback.

The ability to just `yield x` and have done with it is a breath of fresh air.


In games, you can use a generator function for stuff that's supposed to take multiple frames. Saves you from either having to write a whole state machine system, or extracting all the state for every multi-frame action out into the object.
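
A sketch of the pattern (entity and game loop are hypothetical):

  // The game loop calls .next() once per frame; local variables keep the
  // action's state between frames, so no explicit state machine is needed.
  function* moveTo(entity, target, speed) {
    while (Math.abs(entity.x - target) > speed) {
      entity.x += Math.sign(target - entity.x) * speed;
      yield; // wait for the next frame
    }
    entity.x = target;
  }

  // In the game loop:
  // if (currentAction && currentAction.next().done) currentAction = null;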


Yeah, for things split across animation frames it's great to have the ability to trigger .next() exactly when and where you want to, while still writing straight-line stateful code.

Await lets you do the latter, but not the former.


Wouldn't it be better to use async/await for that?


My main takeaway from this is that I need to find a place to buy TimTams in the USA.


Same - I think this might be a clever timtam ad disguised as a programming article!


At least in Kentucky both Kroger and Target have them randomly in the imports section of the cookie aisle if you look for them. (Depending on supply chain presumably.) In Kroger it's often the hard to see/hard to find top shelf. They rarely get the fun or the especially good flavors, but they often have the originals.

I don't think Kentucky has an especially large Australian influence, so I assume that they distribute Tim Tams somewhat broadly nationally, but I don't know.


Strictly speaking, there's no reason anyone "needs" generator functions. Instead of pausing the function execution at each `yield` statement, you can just write a regular (non-generator) function that returns a struct including all the local variables used in the function, and repeatedly call the function with the returned struct as the first argument (or `reduce`). Admittedly this is just writing the exact mechanism of yield in a different way just to avoid using `yield`, but that's the point, it's not necessary to have it built in to the language.
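
A sketch of that hand-rolled equivalent, where the returned "struct" plays the role of the paused generator's local variables:

  // Generator version: function* naturals() { let i = 0; while (true) yield i++; }
  function naturalsNext(state = { i: 0 }) {
    return { value: state.i, state: { i: state.i + 1 } };
  }

  let step = naturalsNext();
  console.log(step.value); // 0
  step = naturalsNext(step.state);
  console.log(step.value); // 1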


For anyone trying to get a better idea of what parent means, the PHP Generators RFC[1] shows the equivalence between generators and iterators. Iterators are what OP means by "struct including all the local variables used in the function".

[1] https://wiki.php.net/rfc/generators


In a similar vein, closures are not necessary, since they can always be emulated by a global function with a data object to bundle captured state - which is still the usual way to do it in C, for example. But this sort of thing gets real awkward real fast if you do it a lot. Same thing with explicit iterators vs generators.


> they can always be emulated by a global function with a data object to bundle captured state

Only if you already have the infrastructure in place to pass that captured state through to where it's needed. For a C example, you can't (safely) use an "emulated closure" with qsort (e.g. if you want to write a function that takes a list and an integer, and sorts the list modulo that integer), because you have no way to pass the data object through.


You can kinda define generators with an arrow function, kinda:

   const foo = (function*() {
      yield 1;
   })();
   console.log(foo.next().value);
I've only written one generator in "real life" and ended up replacing it anyway:

  // A generator function that returns a rotating vector
   function* circlePath(stepsPerRotation = 60, theta = 0) {
     while (true) {
       theta += 2 * Math.PI / stepsPerRotation;
       yield [Math.cos(theta), Math.sin(theta)];
     }
   }


The main benefit of arrow function generators would be the inherited parent-scope this-binding they'd get


Yeesh! I personally stay away from "this" as much as I can, in part because of this effect. I can't imagine using the arrow function's odd "this" behavior intentionally! Plus, I like pure functions a lot so I'd rather just pass in any state through a parameter, which is less error-prone too.


Why is the arrow-function "this" behavior odd? It captures it from the outer scope, just like it does with all other local variables. This is exactly how it works in pretty much every other language out there.

It's the old-style "function" behavior that always determines "this" at the point of the call that's odd, if anything.


>Why is the arrow-function "this" behavior odd?

Because it behaves very differently from ordinary function "this" behavior.

Your statement about "this" being defined at the function call site is just...wrong.


It's literally how it works for arrow-less functions, though - "this" is bound to the receiver at call site:

   foo = function() { console.log(this.n); };
   o1 = { foo, n: 1 };
   o2 = { foo, n: 2 };
   o1.foo();  // 1
   o2.foo();  // 2
I can't think of any other language that does it this way. Arrow-functions, OTOH, behave as lambdas normally do:

   foo = () => { console.log(this.n); };
   o1 = { foo, n: 1 };
   o2 = { foo, n: 2 };
   o1.foo(); // undefined
   o2.foo(); // undefined


Neither of these code snippets demonstrate binding 'this' at the call site. In any event, I strongly suggest avoiding 'this' as much as humanly possible in javascript because a) it's wonky and b) you never need it.


In the first example, when you look at the first line

  foo = function() { console.log(this.n); };
You cannot possibly know what "this" refers to when the function is eventually called. Is it some object? Is it the surrounding class or function? Is it window? Is it undefined? Is it the function itself?

There's no way to know, because it hasn't been decided yet. Deciding what `this` refers to in the "normal"/non-arrow functions is up to the caller.

  o1.foo();
Here, `this` will be o1.

  o2.foo();
here, `this` will be o2.

  o1.foo.call("hello")
here, `this` will be `new String("hello")`.

That's what the parent commenter meant by "binding this at the call site".

Classes are still very common out there, despite the popularity of e.g. React function components etc, and `this` is pretty essential with classes.


Where do you think the binding happens in my example, then?


In line 3, where it's called

     })();
With this way of invoking, `this` will be `undefined` in "strict mode" (i.e. if the file or parent function has 'use strict' or if we are in an ES module), and Window in "sloppy mode".


I classify generators along with switch statements and while loops in JS. Perfectly fine to use if you know what you’re doing, but generally a code smell that signals further non-idiomatic code.


I prefer switch now over longer if/else chains, and find my code to be much simpler. I stopped following most of what Clean Code recommends, such as using polymorphism in place of a switch or if/else.


Many times, switch is used when an object would suffice


I'm sorry, but in my decade of JavaScript I never read anything more cryptic than this. What is this supposed to mean?


Not OP, but I've often seen cases where the same set of strings or enum values are used in switch statements in disparate parts of the code base. While each switch is supposed to be checking the same set of values, they invariably end up getting out of sync and checking different subsets of the actual possible set of values, leading to bugs.

Some languages support exhaustive switches (typescript has a way to do this), but oftentimes the better solution is to consolidate all of these switches into an object, or a set of objects that share an interface. That way all of the switch statements become function calls or property accesses, and the type checker can warn you at time of creation of the new object if you're not providing all of the functionality the rest of the code will require.


And of course, there is no free lunch. You run smack into the expression problem here.

The question is: do you have more places that check your conditions than types of options? Are you more likely to add a new check (and need to verify exhaustiveness) or add a new option (and need to modify every check)?


A switch statement allows you to have different behavior for different inputs, but often all that's needed is different output data.

If you have to map, say, country codes to country names, writing a long switch statement of

  case "US":
    name = "United States of America";
    break;

is going to suck. An object (in JS, an associative array more generally) will be simpler.

It's not always quite so obvious as that example.


Thanks for the example! I guess we consider switch/case in different situations then. I usually make use of switches in simple mappings and switch/case seems more readable and idiomatic to me there:

  type CountryCode =
    | "CH"
    | "US";

  function nameFromCountryCode(countryCode: CountryCode): string {
    switch (countryCode) {
      case "CH": return "Switzerland";
      case "US": return "United States of America";
      default: // exhaustiveness checking
        ((val: never): never => {
          throw new Error(`Cannot resolve name for countryCode: "${val}"`);
        })(countryCode);
    }
  }
...instead of...

  type CountryCode =
    | "CH"
    | "US";

  function nameFromCountryCode(countryCode: CountryCode): string {
    return ({
      'CH': (() => 'Switzerland'),
      'US': (() => 'United States of America'),
    }[countryCode] || (() => {
      throw new Error(`Cannot resolve name for countryCode: "${countryCode}"`);
    }))();
  }


Your second example is not how anyone here is recommending using an object to replace a switch. You've still got the function call, which is now redundant.

Here's how you'd actually do it:

    type CountryCode =
      | "CH"
      | "US";


    const countryNames: Record<CountryCode, string> = {
      "CH": "Switzerland",
      "US": "United States of America",
    }
Your second example is way more complicated than your first, but this one is even easier to read (at least for me), and still provides all the same functionality and type safety (including exhaustiveness checking).


I'll bite: why the anonymous function throwing an Error in the first snippet?


I didn't lay out a bait =)

It would also look more readable to me with a default return value. An exhaustiveness check just keeps your mapping functionally pure and the type checker can catch it.


@lolinder

...you just removed the default case and just introduced undefined as return value at runtime, so it isn't the same functionality.


First of all, can you just reply? It does weird things to the threading when you don't.

Second, removing the default case is part of my point.

You were writing TypeScript code, not raw JS, and in my improved example the type checker won't let you try to access countryNames["CA"] until "CA" has been added to CountryCode. Once "CA" is added to CountryCode, the type checker won't let you proceed until you've added "CA" to the countryNames. The only situation in which a default case is needed is if you throw an unchecked type assertion into the mix or if you allow implicit any.

With implicit any turned off, this code:

    const test = countryNames["CA"]
Gives this error:

    TS7053: Element implicitly has an 'any' type because expression of type '"CA"' can't be used to index type 'Record<CountryCode, string>'.   Property 'CA' does not exist on type 'Record<CountryCode, string>'.


the reply button wasn't there and I assumed we'd hit a depth limit... guess I just needed to wait a bit ¯\_(ツ)_/¯

Moving the case for a default value to the caller is a weird choice IMHO. Types should reflect the assumed state of a program, but we all know the runtime can behave in unexpected ways. Assume an SPA where the response of an API call was typed with CountryCode in some field, and then somebody changed the API - I prefer to crash my world close to where my assumption no longer fits, but YMMV.

Your implementation (and safety from the type checker) only helps at build time and puts more responsibility and care on the caller. That implementation could prolong the error throwing until undefined reaches the database, mine could already crash at the client. Either TS or JS will do that.


> Assume an SPA and the response of an API call was typed with CountryCode in some field, then somebody just worked on the API - I prefer to crash my world close to were my assumption doesn't fit anymore, but YMMW.

Agreed on crashing, but I prefer to push validation to the boundaries of my process and let the type checker prove that all code within my process is well-typed.

Defensive programming deep in my code for errors that the type checker is designed to prevent feels wasteful to me, both of CPU cycles and programmer thought cycles. Type errors within a process can only occur if you abuse TypeScript escape hatches or blindly trust external data. So don't cast unless you've checked first, and check your type assumptions about data from untrusted APIs before you assign it to a variable of a given type.


It's not in JavaScript, but see Unifying Church and State: FP and OOP Together for another example of the "switch / named functions" (or "Wadler's Expression Problem") described in a pretty intuitive way: https://www.youtube.com/watch?v=IO5MD62dQbI


there goes my evening, that's what I like about HN. thanks for the refs!


>I'm sorry, but in my decade of JavaScript I never read anything more cryptic than this. What is this supposed to mean?

Switches in JS are just implicit maps.

  const caseVal = 'case1';

  const result = ({
   'case1': (()=> 'case1Val'),
   'case2': (()=> 'case2Val'),
   'case3': (()=> 'case3Val'),
  } [caseVal] || (()=> 'default'))()
Has identical performance to a switch case and is far more idiomatic.


Thanks for the example!

I assume the identical performance just comes from the optimization of the JIT, as allocating objects in the heap seems quite overkill for such a control flow. I only fall back to this when switch/case isn't available in the language, eg. in Python.

Is this a thing in the JS community?


Typescript supports exhaustive switches which are pretty powerful, especially when working with proper union types and the like.


I find using objects mapped as switchKeyToFunction to be more idiomatic and readable. e.g. shapeToDrawFn[shape] But to each their own


Will the compiler check that you have an exhaustive set of keys in your definition of `shapeToDrawFn`? (Not a rhetorical question, I don't know.)


Combine them with a decent coroutine library and you can write relatively straightforward single-threaded concurrency code. Ramsey Nasser has been exploring that:

[0] https://merveilles.town/@nasser/107892762993715381

[1] https://jsbin.com/mupebasiro/edit?html,console,output


What's the reasoning for using this over promise and/or async?


Fair question! Well, look at the part in the code example in the second link where it goes:

    // then either cancel, drag/drop, or click
    yield* coro.first(
        /* ... */
    );
In this example that function takes three generator functions. Whichever of the three yields a value first "goes through", and coro.first() then aborts the other two. The resulting code reads a lot like how you would describe it:

"first detect a click on the rectangle, then either cancel the repositioning if escape key is pressed, move the rectangle if the mouse moves, or drop it if a click happens"

The structure is a lot more like the "if/else" kind of control flow structures that most people are more familiar with. On top of that it's deterministic (technically, single-threaded use of things like setTimeout also are but because of how you would structure this it is easier to reason about).

Another way to look at it would be to say that this way of expressing things aligns better with solving the problem in terms of state machines (and for UI that often is quite a nice approach).

This is known as the structured synchronous concurrency paradigm, and it's actually quite nice for certain types of (single-threaded) concurrency, especially complex UI events. Céu is a language that goes a bit deeper into this, as does the Blech language (both target embedded contexts; button presses changing machine settings are places where FSMs are a natural fit).

http://ceu-lang.org/

https://www.blech-lang.org/


I use generator coroutines pretty extensively in my game framework. It makes for some clean cooperative code that’s fun to write.

https://github.com/jbluepolarbear/Bumble


Hey, this is very cool. I might try making some minigames. Thanks for sparking that interest in me again.


Hey, that looks nice, but it also looks like it's had no activity for five years (I guess it was stable enough?)

Do you have a link to games written with it too? :)


Yeah, it’s fairly stable for what I use it for. I mainly use it for game jams and have done a few professional projects with it. Here’s an asteroids game I made with the framework: https://github.com/jbluepolarbear/Bumble-Asteroids

Here’s a playable link: https://www.jeremyiscool.com/Bumble-Asteroids/index.html


Nice!


The author mentions iterator helpers [1], which could make stuff easier.

However, there is also a different proposal that touches on a similar issue and might fix this too: the bind operator proposal [2].

As the name implies, it allows setting the `this` for a function call. This opens up the possibility of implementing the common map/filter/reduce functions in a lazy manner _for arrays_. Taken from the samples, this could evaluate lazily on an array returned by `getPlayers()`:

    import { map, takeWhile, forEach } from "iterlib";

    getPlayers()
        ::map(x => x.character())
        ::takeWhile(x => x.strength > 100)
        ::forEach(x => console.log(x));
Of course, this could also be used for iterators. However, the bind operator proposal is not very active any more.

[1]: https://www.proposals.es/proposals/Iterator%20helpers [2]: https://github.com/tc39/proposal-bind-operator


The pipeline operator [1] is a bit further along, although I'm not exactly holding my breath for it either. At least it currently has a champion. With pipelines it would look like:

    import { map, takeWhile, forEach } from "iterlib";

    getPlayers()
        |> map(%, x => x.character())
        |> takeWhile(%, x => x.strength > 100)
        |> forEach(%, x => console.log(x));
1: https://github.com/tc39/proposal-pipeline-operator


I found async generators to be handy while porting a C Prolog interpreter to WASM/JS. A `for await` loop is a natural way to iterate over query results, and as a bonus you can use `finally` within the generator to automatically clean up resources (free memory). There are ways for determined users to leak (manually iterating and forgetting to call `.return()`) but I've found that setting a finalizer on a local variable inside of the generator seems to work :-). I can't say manual memory management is a common task in JS but it did the trick.

The generator in question, which is perhaps the gnarliest JS I've ever written: https://github.com/guregu/trealla-js/blob/887d200b8eecfca8ed...
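A generic sketch of that cleanup pattern (not the trealla-js code; the "resource" here is just a stand-in):

    async function* query(goal: string) {
      const handle = { goal, freed: false }; // stand-in for a native resource
      try {
        for (let i = 1; i <= 3; i++) {
          yield { goal, solution: i }; // stand-in for a query answer
        }
      } finally {
        handle.freed = true; // cleanup runs even if the consumer bails out early
        console.log('freed resources for', goal);
      }
    }

    for await (const answer of query('member(X, [1,2,3]).')) {
      console.log(answer);
      break; // breaking early still triggers the generator's finally block
    }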


Transducers. See Rich Hickey's talk about transducers here: https://www.youtube.com/watch?v=6mTbuzafcII

You can implement transducers in JavaScript using generators. It was somewhat mind bending but fun.
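Generators also give you a lighter-weight take on the same idea: composable, single-pass transformations over any iterable without intermediate arrays (a rough sketch, not Hickey's transducer protocol):

    function* map<T, U>(iter: Iterable<T>, fn: (x: T) => U) {
      for (const x of iter) yield fn(x);
    }

    function* filter<T>(iter: Iterable<T>, pred: (x: T) => boolean) {
      for (const x of iter) if (pred(x)) yield x;
    }

    // One pass over the data, no intermediate arrays.
    const odds = [...map(filter([1, 2, 3, 4, 5], (x) => x % 2 === 1), (x) => x * 10)];
    console.log(odds); // [10, 30, 50]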


I keep seeing Sinclair's blog posts on /r/javascript, and this is the first time I've seen one on Hacker News.

While I appreciate the effort needed to write one of these posts, I can't help but think about the amount of bad code that these blog posts are inspiring.


If generators were treated as first-class concerns in JS, they'd be so much more useful, but for them to be useful today you have to do a lot of work to make it so, in my opinion. We need iterator helpers built into the standard language to make them more useful.


> Arguably, Australia’s greatest cultural achievement is the Tim Tam.

After eating half a packet with my cuppa this morning (and feeling somewhat queasy for it), I can confirm timtams are one of our finest accomplishments.


I do wish that JS had more builtin syntactic sugar around generators (and particularly async generators). They would be a lot more convenient and syntactically cleaner if most of the Array builtins could also be used for iterators. For async iterators the builtins would need to leverage the promise builtins as well, so you could do something like:

  const result = await asyncIterator.map(doSomethingAsync).reduce(asyncAggregatorFn)
Without ever having to load all of your data into memory.


You might be interested in what IxJS has to offer:

https://github.com/ReactiveX/IxJS

RxJS is for observables. IxJS is for iterables. Or, a bit more accurately, as explained in the readme:

   IxJS unifies both synchronous and asynchronous pull-based collections, just as RxJS unified the world of push-based collections. RxJS is great for event-based workflows where the data can be pushed at the rate of the producer, however, IxJS is great at I/O operations where you as the consumer can pull the data when you are ready.



There's an active proposal which would let you do exactly that, going for stage 3 (ready for implementations) at the end of the month: https://github.com/tc39/proposal-iterator-helpers


You can do something like that with rxjs.


You can also implement context managers / try-with-resources via generators. Not that I've seen it in the wild or recommend it. I remember it being annoying to debug for some reason.

https://stackoverflow.com/questions/62879698/any-tips-on-con...
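A sketch of one way to do it, in the spirit of that Stack Overflow thread (the resource and helper names are made up):

    // The generator yields exactly once; calling .return() resumes it so its
    // finally block (the cleanup) runs.
    function using<R, T>(resource: () => Generator<R>, fn: (r: R) => T): T {
      const gen = resource();
      const { value } = gen.next(); // run up to the yield to acquire the resource
      try {
        return fn(value);
      } finally {
        gen.return(undefined); // release, even if fn threw
      }
    }

    function* tempFile() {
      const handle = { name: '/tmp/scratch' }; // hypothetical acquire
      try {
        yield handle;
      } finally {
        console.log('cleaned up', handle.name); // hypothetical release
      }
    }

    using(tempFile, (file) => console.log('working with', file.name));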


For a concrete example, Relay uses them to garbage collect data in an asynchronous fashion. If an update is detected during the course of a GC pass, the pass is aborted.

https://github.com/facebook/relay/blob/main/packages/relay-r...


Generators are a fun use case for bthreads, since they allow you to "cut" through other modules and control their execution: https://medium.com/@lmatteis/b-threads-programming-in-a-way-...


I started using async generators years ago, with for-await-of loops to consume streaming data while guaranteeing that processing order is maintained when each chunk of data requires async work. It's difficult to do any other way; you'd need some kind of queue structure, which would make the code much harder to read.
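A small sketch of the pattern (the chunk source and handler are stand-ins):

    // Stand-in async source of chunks.
    async function* chunks(): AsyncGenerator<string> {
      for (const c of ['a', 'b', 'c']) yield c;
    }

    // Stand-in async work per chunk.
    async function handleChunk(chunk: string): Promise<void> {
      await new Promise((resolve) => setTimeout(resolve, 10));
      console.log('processed', chunk);
    }

    // for await only pulls the next chunk after the previous await resolves,
    // so chunks are processed strictly in order with no explicit queue.
    for await (const chunk of chunks()) {
      await handleChunk(chunk);
    }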


I really like using generators in test code for setting up mocks/stubs.

You can be really specific about how you want it to run, without having to memorize a new language of function calls that have to be triggered in specific orders.
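A toy sketch of the idea (the stub and its canned responses are made up):

    // The test script reads top to bottom, in the order the calls will happen.
    function* cannedResponses() {
      yield { status: 200, body: 'first call succeeds' };
      yield { status: 500, body: 'second call fails' };
      yield { status: 200, body: 'third call recovers' };
    }

    const responses = cannedResponses();
    const fetchStub = () => responses.next().value;

    console.log(fetchStub()); // { status: 200, body: 'first call succeeds' }
    console.log(fetchStub()); // { status: 500, body: 'second call fails' }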


Async generators should be a very natural interface for many event-based APIs, e.g. WebSocket.onmessage. Unfortunately converting a callback-based API to an async generator is highly nontrivial.
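One way to bridge the gap is to buffer events in a queue and wake a pending consumer when something arrives; a rough sketch (ignoring close/error handling for brevity):

    function messages(ws: WebSocket): AsyncGenerator<MessageEvent> {
      const queue: MessageEvent[] = [];
      let wake: (() => void) | null = null;

      ws.onmessage = (event) => {
        queue.push(event);
        wake?.(); // resume the consumer if it is waiting
        wake = null;
      };

      return (async function* () {
        while (true) {
          while (queue.length > 0) yield queue.shift()!;
          await new Promise<void>((resolve) => { wake = resolve; });
        }
      })();
    }

    // for await (const event of messages(ws)) console.log(event.data);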



I use them for reading large files that need to be parsed.

A token might require a complex decode, so the generator fires once per token, but keeps the state of the file structure and the parser.

It greatly simplifies the code.
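In sketch form (a trivial whitespace tokenizer standing in for the real decode):

    // Parser state (the current position) lives in the generator's locals;
    // the caller just pulls one token at a time.
    function* tokens(text: string): Generator<string> {
      let pos = 0;
      while (pos < text.length) {
        while (pos < text.length && /\s/.test(text[pos])) pos++; // skip whitespace
        if (pos >= text.length) return;
        const start = pos;
        while (pos < text.length && !/\s/.test(text[pos])) pos++; // read one token
        yield text.slice(start, pos);
      }
    }

    for (const token of tokens('parse large files lazily')) {
      console.log(token);
    }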


I rarely use generators but used one the other day.

I had some code that needed to deal out the next URL in a sequence to functions that were triggered by events occurring at random times.
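Something along these lines (the URLs and handler are made up):

    function* urlSequence(urls: string[]) {
      for (const url of urls) yield url;
    }

    const nextUrl = urlSequence(['/page/1', '/page/2', '/page/3']);

    // Called from event handlers at arbitrary times; each call deals out the next URL.
    function onSomeEvent() {
      const { value, done } = nextUrl.next();
      if (!done) console.log('fetching', value);
    }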


I used a generator to iterate over AWS objects with automatic paging. The code was very compact and easy to read and write. Other approaches would have been worse.

I don't think I ever used it again.
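The general shape of that pattern, with a hypothetical page-fetching callback rather than the real AWS SDK:

    // The generator hides the nextToken bookkeeping; callers just iterate.
    async function* listAll<T>(
      listPage: (token?: string) => Promise<{ items: T[]; nextToken?: string }>
    ): AsyncGenerator<T> {
      let token: string | undefined;
      do {
        const page = await listPage(token);
        yield* page.items;
        token = page.nextToken;
      } while (token);
    }

    // Usage: for await (const obj of listAll(fetchObjectsPage)) { ... }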


It’s silly how the ‘async’ function modifier got a keyword before the function keyword and the generator modifier got a symbol ‘*’ after the function keyword…


> Some people might find the Australian and British accents difficult to understand.

Or even recognise; Graham Norton has an Irish accent, not British.


I use them to iterate over structs in WASM memory that are neatly packaged as an "object", but are just getters into the current offsets.
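In sketch form, assuming fixed-size records in linear memory (the field layout here is invented):

    // Yield one view per record; the getters read whatever is at the current offset.
    function* structs(memory: DataView, count: number, stride: number) {
      for (let i = 0; i < count; i++) {
        const base = i * stride;
        yield {
          get x() { return memory.getFloat32(base, true); },
          get y() { return memory.getFloat32(base + 4, true); },
        };
      }
    }

    const view = new DataView(new ArrayBuffer(8 * 4));
    for (const s of structs(view, 4, 8)) {
      console.log(s.x, s.y);
    }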


Block /assets/book-bg.jpg or remove the background from body::before to make it readable. Article still probably about JS.


Halfway through the article I completely forgot that it was about JavaScript, and I could only think about Tim Tams


Paging, Streaming, Loops, Coroutines between asynchronous systems, queues, and state machines, emulators, etc.


Can I ask what kind of Tea goes best with Tim Tams? Are we talking about just simple black Earl Grey here?


> Repeat until there are no more Tim Tams, or you feel physically ill.

Similar to consumption of generator functions.


I like generators and using them in for (... of ...) {...} loops. Reminds me of Python joy.
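For example (a tiny made-up one):

    function* countdown(n: number) {
      while (n > 0) yield n--;
    }

    for (const i of countdown(3)) {
      console.log(i); // 3, 2, 1
    }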


Have you ever seen the rain?



