Purely Functional Retrogames (2008) (dadgum.com)
87 points by adgasf on July 25, 2016 | hide | past | favorite | 29 comments



> An open question is "Is being purely functional, even excepting I/O, worthwhile?" Or is it, as was suggested to me via email earlier this year, the equivalent of writing a novel without using the letter 'e'?

This is a great question. I've been playing with pure functional programming, and I also noticed that applying it to video games is very hard. In other areas, FP provides many advantages such as testability and safe parallel execution. But for simple games these are not that useful. Simple games seem very well suited for the imperative, global state model.

There is one aspect that might make FP worth it for games, and that's interactive programming. The immediate feedback loop described by Bret Victor [1], with the possibility of going back and forth in time, is easier to achieve with a pure functional approach.

The Elm debugger [2] is an implementation of this concept: using hot swapping and functional reactive programming, it's possible to design games interactively in a unique way that is hard to achieve with the traditional imperative approach.

[1] https://www.youtube.com/watch?v=PUv66718DII [2] http://debug.elm-lang.org/


A few notes:

- Elm dropped FRP [0]

- Even in its older FRP incarnation, Elm didn't allow changing the flow graph (sorry, I can't remember the correct Elm term for this) while the program ran [1], making it hard to say this would be an ideal way to go for true 'interactive coding' for games.

To clarify my opinion, I would like to add that I am assuming an ideal of 'interactive game coding' where one would start a GL context & REPL session at the start of the day and make the game without ever closing the session or killing the context.

The OP limited himself to retrogames, which is a wise move for exploring this topic: _one_ of the things that makes writing modern games hard is the extent to which you are pushing what the hardware can do. It is normal to be required to be really strict about what happens when in each frame, and that's not possible if, part-way through rendering, you have to process events coming from some input thread that has no idea about the context your program is in.

Lastly, I would add that your program itself is state. Interactive programming requires you to create and remove functions from your running program, and bugs particular to doing this can occur (some object holds a reference to a version of a function you have removed, etc.). Unless we move to a point where our code (compiled & text) lives in some persistent data structure that allows rollback in some meaningful way, I find it hard to imagine _real_ functional interactive programming.

This is a cool topic, ripe for discussion and progress, but just dropping in interactive programming does not in itself make game coding easier. However, it is a pile of fun.

[0] http://elm-lang.org/blog/farewell-to-frp [1] https://www.youtube.com/watch?v=Agu6jipKfYw


> But for simple games these are not that useful. Simple games seem very well suited for the imperative, global state model.

I think this indicates a bias -- resulting from years of programming games in these models.

I think pure FP in the form of FRP is a far better match for simple games than the imperative, global state model.


> But for simple games these are not that useful. Simple games seem very well suited for the imperative, global state model.

Could you elaborate? I think this is one of those things we "know" which isn't necessarily so. Simple sprite-based games can be written using purely functional techniques (see: https://github.com/nikki-and-the-robots/nikki). It's just that it's not common to use FP languages for this.


Yeah, I know you can develop games in a purely functional fashion. My point is: what is the advantage of doing so, given that most resources are developed for the imperative way? I think the interactive approach taken by Elm is a very good advantage of FP for games.


It seems at least some of the difficulties described are managed better in Haskell.

For example, rebinding to the same name (as an alternative to destructive update) can be handled conveniently and nicely via state monad and the lens library, with syntax roughly like:

  pacman.position += speed

This is still purely functional because the resulting type is roughly: State Game () which is equivalent to: (Game -> Game) (a pure function that takes a game and returns a modified game).
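A minimal sketch of that equivalence, written without the lens library (all names here are illustrative, not from the parent comment):

```haskell
import Control.Monad.State (State, modify, execState)

-- Hypothetical minimal game state, just for illustration.
data Game = Game { position :: Int, speed :: Int } deriving (Show, Eq)

-- A destructive-looking update written in the State monad...
move :: State Game ()
move = modify (\g -> g { position = position g + speed g })

-- ...is equivalent to an ordinary pure Game -> Game function:
movePure :: Game -> Game
movePure g = g { position = position g + speed g }
```

Running execState move g yields exactly movePure g for any g, so nothing impure is going on.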


> This is still purely functional because the resulting type is roughly: State Game () which is equivalent to: (Game -> Game) (a pure function that takes a game and returns a modified game).

This is the kind of arguing that could easily be extended to C and other "impure" languages. And even the IO monad is "pure" -- it's just values containing computations executed by the runtime (i.e. the computational side effects are not part of the language / type system).

Using the State Monad for += syntax is no better than doing the equivalent in C.

The real enemy is accidental state. That is, meaningless (but significant) dependencies between conceptually independent calculations. For example:

   pacman.position += speed
   pacman.lives = ghostAt pacman.position

While updating the position based on the speed and deciding whether pacman was hit by a ghost are conceptually independent, the outcome of the above code is dependent on the order of these lines.

Instead it's often cleaner to do something along the lines of

   pacmanLives :: Pacman -> Ghosts -> Bool
   pacmanNewPosition :: Pacman -> Position
   -- use both of these functions in the update routine.
   -- Hand them the pacman from the last frame
   -- (i.e. don't "chain" the functions)

This way a dependency on the "order" of these computations is "avoided" (kind of: there is still an order between the frames, but that is not as "accidental").


You cannot implicitly read pacman.position as a subexpression like in C, because it is an effect, and the order of effects has to be explicit in Haskell.

So it would have to look like:

  pacman.position += speed
  pos <- pacman.position
  g <- ghostAt pos
  pacman.lives .= g

Though a dead Pacman probably has no position, so it makes more sense that the last line is:

  when g (pacman .= DeadPacman)

Messing up the order because of implicitness becomes much harder.


What you say (re: subexpression) is true in Haskell. (I was under the impression that "pacman.position" wasn't valid Haskell anyway, but now I think it is. I never had a serious interest in lenses.)

However that's a cosmetic issue. The point still stands: If you chain ("use the State monad"), the outcome depends on whether you update the position or the liveness first.


It does, but unlike impure languages, this ordering is not implicit in the evaluation order -- but explicit.

In the same sense, if you compose the pure functions:

  updatePacmanPosition . checkPacmanAlive 
vs:

  checkPacmanAlive . updatePacmanPosition 

you will get the exact same issues.

The state monad merely lets you more conveniently write it more like:

  do
    updatePacmanPosition
    checkPacmanAlive

where the do statements are (sort of) composed together as functions.

So the state monad here is not the issue -- it is sugar around function composition.

The issue is whether the ordering of these manipulations is explicit via data-flow or statement ordering, or whether it is implicit in evaluation order applying to shared, mutable state.
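To make that concrete, here is a small sketch (all names hypothetical) showing that the do block really is just composition, with the ordering visible as statement order:

```haskell
import Control.Monad.State (State, modify, execState)

-- Hypothetical state; a ghost is assumed to sit at position 5.
data Pacman = Pacman { pos :: Int, alive :: Bool } deriving (Show, Eq)

updatePacmanPosition :: Pacman -> Pacman
updatePacmanPosition p = p { pos = pos p + 1 }

checkPacmanAlive :: Pacman -> Pacman
checkPacmanAlive p = p { alive = pos p /= 5 }

-- Statement order in the do block is exactly composition order:
step :: State Pacman ()
step = do
  modify updatePacmanPosition
  modify checkPacmanAlive

-- execState step behaves as checkPacmanAlive . updatePacmanPosition
```

Swapping the two modify lines gives the other composition order, and the difference is visible in the source rather than hidden in evaluation order.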


Yes, that was actually my point. State is essentially function composition. Thus I'm arguing for avoiding dependencies between conceptually independent computations.


That's true until the composition and data flow are hidden from view because they're implicit in the shared mutable state. Also, aliasing issues don't occur with composition the way they may with mutable state.


They are equivalent if you place your entire program into some StateT a IO monad, where a is the contents of the entire memory accessible within your program.

That's to say, you do have a point, but both too-big states and too little pure code are easily recognizable smells. (I'm not sure they are avoidable, but they surely are smells.) In that sense, just by being easily recognizable they are already better.

I've been writing a small game in Haskell, and it has been fun to abuse monads and type classes for all sorts of architectural features. But it's impossible to look at the code and mistake it for a well-organized one.



Maybe it's just me, but I find the part about structures a bit silly. So let's get rid of the big fat structs and replace them with tuples. Nested tuples. How is that any different from a struct, apart from immutability? And it's made worse by the fact that the fields are now just indexes into a tuple/list instead of being named.


I agree that it's good to name the fields (i.e. use records or similar). The important idea is going from a big flat struct to a hierarchical one where related data is grouped together, so that you can have functions that operate on small subsets of the data. This gives you visibility over which functions touch which parts of the state, and makes those functions a lot more testable.
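For instance, a hierarchical layout might look like this hypothetical sketch (names invented here, not from the article):

```haskell
-- Related data grouped into small records, so functions can operate
-- on, and be tested against, just the piece of state they need.
data Player = Player { px :: Int, py :: Int } deriving (Show, Eq)
data World  = World  { player :: Player, score :: Int } deriving (Show, Eq)

-- Touches only Player; testable without building a whole World.
movePlayer :: Int -> Int -> Player -> Player
movePlayer dx dy p = p { px = px p + dx, py = py p + dy }

-- The top-level update threads the sub-update through the structure,
-- making it visible exactly which part of the state is touched.
tick :: World -> World
tick w = w { player = movePlayer 1 0 (player w) }
```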


And having nested structures achieves the same.


Of course.


I think I'd go for records over tuples, so that the values can be named.


If you're using a record, you're effectively just using a struct again...


Mathematically, structs and tuples are virtually equivalent. They are both examples of the same fundamental data construct: Product types (i.e. grouping multiple values together)

The other two most important abstractions are sum types, i.e. unions or alternatives, and exponent types, i.e. function types, and IMO they're the two that are far more important for the success of functional programming.
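In Haskell terms, a toy sketch of all three (the names are invented for illustration):

```haskell
-- Product type: groups values together, like a struct or a tuple.
data Point = Point Int Int deriving (Show, Eq)

-- Sum type: one of several alternatives, like a tagged union.
data Input = KeyPress Char | Quit deriving (Show, Eq)

-- Exponent (function) type: pattern matching on the sum type
-- drives which behavior applies.
handle :: Input -> Point -> Point
handle (KeyPress 'd') (Point x y) = Point (x + 1) y
handle (KeyPress _)   p           = p
handle Quit           p           = p
```

The compiler can check that handle covers every alternative of the sum type, which is a large part of why these two abstractions matter so much in practice.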


I think that one depends on the language. In F#, for example, records are pass-by-reference and structs are pass-by-value.

In a language-unspecified setting I like to follow the convention that structs are implied to be mutable data types and records are implied to be immutable data types. I don't know how widely-used that convention is, though.


Probably immutability is the point.

Whether you want to name record members depends on the situation, but not having to name them consistently throughout your project (and instead being able to give names on pattern match) is often advantageous. Look at the mess that is SQL queries, where you have to juggle all those names when joining.


I think the design of this part depends on how well your functional language of choice implements immutable data structures, from a performance and memory consumption (and by inference, garbage collection) perspective.


This is my pure-functional Katamari "game" (more of an outline / example, than a full-blown game):

https://github.com/fccm/OCamlODE/blob/master/examples/katama...


This is cool. A lot of people tell us how great functional programming is (I can see the benefits of avoiding state and side effects), but it is rare to see an article that shows how you go about designing something in that style.


> Back when I actually wrote 8-bit games, much of my code involved updating timers and counters used for animation and special effects and so on. At the time it made a lot of sense, given the limited math capabilities of a 6502. In the modern world you can achieve the same by using a single clock counter that gets incremented each frame.

So, just to demonstrate an IMO elegant approach in the destructive style for updates that depend on current time values: for each update, pretend all computation happens in an instant, specifically at the moment an event triggered the update (this could be the central game loop, user input, etc.). Doing so makes it easy to keep everything synchronised; it's the abstraction Céu uses to great effect for its synchronous concurrency model[0]. The only limitation is that you need to keep your updates small and fast enough to finish before the next update is triggered, but that's a good goal to strive for anyway.

Pretending all computation happens instantly lets me use two values: a timestamp referencing global time, and the time difference from the previous time this update was triggered. Counters that get updated every time this update is triggered just increment by the time difference. Counters that don't always get updated (or whose update can be triggered by different events) have to store the timestamp of their last update, compute their own time difference on the fly with a simple subtraction, update their value, and store the new timestamp. Very simple.

Using timers then just means checking whether the updated value exceeds a certain threshold, and if so subtracting that value and calling another function. Basically, these are incremental error algorithms[1], as applied in Bresenham's line algorithm[2], but for timers instead of slopes.

(Note that this could all be written in a functional style with functions that take a current counter value and current/previous timestamp, returning a new counter value, but at that point it just feels like writing functional code imperatively)
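That functional rendering might look roughly like this sketch (all names hypothetical, timestamps assumed to be in milliseconds):

```haskell
-- A counter that may miss updates: it remembers the timestamp it last
-- saw and computes its own time difference on the fly.
data Counter = Counter { value :: Int, lastSeen :: Int } deriving (Show, Eq)

-- 'now' is the single shared timestamp for this update, as if the
-- whole update happened in one instant.
tickCounter :: Int -> Counter -> Counter
tickCounter now (Counter v t) = Counter (v + (now - t)) now

-- Incremental-error timer: when the accumulated value crosses the
-- period, subtract the period (keeping the remainder) and report a fire.
tickTimer :: Int -> Int -> (Int, Bool)
tickTimer period acc
  | acc >= period = (acc - period, True)
  | otherwise     = (acc, False)
```

Only addition, subtraction, and a comparison are needed, matching the point below about avoiding multiplication, division and modulo.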

I often use this in my Arduino code. It speeds up my programs, because calls to millis() are not as cheap as referencing a global, and all this approach needs computationally is addition and subtraction, avoiding multiplication, division and modulo. More importantly, it makes it much easier to reason about what is happening, because if all updates reference the same time-step, side effects are much more predictable.

The aforementioned Céu language really takes this style of concurrency to heart in a very elegant way.

[0] http://www.ceu-lang.org/

[1] https://en.wikipedia.org/wiki/Glossary_of_computer_graphics#...

[2] https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm


>> Doing so makes it easy to keep everything synchronised; it's the abstraction Céu uses to great effect for its synchronous concurrency model[0]. The only limitation is that you need to keep your updates small and fast enough to finish before the next update is triggered, but that's a good goal to strive for anyway.

I think the way games typically implement more or less what you are describing, but without the hard-realtime update constraints, is what is commonly referred to as a 'fixed timestep'.

It's what I used to implement the main loop of the game I'm currently working on, which has an 'action replay' function that has to be capable of producing the exact same gameplay sequence based on recorded input events, which means 100% accurate, discrete and predictable updates to the game state. It's pretty cool to record some gameplay on actual hardware, then replay it in the simulator, which has completely different performance characteristics, and get a pixel-perfect reproduction of the gameplay clip :-). Extremely useful for gameplay debugging and prototyping as well.
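The accumulator at the heart of a fixed timestep can be sketched in a few lines (a hypothetical illustration, not the poster's actual code): real elapsed time is consumed in fixed slices, so the simulation always advances by identical, reproducible steps.

```haskell
-- Consume accumulated real time in fixed dt-sized slices, stepping the
-- simulation once per slice; leftover time carries to the next frame.
fixedStep :: Int            -- fixed timestep, e.g. 16 ms
          -> (w -> w)       -- one discrete simulation step
          -> Int            -- accumulated unsimulated time
          -> w              -- current world state
          -> (w, Int)       -- stepped world, leftover time
fixedStep dt step acc w
  | acc >= dt = fixedStep dt step (acc - dt) (step w)
  | otherwise = (w, acc)
```

Because the step function only ever sees identical dt-sized slices, replaying the same recorded inputs reproduces the same sequence of states regardless of the machine's real frame rate.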

There's a nice article here [1] that explains the concept much better than I could, so I'll just refer to that instead ;-)

[1] http://gafferongames.com/game-physics/fix-your-timestep/


> It's pretty cool to record some gameplay on actual hardware, then replay it in the simulator, which has completely different performance characteristics, and get a pixel-perfect reproduction of the gameplay clip :-). Extremely useful for gameplay debugging and prototyping as well.

Yeah, determinism in programming seriously rocks!

BTW, I think what I described was the variable delta time, not the fixed time step. That semi-fixed time-step looks pretty interesting too!



