Fear of Macros (greghendershott.com)
83 points by ranit on June 10, 2015 | 56 comments



After a few years of using and abusing Ruby's meta-programming, I've come to understand it as an anti-pattern: you incur technical debt when you use it, and you should refactor it away as soon as you know what you're trying to do with it.

It's one thing to enshrine it as a part of the language, like C++ templates and the C preprocessor, because that forces you (hopefully!) to think about things like maintainability and concern separation.

It's quite another to ad-hoc what amounts to a language extension to paper over quick-and-dirty design. And there's not much middle ground between the two. I'll ask myself the question, "do I want to extract this into a gem and maintain it separately?" and run pretty quickly into the normal coding equivalent of "am I making a game engine or a game?" My vision has to be pretty darned awesome for the answer to be "both."

Meta-programming is great to have; having access to a credit card is way better than not having one, if you have the discipline to use it properly. It's essentially why I stick with Ruby rather than move to a statically-typed language: reflection and duck typing are tremendously useful when you don't really know what it is you're building yet but need to execute quickly. You can move at the speed of thought.

But I still consider it an anti-pattern. As the system matures I refactor it into plain-old boring Ruby, easily replaceable with Go or whatever if the need presents itself. I also like to reduce dependencies on libraries whenever practical.


Ruby's metaprogramming features are a far cry from Scheme's hygienic macro system. By saying metaprogramming is an anti-pattern, we're saying that we lowly programmers ought not to put on our language-designer hats and should instead let the professionals take care of it. Without syntactic abstraction, we're at the mercy of the language designers, waiting for the new language version to include the new syntax we want. And what about the syntax we'd like to have that isn't general enough to be included in a core language implementation?

For example, it's great to finally have a 'let' form in JavaScript, but any one of us could have written it ages ago if there were a macro system; instead we've been writing (function(x, y) { ... })(z, w), which is the ugly version of what 'let' essentially expands into.
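
To make the comparison concrete, here is a minimal sketch in Lisp terms (Common Lisp, with the hypothetical name `my-let`) of how such a binding form can be defined as a macro that expands into exactly that kind of immediately-applied function:

    ;; Sketch: MY-LET expands into an immediately-applied LAMBDA, which is what
    ;; the JavaScript (function(x, y) { ... })(z, w) pattern spells out by hand.
    (defmacro my-let (bindings &body body)
      `((lambda ,(mapcar #'first bindings) ,@body)
        ,@(mapcar #'second bindings)))

    ;; (my-let ((x 3) (y 4)) (+ x y))  expands to  ((lambda (x y) (+ x y)) 3 4)  => 7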

I've written small embedded domain specific languages for coroutines and functional reactive programming. They are small bits of syntactic sugar that make talking about cooperative multi-tasking or values that change with time much more natural, and I didn't have to invent a whole new programming language to do it. A more well known example of an EDSL is miniKanren, a language for relational/logic programming embedded in Scheme.

PG's "beating the averages" essay argues that you should be using the most powerful language you can. The most powerful languages have macros. Good macros allow you to build up your programming language to be tailor fit to your problem domain. Let's not write off a powerful tool because it's possible to write bad code with it.


For that matter, I actually have invented an entirely new programming language mostly with just Racket macros. It's an incredibly powerful tool for experimentation and research as well, something the PLT folks have been using to their advantage for some time now.

I'm currently in the guts of several different candidates for building a basic Lisp on RISC OS, and it has really struck home just how much a truly first-class metaprogramming system lowers the barriers to language development.

Building everything from scratch takes time and, often, a lot of repeat effort on stuff that's already been written a dozen times before. Macro systems, especially weapons-grade, full-blown language-development kits like Racket's, are pretty hard to argue with on that front.


Dick Gabriel said in a podcast that late-'70s language research was mostly Lisp with macros. And when the PhD student was close to the finish line, he'd write a concrete syntax so it looked like a real-er thing.


> By saying metaprogramming is an anti-pattern, we're saying that we lowly programmers ought not to put on our language-designer hats and should instead let the professionals take care of it.

I don't agree.

For example, most of us here would probably classify manual memory management as an anti-pattern. We're capable of doing it; we do it when we must. But we recognize that, most of the time, it's a bad idea.

Perhaps vinceguidry is saying something similar about metaprogramming.


most of us here would probably classify manual memory management as an anti-pattern

Er, no?


An anti-pattern is not a synonym for "bad code." Too many in this thread are making that mistake. A software pattern is a tool used intentionally to help better structure your classes. It's usually well-studied by people smarter than you are, with the edge cases accounted for and documented.

An anti-pattern is a solution you find yourself using unintentionally that you notice is causing you pain later on down the line. The first ten times or so you just fix the pain; you don't start noticing the pattern until instance 20 or so. If you were to study that anti-pattern, you could figure out why it's causing you pain, and maybe even solve the pain points and turn it into a real software pattern.

If I'm doing exploratory programming that I don't expect to have to support or maintain, sure, I'll use all the power tools I can get my hands on to do cool shit with. But when I'm working on systems I don't care personally about, I want as little friction as possible between problem and solution. I don't want to have to look at a piece of code and wonder, "WTH is that doing again?" I notice myself having that reaction far more when there's metaprogramming involved than when there's not.

It's not that there's no way to use metaprogramming effectively or maintainably. It's just that I have to think that much more carefully about proper separation of concerns, how the messages are passing, and what happens when there are errors; stuff I don't have to think about when I'm doing it the boring way. When something goes wrong in boring Ruby, the backtrace is all I need to figure out what's going wrong.

I've had the experience far too many times of finally realizing, a dozen or so refactorings into a piece of metaprogramming I was too proud of to rip out, that there was a bit of state I could have passed in a perfectly normal fashion that would have completely eliminated the need to, say, do dynamic class generation. Had I not been so hung up on my own brilliance, I could have saved myself hours of effort.

Likewise, if you find yourself reaching for manual memory management (that is, rolling your own management structures rather than using one of the countless well-tested constructs available) without really understanding why, then you're probably overlooking some library that would do exactly what you want and handle all the myriad edge cases, had you given it some thought. I would absolutely consider manual memory management an anti-pattern unless you're working in embedded environments or the demoscene, and sometimes even then. C is way too mature for you to be reinventing things.

Programmers are some of the most masochistic professionals I've ever met. They don't want to do things the easy way; easy seems to equate to stupid for them. You're not Linus Torvalds; the stakes just aren't that high. 0.8 kB of superfluous code loaded into memory alongside the 0.2 kB of the library you do need isn't going to kill you; it will only keep you sane. Don't make unnecessary work for yourself. Finish up early and go home and spend time with your kids.


I think you just picked a bad example, though.

To be specific, I find it an antipattern to think that memory or other resources can be pushed behind some lexical tricks. You still had better be able to account for your resources, in all places. Having first class control over them is often a matter of when you will need it, not if.


This is a classic case of the overgeneralization we programmers tend to do. Depending on the subfield we work in (and sometimes the concrete problem), manual memory management can range from totally useless to always necessary.


In other words, not an anti-pattern.


And again, you're just doing it wrong. DSLs are perfect for exploratory programming, since backtracking and refactoring are so much easier.


aka, "Try not to worry your pretty little head with it if you can avoid it". This post has a high condescension density.


It's funny: JavaScript had no standard library, and the prototype semantics meant you could override anything at the `evaluated` level. It was very convenient to be able to enhance base objects. I liked Lisp/Scheme macros right off the bat for their structural cleanness (as opposed to C-like macros, of course), but seen in the JS way, it makes you realize how much you can now override/extend in that system.


> By saying metaprogramming is an anti-pattern, we're saying that we lowly programmers ought not to put on our language-designer hats and should instead let the professionals take care of it.

That's not what I'm saying at all. What I'm saying is: use it like the tool that it is, and clean it up when you're done with it. Responsible use, just like you should exercise with Scheme.


Hmm, the tone of your post led me to think that there was no responsible use for metaprogramming, since you actively work to remove it from your codebase. I don't think it's just a productivity hack; it's a long-term aid to maintenance and comprehension.


In my experience, Lisp-style macros are much less error-prone than the Ruby/Python-style meta-programming. I think it is probably because macros transform the source code at compile-time while Python's meta-programming happens all at run-time. Errors are caught earlier and it is easier to build a mental model of how a macro accomplishes what it does (you can just expand and read the generated code).
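
For example, a quick sketch of that workflow in Common Lisp (the exact expansion text varies by implementation):

    ;; Ask the system what a macro call turns into, without running anything.
    (macroexpand-1 '(when keep (print "keeping")))
    ;; => (IF KEEP (PROGN (PRINT "keeping")) NIL)   ; or similar, depending on the implementation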

Debugging or understanding Python code that uses meta-programming is usually no fun at all, but I have had far fewer problems understanding Common Lisp code that uses macros. It doesn't help that the meta-programming facilities of Python aren't very well designed (more like an accident of the implementation than a real design). If you're interested in a beautifully-designed approach to meta-programming (reflection and reification), 'The Art of the Metaobject Protocol' (Kiczales et al.) is absolutely wonderful. I don't know enough about Ruby's meta-programming to say if it's better designed than Python's.

Of course both macros and meta-programming can be abused, like just about any language feature can (there's a series of mails that Guy Steele wrote to that effect as a reply to 'Macros make me mad': 'Classes make me mad', 'Procedures make me mad' some years back on the ll1-discuss mailing list).


(there's a series of mails that Guy Steele wrote to that effect as a reply to 'Macros make me mad': 'Classes make me mad', 'Procedures make me mad' some years back on the ll1-discuss mailing list).

I happen to have them bookmarked!

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...


Brilliant. Guy Steele is very clever. Also, I like this list of macro writing rules in the last thread:

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...


Macros do not necessarily transform code at compile time. A late-binding interpretation is possible. See for example http://programmers.stackexchange.com/a/274200/33157.


Even in an interpreter, they're expanded at "Load Time" as opposed to runtime.


I am not sure, but it seems that this may be implementation-dependent.


I suspect you might be viewing the ideas of macros and meta-programming at slightly the wrong abstraction level. Macros are not merely about making code more legible or terse; using them only for that is, I would agree, a pure anti-pattern. Depending on the nature of your project and team, creating a series of DSL macros is almost certainly more work than it's worth.

However, there is one usage of macros I will never give up, and which has turned me off from any languages which cannot do meta-programming: macros as pattern definition. The "canonical" example of this would be resource management. Languages like C# or Java have the try ... finally syntax, and while this is great to ensure you don't accidentally leave file handles open, it only takes forgetting once to have resource issues. Macros provide the easiest to implement answer: just write a little "with-file" macro wrapper for the proper calls, and you don't need to think about the problem of open file handles again.
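
A rough sketch of that idea in Common Lisp (the name with-file here is hypothetical; Common Lisp itself already ships a more featureful with-open-file):

    ;; Hypothetical sketch: capture the open/cleanup protocol once, in one place.
    (defmacro with-file ((var path) &body body)
      `(let ((,var (open ,path)))
         (unwind-protect
              (progn ,@body)
           (close ,var))))

    ;; Usage sketch: the file is closed even if BODY exits non-locally.
    ;; (with-file (in "foo.txt") (read-line in))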

In fact, these sorts of "mini-macros" are best for addressing the problems of concern separation, legibility, maintainability, and refactoring: the "with-file" macro is a prime example of separating the concern of managing file handles from the code which uses the file. You either won't want to refactor the way files are handled, or you'll want to do it everywhere at once -- a well-defined macro eases this effort. Finally, the questions of legibility and maintenance are best "shunted" to properly documenting the macro as you would any other function in your project (e.g. use of ruby-doc or other IDE annotation features).


> Depending on the nature of your project and team, creating a series of DSL macros is almost certainly more work than it's worth.

Initially, maybe. But it reduces the maintenance cost significantly. You'll write more code in the first place, yes, but it will be much simpler code than a denser ad hoc solution, it will serve as better documentation of the problem you've been solving, and it will be easier to read, debug, and change in the future.


I don't understand your example.

Why is it necessary for with-file to be a macro? It seems like any language with lambdas could write it as a higher-order function that asked for a consumer.

How does having a with-file macro fix the "what if you forget it?" issue? It seems like you can forget with-file as easily as you can forget `using`.


The with-x style macros can indeed be replaced by functions that take a closure. E.g. in Scheme you have the function (call-with-input-file string proc).

The macro is slightly more concise. Contrast

  (with-input-file (x "x") (print (read x))) ; Common Lisp
with

  (call-with-input-file "x" (lambda (x) (print (read x)))) ; Scheme


No runtime consing of closures either! In fact, in Common Lisp there is a style of writing a call-with-foo function that takes a closure and then wrapping it with a with-foo macro. Since the macro abstracts away the detail of the call-with function, it is much more flexible: for example, the macro could expand stuff known at compile time and do some compile-time computation too, instead of always going through the closure-consing function.
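
A minimal sketch of that convention (setup-foo and teardown-foo are hypothetical placeholders): the function does the actual work around a closure, and the macro is just a thin layer of syntax over it:

    ;; CALL-WITH-FOO does the real setup/teardown work around a closure...
    (defun call-with-foo (thunk)
      (setup-foo)                       ; hypothetical setup
      (unwind-protect (funcall thunk)
        (teardown-foo)))                ; hypothetical teardown

    ;; ...and WITH-FOO is only syntax that builds the closure for the caller.
    (defmacro with-foo (&body body)
      `(call-with-foo (lambda () ,@body)))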


I feel pg314 did an adequate job answering your first question, although I would have explained it by saying that any such meta-programming macro must be, by definition, a higher-order function. The only differences are giving the consumer function inline syntax and partial execution at compile time: both potent benefits.

With-file might not be the very best example of "what if you forget", because it's so very simple. A more complex example -- where "forgetting" might be more than just a single function call -- would be handling semaphores and locks for threading. A well-written "with-lock" macro could ensure that these resources are obtained and released in a determined order (i.e. compute some sort of hierarchy for locks and enforce this when obtaining multiple locks).
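
As a sketch of that idea (hypothetical names throughout; lock-id, acquire-lock, and release-lock stand in for whatever your threading library provides, e.g. bordeaux-threads):

    ;; Hypothetical sketch: always acquire locks in a canonical order, release in reverse.
    ;; A production version would also handle failures partway through acquisition.
    (defmacro with-locks ((&rest locks) &body body)
      (let ((sorted (gensym "SORTED")))
        `(let ((,sorted (sort (list ,@locks) #'< :key #'lock-id)))
           (mapc #'acquire-lock ,sorted)
           (unwind-protect (progn ,@body)
             (mapc #'release-lock (reverse ,sorted))))))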


Just as a side point, Java 7 and above has a try block with special handling of resources [1]. That gives you most of what you want, except that a lot of older Java code isn't using it yet.

(Then again, the same would be true of a macro if it were fairly new and there's code that hasn't been converted to use it yet.)

[1] https://docs.oracle.com/javase/tutorial/essential/exceptions...


A good language should have the generic features necessary to do that kind of resource management without needing macros. See e.g. https://github.com/jsuereth/scala-arm - and note that it's all just ordinary code.


Macros are not needed. You only need the UNWIND-PROTECT special operator, which is the generic feature you are looking for.
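
For reference, a minimal example of UNWIND-PROTECT on its own, with no macro in sight (the cleanup form runs whether the protected form returns normally or unwinds):

    (let ((stream (open "foo.txt")))
      (unwind-protect
           (read-line stream)    ; protected form
        (close stream)))         ; always runs, even on a non-local exit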


In a persistent, image-based system where you can edit and debug programs in the very process in which they're running, metaprogramming is a necessity, not an anti-pattern. You couldn't implement Pharo's class browser, debugger, inspector, or most of its other tools without metaprogramming. Because of metaprogramming, these tools (even the debugger, thanks to context objects) can be implemented without the need for special access to the VM or the use of FFI.

I don't want to sound too harsh, but Rubyists really seem to have this annoying tendency to speak on certain subjects like OOP, metaprogramming, or dynamic languages generally as if they're authorities on them just by virtue of being Rubyists, when in fact Ruby and Python both are actually rather poor representatives of these concepts, at least compared with Smalltalk or (even more so) Lisp. It would serve us better in these discussions to use the languages that best represent a particular paradigm or concept (like metaprogramming) when considering its merits, just as we would likely use Haskell or Idris (or something more esoteric) when considering the merits of static typing or pure FP.


TBQH I think most instances of Ruby metaprogramming are bad, increasing complexity dramatically without real benefit. Ruby "rediscovered" metaprogramming without learning the lessons of the Lisp community.

Macros are great, though. They make some things very simple that would otherwise be unreasonably hard. They just need to be handled with care.


That's cool, but this isn't about Ruby. Racket is a different language, and the lessons you've learned from Ruby don't apply there.


Meta-programming is meta-programming; whether your language makes it easier to use or not doesn't change what it is. Playing with the syntax of a language forces you to have to maintain it yourself and creates a burden on anyone else who has to read your code. Same whether it's Ruby, C, or a Lisp dialect.

Our minds can only handle so much abstraction at once, and meta-programming introduces a completely new level often without warning. Even going back and reading your own code can be a chore unless you take the time to carefully design your language extensions and set up workflows for debugging through them. The design / workflows need to be re-understood every time you touch the code.

I am aware of Lisp supermen that no longer code like mere mortals, but rather in a dizzying edifice of macro expansions. It impresses me only as a technique for acquiring job security, and is only mildly more impressive than getting an employer to agree to Lisp in the first place.


Meta-programming is meta-programming; whether your language makes it easier to use or not doesn't change what it is.

Not really. What Rubyists call "meta-programming" is not what Lispers call "meta-programming." Things like mutating already-existing classes and asking an object what methods it has do not belong in the same category as syntactic abstraction (except maybe if you're programming in Kernel). That they have both been labeled "meta-programming" by their respective practitioners is just a historical accident.


> Playing with the syntax of a language forces you to have to maintain it yourself and creates a burden on anyone else who has to read your code.

You're doing it wrong. eDSLs are designed to simplify the code and make it easier to read and maintain. If that is not your experience, then you did not design your eDSLs properly.

> Our minds can only handle so much abstraction at once,

Our minds can operate with a very limited set of entities at once. Abstractions are absolutely essential for reducing complexity. Of course you must stay on a single level of abstraction, and you cannot really do that without metaprogramming. You can design a nice, clean DSL which deals only with the problem-domain level of abstraction and does not expose anything irrelevant from your host language.

Without this, all your abstractions are leaky, and you'll see the host language's peculiarities through your problem domain, making it harder to grasp all at once.


A DSL is something you have to design and maintain. The simplifications do not come free. Things leak through DSLs all the time, it's not a magic bullet. I've made and ripped out a bunch of DSLs, now it's a tool I only reach for if there's a clear need for it.


Designing DSLs is cheap and easy if you have proper metaprogramming. It is as simple as picking a number of language features from your toolbox and mixing them together in a simple, declarative way.

Maintaining DSLs is also much, much easier than maintaining essentially the same thing, but mutilated and spread across thousands of lines of application code, mixed with a number of lower and higher levels of abstraction.

> now it's a tool I only reach for if there's a clear need for it.

There is almost always a need for a DSL. Pretty much any problem is better solved with a DSL, as long as it's something more complex than a "Hello, world".


I posted a response to a similar comment somewhere else a while back, which I'm copy/pasting here:

It could just be my experience, but from what I’ve seen, a lot of Lisp coders treat advanced macro programming like magic. It’s one of those things you “can” do, but rarely “should,” precisely because it’s so easy to misuse and potentially difficult to understand. There are some popular libraries (like Iterate) that use macros to do far out stuff, but most people write macros themselves pretty infrequently. Even Paul Graham, in “On Lisp” suggests writing macros, “when nothing else will do,” like when conditional evaluation is required.

On the other hand, my experience with Ruby was that everybody wanted to do meta-programming everywhere because it was the hot new thing and made Ruby like Lisp, and made you look smart and clever. The more a person could abuse meta-programming to twist the language, the better.

For example, here’s a Ruby tutorial explaining how to use method_missing to auto-generate methods that match a regular expression, and at the end: “we’ve created an extremely expressive and beautiful API that is DRY and easy to maintain.”


I believe this is the article (or part two of it): http://www.sitepoint.com/ruby-metaprogramming-part-ii/

Not only does Ruby do metaprogramming poorly, but Rubyists have managed to give it a bad name by too often metaprogramming for the sake of metaprogramming, like in that article.


Oops! Thank you, that's the one!


> Meta-programming is great to have; having access to a credit card is way better than not having one, if you have the discipline to use it properly. It's essentially why I stick with Ruby rather than move to a statically-typed language: reflection and duck typing are tremendously useful when you don't really know what it is you're building yet but need to execute quickly. You can move at the speed of thought.

I used to have this view (though I used Python rather than Ruby). But since discovering Scala I've found I can move just as quickly, without having to sacrifice safety. When I really need extreme dynamism (which is rarely), typeclasses are a bit more cumbersome than just doing it in a dynamic language, but they can do everything I ever did in Python and while maintaining full safety.


You can't say this about macros without being "afraid" of them, though. I tend to agree but admittedly have not worked much in any language where macros are widely used. I feel like, if all they are is "just functions" then why don't we all just keep calling functions like we've been doing?


> I feel like, if all they are is "just functions" then why don't we all just keep calling functions like we've been doing?

The major problem with function calls in most languages (i.e. not Haskell) is the call-by-value evaluation order.

For example, we could try writing an `unless` function (an "if" which only has an "else" branch):

    (defun unless (condition branch)
      (if condition
          nil
          branch))

    ;; Delete "foo.txt", unless we've been told to keep it
    (unless keep
            (delete-file "foo.txt"))
Clearly this won't work for call-by-value: the call to `delete-file` will be executed before our `if` gets a chance to decide anything.

There are a few ways around this:

- Call-by-name/need: This is what Haskell does, but it turns execution order inside-out. This re-orders the code's side-effects, which is why Haskell has to enforce the correct ordering using datatypes like IO.

- Ban such uses. Just use "if". If that's not good enough, propose a language extension and wait for it to appear in compilers.

- Use thunks. To delay something, wrap it in a function. To force it, call the function. This works, but splits our values into two worlds, causing a combinatorial explosion of decisions to be made (should condition be a thunk? Should we return a thunk?):

    (defun unless (condition branch)
      (if condition
          nil
          (branch)))

    ;; Works
    (unless keep
            (lambda () (delete-file "foo.txt")))
- Implement a macro system. This acts on the syntax, before it's been evaluated into a value. This splits "callable things" into two worlds, functions and macros, rather than splitting all values. This actually isn't an issue most of the time, since functions' APIs vary so much already (arity, return values, types, etc.).

    (defmacro unless (condition branch)
      `(if ,condition
           nil
           ,branch))

    ;; Works
    (unless keep
            (delete-file "foo.txt"))
- Do everything with syntax rewriting! That's what languages like Pure and Maude do. In effect, everything's a macro. It takes discipline to ensure execution follows a straightforward path though (which usually boils down to "write everything as if they were functions")


The major problem with function calls in most languages (i.e. not Haskell) is the call-by-value evaluation order.

Another problem that keeps us from replacing macros with functions is that function arguments can't be the binding position that introduces a new variable. To use an old classic example,

    (let ((x 3)
          (y 4))
      (+ x y))
could be translated to ((lambda (x y) (+ x y)) 3 4), but no function could introduce new names `x' and `y' into scope. Scope is a compile time notion, and a function happens at run time.


Macros have absolutely nothing to do with functions.


You say that you stay with Ruby because of metaprogramming. Yet I suspect that it is in statically typed languages where we'll see metaprogramming that is both powerful and principled (legible, understandable, won't eat your laundry). C++'s templates are powerful, though ugly; the C preprocessor seems to be regarded as an inferior form of metaprogramming, to be avoided in newer language designs. On the other hand, there are statically dependently typed languages that collapse the type and value level, letting you make things like type functions. There is also multi-stage programming -- why choose between evaluating code at runtime or compile time when you can choose to evaluate it at compile time, runtime, or any arbitrary stage in between? (Yeah, this one is pretty wonky and I don't know if I've fully gotten the utility yet. But it might actually make for a new kind of dimension of abstraction -- letting you build up multiple layers of interpretation, and inspecting the different layers when you want to investigate what they do. I guess. In any case the motivation is to be able to write high-level code that ultimately gets "compiled" down to efficient, specialized code.)

Or maybe dynamic Lisp languages will stay the kings of metaprogramming. But it's still in the same spirit -- macros, which are static metaprogramming constructs.


It's not an "antipattern", not in Lisp at least. It is a way to improve maintainability dramatically. Probably, the only sane way to do so.

But, of course, you have to do it the right way. eDSLs must be designed as chains of trivial, flat transforms, so you can easily understand each stage and inspect any state of transformation at any moment. Most of the Racket users, unfortunately, lack this discipline, and in general are practicing a suboptimal approach to metaprogramming.


This is a great article about Racket's advanced macro systems.

Greg has done great work, from Cakewalk days right up to the present. He is generous, kind, and a wonderful teacher. Many of us working with Racket are using his fantastic Emacs major mode for Racket:

https://github.com/greghendershott/racket-mode


Indeed, it's a great introduction to Lisp-style macros in general. The author goes to great lengths explaining every concept that might be new to people unfamiliar with Lisp (like the use of car and cdr functions and their variations cadr, caddr, and so on).
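
For anyone who hasn't seen those accessors before, they compose in the obvious way:

    (car   '(1 2 3))   ; => 1        first element
    (cdr   '(1 2 3))   ; => (2 3)    rest of the list
    (cadr  '(1 2 3))   ; => 2        car of the cdr, i.e. the second element
    (caddr '(1 2 3))   ; => 3        car of the cdr of the cdr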

Side note: I really like how the syntax-highlighted code includes links to the docs. You can quickly explore the definition and documentation of any function you see. It would be great if the code snippets were themselves editable and executable, enabling an even more exploratory approach to learning :)


> Side note: I really like how the syntax-highlighted code includes links to the docs. You can quickly explore the definition and documentation of any function you see.

This is a built-in feature of Scribble (a documentation language created in Racket), so this is ingrained in the community. It's one of many things made possible by Racket's stellar macro facilities.


Sure would be nice to have either the complete thing in one file or the "Next" button at the end of each section instead of only at the beginning.



What CMS/platform does the website use? Anyone?


Found it. http://docs.racket-lang.org/scribble/

Edit: Anyone know how to use Scribble to render math, preferably with MathJax?




