I don't really know what you mean by "optimization-defeating" here or in your other comment; unless the language offers some heavy-duty compile-time partial evaluation, certain indirections carry fundamental costs. It's not just the result of the optimizer deciding not to inline or whatever: a dynamic jump costs more than a static jump.
I’m also not sure about the `benchmark{ ... }` syntax, but:
m := instance.Method; benchmark{ m() }
In the general case, this will require allocating a closure and making a dynamic call.
m := type.Method; benchmark{ m(instance) }
This will require making a dynamic call.
benchmark{ instance.Method() }
This is a normal static call. It will be fast, even if not inlined. (Inlining today is often more critical as a supporting optimization for devirtualization than avoiding the minimal cost of a static call.)
---
reflect.ValueOf(it).Field(0).String()
`String()` is a bad case for a comparison here because, unlike the other typed getters, it still works (rather than panicking) and incurs a higher cost even if the field isn't a string.
60x sounds bad, but I really cannot stress enough: As soon as you're not doing a simple fixed-offset memory fetch from a base pointer, you might be paying 10x anyway no matter how good your optimizer is. That's just the cost of unpredictable memory access, even before we start talking about downstream impact on the code generator.
If you only want to support direct fields, and not really the equivalent set of named fields you could type in a Go program, you can do some tricks with unsafe.Offsetof which will be faster, and probably cacheable. But it's also called unsafe for a reason.
`benchmark {...}` is just some shorthand because the actual code is boilerplate-filled. It's not valid Go.
Optimization-defeating: Go decides when to move things to the heap vs. keep them on the stack, and it does somewhat aggressive in-function optimizations and inlining that it does not do cross-function. One common, very simple, and reasonably effective way to prevent some of that is to erase type information by passing a value through an interface{}. Even if you immediately unpack it and reuse the reference, Go has no JIT, so the erasure sticks and the trick gets the job done alright. There are some other things that don't survive cross-package analysis, last I looked, so using multiple packages can also help make your benchmarks more realistic. It takes a lot more effort to be truly realistic in even one benchmark, much less the nine various flavors that I tried.
And the String() piece is there because I had the field be a string, why not: variable-sized data is extremely common, so it's realistic enough, and using the correctly-typed reflection funcs usually avoids some of the reflection, boxing, and allocation costs that I didn't feel like trying to address in more detail.
None of which I wrote out explicitly, because there are tons of caveats regardless of the care I tried to take; I just mentioned that they existed and moved on, since I couldn't spend more time on it at the time.