I find that having the functions min and max share names with the variables min and max increases cognitive load, which makes the expression harder to think about.
Next challenge: teach the optimizer to make that almost as fast as the min/max way ;-)
(You can’t reduce it to the min/max call because it also works if you accidentally pass a lower bound that’s larger than the upper bound. Worst-case, the above takes 3 comparisons, unless at least two of the inputs are constants)
I'm glad things like this are being worked on. I have been writing a set of implementations of the nBody benchmark in JavaScript using various forms of abstraction. In an ideal world they should all run at the same speed, since they perform the same fundamental task and produce the same result; they just represent different scales of optimization effort.
It's interesting to see the difference between vectors as arrays vs. objects, and what happens with immutable versions. The trickiest form to optimize is a micro vector library that uses closures and array map():
var vop = op => (a, b) => a.map((v, i) => op(v, b[i])); // lift a scalar op to an element-wise vector op
var vdiff = vop((a, b) => a - b);
var vequals = (a, b) => vdiff(a, b).reduce((c, d) => c + Math.abs(d), 0) === 0; // true if all components match
var vadd = vop((a, b) => a + b);
var vdot = (a, b) => a.reduce((ac, av, i) => ac + av * b[i], 0);
var vlength = a => Math.sqrt(vdot(a, a));
var vscale = (a, b) => a.map(v => v * b);
var vdistance = (a, b) => vlength(vdiff(a, b));
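For illustration, a quick usage sketch of the library above (made-up values, not part of the original benchmark):

var a = [1, 2, 3];
var b = [4, 6, 8];
vadd(a, b);            // [5, 8, 11]
vdiff(b, a);           // [3, 4, 5]
vdistance(a, b);       // 7.07... = Math.sqrt(50)
vequals(a, [1, 2, 3]); // true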
Currently on my lowly Atom laptop and Firefox, the version using mutable objects is ten times faster than the mapping immutable-array version. I live in hope that one day there will be an optimizer that turns the latter into the former.
It could be trivial to implement an optimisation which does this for that exact code. But what are you going to do? Hand-code an optimisation for every similar thing people could write? I implemented a general solution.
So it also works through metaprogramming:
[1, 2, 3].send(:sort).send(:[], 1)
Through user-defined sorting order:
[1, 2, 3].sort { |a, b| b <=> a }[1]
When nested:
[[1, 2].sort[1], 3].sort[0]
And so on.
Note that it also needs to be transparent to debuggers and profilers, and it needs to handle multiple method redefinitions (for example, what happens if someone redefines the sorting order for integers?).
It's not a pattern-matching optimization - it's partial evaluation enabled by a new kind of polymorphic inline cache.
> But what are you going to do? Hand-code an optimisation for every similar thing people could write?
While I appreciate that you solved the general problem, I wonder if there are legs here. Specifically, could one mine GitHub to find automatically common patterns that one could write specific optimizers for, or at minimum, leverage that to learn what semi-general cases are worth optimizing? To my knowledge optimizing compilers already do have effectively handlers for common operations, but I don’t know if anyone has leveraged “big data” to help guide this.
IIRC, various tweaks and optimizations in Java were guided by Sun analyzing their own code base. GitHub is just so much bigger, and polyglot.
You know what's great about that? The order of the arguments doesn't matter. So all the debate about "should it be num, min, max or min, num, max"- your solution does not care. Put them in any order you like!
You've redefined the problem from clamping a given value into picking the middle value from 3. This is a lovely way to re-interpret it.
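In JavaScript, that reading is a one-liner (a sketch only; it allocates an array per call and is slower than the min/max form):

// clamp as "pick the middle of three" - argument order is irrelevant
var clamp = (a, b, c) => [a, b, c].sort((x, y) => x - y)[1];
clamp(5, 0, 10); // 5
clamp(0, 10, 5); // 5, same in any order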
Well, the order of the arguments does matter. Sorting three values takes 2.67 comparisons on average (3 in the worst case), whereas clamping a value between two other values requires exactly 2. There are plenty of contexts where cleverly avoiding a problem by doing 33% more work isn't viewed as desirable.
True, but I think there are more situations where minimizing the chance of programmer error, now or in the future, is more important than a micro-optimization that will never matter. All depends on context.
Does NaN have an "order" in the set of reals or integers or whatever? I would have no idea what to expect from `min(NaN, x)` or max same. But is it specified by an IEEE standard or something?
Both min and max should return NaN if any of their parameters is NaN. Sorting can be defined to place the NaNs at the head or tail, but that's largely irrelevant in this case, as the substitution simply won't be permitted by any compiler.
NaN is part of IEEE 754, but of course it's not a 'real' number (integer types don't have NaNs).
Edit: you can consider NaN (and to a degree both infinities) as an exception: once it occurs, it has to be propagated. Any operation involving NaN returns NaN, and any equality or ordering comparison involving NaN returns false - including "if (NaN == NaN)", which is why boolean isNaN(double d) is effectively "return d != d;".
The reference to IEEE 754 is made later on, mostly to answer the question posted. I meant the regular functions in C-like languages: Math.min/max in Java/JavaScript, fmin/fmax in C/C++. (Strictly speaking, Java's and JavaScript's propagate the NaN, while C's fmin/fmax return the non-NaN argument instead.)
No, it breaks the ordering requirements: NaN compares neither greater than nor less than any number (every ordered comparison involving NaN is false).
You can say that a call to max should return NaN if any argument is NaN, but you can't say the same about sorting. (For one thing... sorting an array doesn't return a scalar value.) Sorting is done with comparisons, and what happens if a NaN gets into the list of values will depend on which specific comparisons happen to be done.
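A quick JavaScript illustration of the difference:

Math.min(NaN, 5); // NaN - min and max propagate NaN
Math.max(NaN, 5); // NaN
NaN < 5;          // false - every ordered comparison with NaN is false
NaN > 5;          // false, which is why a comparison sort can put NaN anywhere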
The only reason I know what to expect is because Suckerpinch on YouTube made a video in which he managed to define a logic system using NaN and +∞, and does so by abusing min and max, among other expressions:
Yes, but in that context, ∞ is a number. We often interpret "NaN" to mean "infinity," but it only means "not a number." Maybe I'm being pedantic, but if we want a token representing infinity as a number, it ought not be called "not a number."
IEEE754 has both infinity and NaN. They are different. NaN is always the result of an invalid operation, such as trying to take the square root of a negative number. Infinity is for when the result would be valid, but is too large in magnitude to represent. There is both positive and negative infinity.
The functions minNum and maxNum ([IEEE 754-2008, 5.3.1, p19]) take two arguments and return the min and max, respectively. They have the special, distinguished property that “if exactly one argument is NaN, they return the other. If both are NaN they return NaN.”
Edit: As of 2019, the formerly required minNum, maxNum, minNumMag, and maxNumMag in IEEE 754-2008 are now deleted due to their non-associativity. [https://en.wikipedia.org/wiki/IEEE_754#2019]
Having a NaN at that point feels like a bug anyway; the solution is probably to check the arguments and throw an exception if NaN is provided (or use an input type that doesn't allow invalid values).
That would depend on what you do with the NaNs. For instance I have been using them extensively in time series data representation to denote a specific entry has no value - think of Saturday and stock/forex markets.
Null does not pertain to primitive types in Java or in C - it'd be zero when applied to a 'double'. (Note: you want double[] as backing storage in Java, and you absolutely do not want indirections.) Aside from that, I have quite a good idea of how NaN is represented internally and what it does; e.g. you can have several different NaNs with different bitwise representations. Barring that, NaNs are pretty decent for representing lack of value, as all operations with them result in a NaN. In the end, NaN is just a composition of bits that the hardware can optimize for.
I think the reason it's weird is that we might intuitively think of the "enforce a lower bound" function as taking two named arguments (lowerBound and inputValue) and the order of those two arguments mattering.
But of course, it turns out that the order of the arguments doesn't matter: applying a lowerBound of 5 to an inputValue of 100 turns out to be the exact same thing as applying a lowerBound of 100 to an inputValue of 5.
We know that the order of arguments doesn't matter for the Math.max function, so I think that's where the moment of incredulity comes from.
I think your "at most" language is pretty expressive. You could do that as an alias for `min` and `max`
I think `at_most(at_least(num, lower_bound), upper_bound)` is much easier to understand instantly than `min(max(...))`.
I'm tempted to make these aliases myself in some of my development actually. I find a pretty big conceptual difference between "I want to find the minimum point in this data", and "I want to restrict the range of this number" that giving them different names will probably help the readability of my code.
(Of course, for `min(max(...))` I usually write a `clamp()` function to hide that for me, but sometimes I want to clamp in only one direction)
> (Of course, for `min(max(...))` I usually write a `clamp()` function to hide that for me, but sometimes I want to clamp in only one direction)
You could make clamp work in only one direction too. clamp(number, None, upper_bound) or the idiomatic equivalent in your language of choice seems pretty readable.
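A minimal JavaScript sketch of that idea (the null-means-unbounded convention here is mine):

function clamp(number, lowerBound, upperBound) {
  if (lowerBound != null) number = Math.max(number, lowerBound);
  if (upperBound != null) number = Math.min(number, upperBound);
  return number;
}
clamp(42, null, 10); // 10 - clamp in one direction only
clamp(3, 5, null);   // 5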
There was a young coder whose hacks
His manager often claimed lacked
The requisite clarity
For to clamp vars would he:
Math.min(Math.max(number, min), max);
@jaffathecake had a problem of truncation
and posted to Twitter his calculation.
The gist of his attack
was min( max( num, min ), max)
yet refused to add any annotation.
I use ceil/floor (and make people use them whenever I can) if something is going to happen when a value hits the ceiling or drops to the floor, and avoid them if it is just for clamping.
// "Sleep sort" clamp: each value sleeps for its own number of seconds,
// so the middle value is the second to arrive on the channel.
func helper(a float64, c chan float64) {
    time.Sleep(time.Duration(a) * time.Second)
    c <- a
}
func clamp(a float64, min float64, max float64) float64 {
    c := make(chan float64, 3)
    go helper(a, c)
    go helper(min, c)
    go helper(max, c)
    _, out, _ := <-c, <-c, <-c // keep only the second arrival
    return out
}
It gets a bit confusing when the order of arguments is different depending on the library. For instance, with std it's std::clamp(val, min, max), but with Qt it's qBound(min, val, max) (for some reason I think the order of arguments in qBound is more logical).
In Haskell, functions often take their arguments in the order that makes the most sense to partially apply. In this case, that would probably be clamp(min, max, val): supplying the first two arguments results in a reusable clamping function.
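The same idea works in JavaScript via currying (a sketch):

// bounds first, value last: supplying the bounds yields a reusable clamper
const clamp = (min, max) => val => Math.min(Math.max(val, min), max);
const toPercent = clamp(0, 100);
[-5, 50, 120].map(toPercent); // [0, 50, 100]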
The expression Math.min(Math.max(num, min), max) is symmetric in min and num, so it doesn't matter whether you interchange min and num (or, for that matter, max and num, though that is harder to see from this way of defining the 'clamp' function).
It depends. (val, min, max) operates on a first argument, which is logical as well. (min, max, val) allows range constants to be more visible if <val> is a lengthy expression. In more expressive languages like Objective-C this matters less, as you can always specify all arguments explicitly:
Which returns NSIntegerMax and sets the error variable if the range appears to be empty. The chance that max is NSIntegerMax is low, but if your data allows that, you can always put an additional shortcut before clamping.
if (min > max) {
error = [NSError errorWithDescription:@"min > max occurred"];
return NO; // or equivalent
} else {
x = ...
}
// use x
> Modern C# has Math.Clamp() since .NET Core 2.0; too bad it’s not available in desktop edition of the runtime.
Huh, that’s good to know. It’s also in .net standard 2.1. A shame it wasn’t added to Framework 4.8 (which I guess is what you mean by "desktop edition of the runtime"?)
C++17 does have it... but it doesn't compile optimally. It compiles to something based on comparisons instead of the floating-point-specific operations. I tried this on a number of compiler combinations and didn't see anything that would emit min/max instructions for `std::clamp<double>`.
Depends on how the comparisons are ordered. Some of the orderings I've seen in here do respect NaN by virtue of `x > upper_bound` comparing false if either x or upper_bound are NaN.
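For example, this ordering in JavaScript happens to propagate NaN, because a NaN input fails both comparisons and falls through unchanged:

const clamp = (x, min, max) => x < min ? min : x > max ? max : x;
clamp(NaN, 0, 10); // NaN - both comparisons are false, so x is returned as-is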
MS .NET Framework. Unfortunately, for the last couple of years it has lagged behind .NET Core. Even the two-year-old .NET Core 2.1 is better in some regards than the latest desktop version, 4.8.
.NET Framework is now in maintenance mode because they're transitioning to .NET Core (soon to be called .NET 5). I would call .NET Framework the Legacy edition rather than the Desktop edition.
I’ve been programming for living since 2000, but I don’t think that’s relevant. No reason not to use what’s available in standard libraries of whatever language you’re writing.
For example, C++ on AMD64 is very likely to compile std::clamp<double> into 2 instructions, minsd and maxsd. I’m not so sure about nested ternaries mentioned elsewhere in the comments.
I’m aware some parts of C++ standard library are outright horrible, like iostream and I/O in general. Other parts are questionable, like date & time, locales, and futures.
Meanwhile, other parts of the same standard library are actually OK (most collections, threading, synchronization, atomics, smart pointers, initializer lists). And other parts are awesome, like most of the stuff from <algorithm> header.
Apparently, one of the C++ design goals was to not pay for features which aren't used. Selectively ignoring stuff from the standard library doesn't have many downsides.
Not the person you are replying to, but "a lot of the standard library is horrifying and should never be touched" is pretty standard advice for C++. Mind, I don't think this particular function belongs to that set.
Many performance-critical C++ programmers treat std with suspicion. One thing to keep in mind is that the interface is standard, but the implementations are not, and can foil you on cross-platform development. Another is that you might not need everything that a std container provides, and you can get away with a streamlined data structure that doesn't support those unnecessary operations.
But as a sibling commenter notes... this isn't relevant for min/max.
I'd much rather work on an application that utilized the standard lib than one that brought in dependencies and custom data structures.
If it really is a bottleneck on a hot path then go for it. But not using it because of some ancient anecdotes is going to lead to an unmaintainable mess.
I'd say the only way this would be better than min/max solution is that you can't accidentally flip it when writing the code - but IMO both suck at expressing intent, clamp reads unambiguously.
Result from sort: 3 in 0.9785124980007822s
Allocated 3000001 object(s)
Result from ternary: 3 in 0.3205206830025418s
Allocated 1 object(s)
Result from clamp: 3 in 0.5030354310001712s
Allocated 2 object(s)
Interestingly the ternary comparison is faster than clamp.
I find myself needing this most frequently in making graphics in R. The scales package has squish() with the same behavior:
squish(25, c(5, 10))
=> 10
squish(6, c(5, 10))
=> 6
squish(1, c(5, 10))
=> 5
If you don't provide the limits it defaults to c(0, 1). That's because this function exists to map to a 0-to-1 range for functions that then map the [0, 1] range to a color ramp.
True, nested ternaries can be hard to follow. And the excessive parentheses promote this way of looking at it.
OTOH I think chained ternaries can be simple and easy to understand.
Yes, they are the exact same thing in this case, but getting rid of those nested parens really helps, at least for me.
sarah180's example is a good illustration. I would change the order of the tests because it makes more sense to me to check the min before the max. I'd also make one minor formatting change, because I code in a proportional font and can't line things up in columns:
a < min ? min :
a > max ? max :
a
Maybe people think differently, but to me that is super easy to understand, and much better than the confusing Math.min/max stuff.
I would also wrap the whole thing inside a function:
function clamp( value, min, max ) {
return(
value < min ? min :
value > max ? max :
value
);
}
Now that it's inside a function, you could change the code to use if statements, or Math.min/max, or whatever suits your preferences.
I don't find this a whole lot easier to read to be honest. It seems like doing minification manually, when we have tools to do that for us. An if statement seems a lot clearer, and minifies well https://twitter.com/jaffathecake/status/1296423819238944768
Note that with modern branch predictors such optimizations may not actually be beneficial - ARM dropped per-instruction predication (conditional execution) in the 64-bit version. (Plus, I assume they ran out of space to encode the condition field.)
x86 has instructions that execute conditionally. Most conditionals in higher-level languages get compiled to conditional jumps, but with conditional operations this isn't necessary: the same code path is taken in all cases, and it's the instruction's effect that is conditional.
In the case of cmov, it is either a nop or a mov depending on the state of the condition flags. Using this construct instead of a regular mov guarded by conditional jumps has better performance in some cases.
On my machine gcc is outputting a combination of conditional jumps and conditional movs at all optimization levels
This is 3 branches. The Math method ends up with 4.
This is one of those cases where I think it is much more readable to just write the code than to puzzle over what Math.min(Math.max(min, num), max); might be doing.
if (num < min)
return min;
if (num > max)
return max;
return num;
That's how I'd write it. May not be super terse, but anyone that stumbles on this will know precisely what's happening without needing to take a few seconds to puzzle things out.
What you want is something that compiles to conditional moves. A good compiler should compile your proposed version to the same machine code as the max & min method. If for one reason or another it compiles to branches, it’ll end up being slower.
In Javascript in my browser (Safari) when these methods get called enough to get compiled using the most aggressive stage of the JIT, they end up essentially the same speed. On my laptop either one runs about 145 million times per second on one CPU core.
I wouldn’t be surprised if a C compiler ended up making this function significantly faster, but I haven’t tested it.
To me the danger of this kind of one-liner is that if you use it two times somewhere, someone, at some point, will do the lazy refactor of "let's put it into a function":
int clamp (int value, int min, int max)
And that one-liner inside. And now you have a double-evaluation bomb waiting to go off in your codebase.
Maybe they mean that you might have been unintentionally relying on double evaluation, and then you take it away, and something breaks. Because it worked for the wrong reasons.
This is what I love about Kotlin. The standard library has good coverage of these kinds of problems, and it is done in a simple and clean way. I had the same feeling with Python: the language has you covered for mundane tasks, and you don't have to spend time researching libs that do it for you (or spend your time writing a clamp implementation + tests).
I haven't heard of any instances of Kotlin itself optimizing these things away, but the JVM may be able to do so during its various JIT passes. It's definitely not something you can necessarily rely on, though.
Luckily, these convenience methods are usually implemented as inline extension functions, so the whole thing will get inlined into the calling method, making JIT optimization more likely.
Wait, really? I just had to double-check my one JS project for bugs and either I used to know this gotcha or I got lucky. My sort()-ing needed to handle NaNs carefully so I was already using custom comparators.
I was curious how slow this would be, and here is what JavaScriptCore made of this code:
function clamp(n, min, max) {
return [min, n, max].sort((a, b) => a - b)[1];
}
for (var i = 0; i < 10000000; i++) {
    clamp(Math.random() | 0, Math.random() | 0, Math.random() | 0); // note: | 0 truncates [0, 1) to 0, so this always calls clamp(0, 0, 0)
}
However, I was pretty disappointed when it seemed to be calling sort each time :( Perhaps I profiled it incorrectly? jsc's profiling data shows that it never hit FTL and nothing ever got inlined. The bytecode for DFG and Baseline is identical:
He's saying that, if you have explicitly defined a max and min such that max < min, it is not graceful for the computer to produce a result as though those values were swapped. In other words, garbage in should produce garbage out.
The array implementation sidesteps this by not semantically defining a max and min, instead sorting three arbitrary numbers.
In practice “max” and “min” often aren’t conceptually important for a clamp function. It’s more that you want to keep a value within a range, which is defined by two end points in arbitrary order.
If that is the version of clamp you need, the sort based solution reveals something profound and unexpected: it’s not just the two end points of the range that are equivalent, but all three numbers. Keeping value A between B and C is the same as keeping B between A and C or C between A and B. It’s completely arbitrary which pair you consider to be a range.
I'm not sure what you mean. The behavior of the max() and min() functions is perfectly well defined. The terms "maximum" and "minimum" are well defined. If I were using those terms I would likely consider the case where max < min to be an error, or have some other meaning, like an empty range.
If I wanted it to automatically flip the values to ensure a sensible range is defined, I would probably use "a" and "b" or "endpoint1" and "endpoint2" or something, because "max" has now become "max or min," which is not the same.
It's cute, but if you used this in an inner loop (like a game, simulation, or graphics code where 'clamp' is used often), it'll generate a ton of garbage as well as potential slowdown for no good reason.
This should be completely obvious, but largely irrelevant to the topic at hand. The linked tweet talks about how difficult it is to remember the order of the terms when you implement clamp a certain way; I just wanted to point out that with a different solution, the order surprisingly doesn’t matter at all.
I think the main hurdle is that we need to read this inside-out, which is something we face regularly trying to read nested function calls. I seriously believe pipe-like operators solve this problem cleanly.
In Elixir, which does have the pipe `|>`, this could be:
number = number |> max(lower_bound) |> min(upper_bound)
It is simpler if you use the notation [x]_a^b (i.e. with a subscript a and a superscript b) to mean x clipped to the range a to b, and skip writing +/- infinity if you don't intend clipping on one side.
Then you get a bunch of obvious identities like [x]^b = min(x, b) = [b]^x (x capped by b is the same as the smaller of x and b which is the same as b capped by x), [x]_a^b = [b]_a^x, and [x]_a^b = [[x]_a]^b. Putting these together you get [x]_a^b = [[x]_a]^b = min(max(x, a), b). But honestly it's just easier to stick to the notation most of the time.
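A quick numeric check of those identities in JavaScript (the clip* names are mine):

const clipAbove = (x, b) => Math.min(x, b);              // [x]^b
const clipBelow = (x, a) => Math.max(x, a);              // [x]_a
const clip = (x, a, b) => clipAbove(clipBelow(x, a), b); // [x]_a^b
clipAbove(15, 10) === clipAbove(10, 15);                 // true: [x]^b = [b]^x
clip(15, 0, 10) === Math.min(Math.max(15, 0), 10);       // true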
Ah, I see - for clarity I'd rename them FASTEST_POLL_RATE -> SHORTEST_POLL_PERIOD or store them in Hz rather than seconds, so everything was 1/ in that little snippet. Thanks for clearing up my confusion :)
For casual readers,
`dip` pops the top of the stack, executes a quotation, then pushes the top of the stack back on.
So this pops the max value off the stack, applies the quoted `max` word to the x and min stack values, then pushes the max value back onto the stack and applies the `min` word to the result and the max.
Speaking only to JS: is there any reason to write it any other way, outside of being clever or as a lambda for singular use? I definitely prefer this version. (Assuming any necessary runtime checks are included for a given project.)
There are many reasons to forego readability, especially when writing a library: performance, compatibility, requirements, interpreter/compiler optimizations or even cyclomatic complexity.
In lodash's case it might even be all of the above, although I can't speak for the intentions of the authors since there are no comments to guide readers through the process.
Note GP's link points to what looks like the v3 branch. Check out the latest implementation of clamp, with a few less if statements, and what looks like a NaN check using strict equality if you want your mind blown. https://github.com/lodash/lodash/blob/86a852fe763935bb64c125...
public fun Int.coerceIn(minimumValue: Int, maximumValue: Int): Int {
if (minimumValue > maximumValue) throw IllegalArgumentException("Cannot coerce value to an empty range: maximum $maximumValue is less than minimum $minimumValue.")
if (this < minimumValue) return minimumValue
if (this > maximumValue) return maximumValue
return this
}
An amusing anecdote ruined by the fact that everyone who can relate immediately starts thinking of a better way to rewrite it. Yes brain that might be better but that's not the joke.
Fun seeing that pretty much everyone else finds that idiom confusing too. Half-serious, over breakfast:
(case [(> n min) (< n max)]
[true true] n
[true false] max
[false true] min)
(Side note: Clojure's `>` and `<` are kind of unreadable to begin with. Turning `if (> n min)` into "if n is greater than min" takes some work for me, still, after more than a year.)
Do people actually have trouble with this repeatedly, or just when they first learn about it? I started using this implementation of clamp a few years ago, and while it gave me some trouble when I first implemented it, the pattern is very simple, and I got used to it very quickly.
I use Common Lisp:
> (max min (min max n))
Is the difference my choice of language, my personal mental hardware, the amount of familiarity one has with the pattern, or none of the above?
On the one hand, I've done this enough times that I can usually write it on auto-pilot without thinking about it. But on the other hand, whenever I make a mistake, the resulting bug is really hard to track down, because it's tough to just stare at the line and recognize that something is wrong.
I find the following easier to visualize. If L is the lower bound and U is the upper bound, and if you visualize L, U as two points on the real number line, then:
left_of_U(right_of_L(num)) = right_of_L(left_of_U(num)) = num clamped between L and U.
Here, left_of_U(num) = Math.min(num, U) and right_of_L(num) = Math.max(num, L).
See a stream of random numbers flowing from right to left. See the higher ones being pushed down ⌊ to the upper bound, and the lower ones being pushed up ⌈ to the lower bound, and the middle ones flowing through both guards unchanged.
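In code, that picture is just a map over the stream:

// values below 0 are pushed up, values above 10 pushed down, the rest flow through
[-5, 3, 42, 7].map(n => Math.min(Math.max(n, 0), 10)); // [0, 3, 10, 7]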
I find the following much easier to read. It's certainly easier to maintain and deal with for successive programmers, and it likely compiles or interprets to exactly the same operations in most common languages. Optionally replace t1 and t2 by overwriting num if needed.
Use full if else if you must.
If you're worried about speed, using built in min and max is not a good idea. There are many, many tricks to remove branches for certain datatypes, etc., as you need.
var t1 = num < min ? min : num; // clamp to min
var t2 = max < t1 ? max : t1; // clamp to max
return t2; // num clamped to [min,max]
Slightly offtopic, but a very practical use of a comparable algorithm is to determine to what extent a date range falls between a first and last date (e.g. the first and last date of the year).
e.g. in Excel it would look like: MAX((MIN(end date range; last date) - MAX(start date range; first date) + 1); 0) / (last date - first date + 1). This results in 0 in case of no match, a fraction when only part of the date range falls between the first and last date, and 1 when it matches completely or even exceeds the first and last date.
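The same calculation as a JavaScript sketch, with dates as plain day numbers (the function and parameter names are mine):

// fraction of [firstDate, lastDate] covered by [rangeStart, rangeEnd]
function overlapFraction(rangeStart, rangeEnd, firstDate, lastDate) {
  const overlap = Math.min(rangeEnd, lastDate) - Math.max(rangeStart, firstDate) + 1;
  return Math.max(overlap, 0) / (lastDate - firstDate + 1);
}
overlapFraction(10, 20, 1, 30); // 11/30 - partial overlap (11 of 30 days)
overlapFraction(1, 30, 10, 20); // 1 - range covers the whole period
overlapFraction(40, 50, 1, 30); // 0 - no overlap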
I always used to get confused by the function names “min” and “max” because they return the minimum and maximum, but typically when you use them you are thinking in terms of applying a minimum or maximum bound. Having a dedicated clamp function significantly helps although imo there is not a single correct order for the parameters to go in.
This is the TXR Lisp interactive listener of TXR 242.
Quit with :quit or Ctrl-D on an empty line. Ctrl-X ? for cheatsheet.
1> (clamp 1 10 -1)
1
2> (clamp 1 10 15)
10
3> (clamp 1 10 5)
5
Added on August 13, 2015 by commit f2e197dcd31d737bf23816107343f67e2bf6dd8e
An Elixir convention I've seen is to put the thing you're operating on first, so that you can compose functions using the `|>` operator, which places the previous expression as the first argument of the function to the right.
Maybe something like this?
defmodule Compare do
def clamp(number, minimum, maximum) do
number
|> max(minimum)
|> min(maximum)
end
end
import Compare
clamp(5, 1, 10) # 5
clamp(1, 5, 10) # 5
clamp(10, 1, 5) # 5
some_number
|> clamp(min, max)
As a side note, I think the Elixir |> operator is a stroke of genius that other languages should take a look at. Making the pipe operator pass the previous result as the _first_ argument has the following benefits:
1.) It makes the most "important" argument of the function the first thing you read in function signatures
2.) If you need to add more arguments to a function signature later, they tend to be less important than the original args, so they tend to make sense at the end
3.) It creates a convention for all libraries to follow so they can leverage the pipe operator. It's really jarring when the thing you want to put in a pipeline isn't the first argument (looking at you, `Regex`[0], which puts the regular expression as the first arg and not the string)
> It makes the most "important" argument of the function the first thing you read in function signatures
Doesn't that make writing functions that can use partial application harder? E.g. if I was writing clamp, I would want the signature to be:
(defn clamp [min max n] ,,,)
Then I can do:
(map (partial clamp 1 11) [-14 2 5 8 11 15 18])
I know when I use Clojure's threading macros I use thread-last way more than any of the others. My next most common would be piping it into arbitrary locations, e.g.:
; pipe into an arbitrary spot (specified here as o)
(as-> (range 1 10) o
(map inc o)
(filter even? o)
(reduce + o))
defmodule Math do
def clamp(num, _min, max) when num > max, do: max
def clamp(num, min, _max) when num < min, do: min
def clamp(num, _min, _max), do: num
end
Following xxs' example of looking at NaN behavior: with this code, a bound (say the lower one) of NaN means that bound is disabled. Which may or may not be what you want.
No idea what's going on there. I know some of those variables are actually functions, but the whole thing is unreadable unless you have experience in Haskell, IMO.
Did a bunch of coding bootcamps just begin session or something? I don't understand the comments here treating it like it's so hard to write and verify that it needs a completely different, slower, less direct, cutesy implementation.
That still leaves the order of Math.min and Math.max undecided and will probably not help much if you get easily confused by the visuals of this code.
I never thought about it but of course there must be code tongue twisters (or more correctly, brain twisters).
Thinking about it, I would probably go with a less confusing implementation. Terse code is hard to read and the compiler is likely clever enough to choose the best implementation anyway.
> and the compiler is likely clever enough to choose the best implementation anyway.
Not in my experience. The compiler is likely to be able to do something decent to it, but it'll probably be different.
Consider the following snippets. For unsigned integers they're all equivalent (and return the max of x and m), but gcc with a wide range of flags can't recognize them as identical.
(x<m) * m + (x>=m) * x
(x<m) * (m-x) + x
x<m ? m : x
I do wonder why it then isn't able to do that for the second line. Possibly because the CPU might have different flags set after processing the subtraction?
But I'm not sure if your example is a good argument after I said I prefer less confusing code and you present an example which even confuses the compiler. ;-)
Your method name and the signature are misleading. First, it implies that it returns a boolean, not a number. Second, the argument names look like they have to be already sorted, which defeats the purpose.
It's common in computer graphics and user interface programming. You want to move your character around a 2D grid, but you don't want either of its x and y coordinates to ever move outside the bounds of the 2D grid. Or you want a web page with an article section that is 50% of the browser width, but never narrower than 300px and never wider than 800px.
I find the following easier to read: