HardyLeung's comments

The USAMO winner list is here:

http://www.maa.org/sites/default/files/USAMO_Winners_2016_1....

Apparently the final team was not chosen strictly from the top 12 (Michael Kural and Ashwin Sah were Honorable Mentions).


There are two more tests used for team selection after the USAMO (during the training camp).


Could be that they couldn't attend or some other reason.


Lee Sedol winning, keeping his cool, and not making any mistakes. AlphaGo, on the other hand, went bonkers, especially towards the end, but it got into bad territory not because of silly mistakes but because of brilliant play by Lee Sedol.

Could it possibly be that both of the mistakes were bugs? Perhaps it suggested a nonsensical position such as (25.23, 13.15), and it was snapped to (19, 13) :D


I don't think they were bugs in the traditional sense. I think AlphaGo picked moves to maximize the probability of winning, and at some point the only way to win was for the opponent to make a suboptimal response. I remember reading somewhere that most of its training data is from amateur games. The model doesn't have a prior that AlphaGo is playing a professional who won't make a bad response. It probably would have resigned a lot earlier with that prior :)

Another thing to keep in mind is that AlphaGo has no "memory", so every turn it looks at the board fresh. This means that if the probabilities are very close, it could jump around a bit, whether due to numerical noise from floating-point calculations, model errors, or just tiny differences in probability, making its behavior appear erratic and quick to change "strategy".


> Perhaps it suggested a non-sensical position such as (25.23, 13.15), and it was snapped to (19, 13) :D

AlphaGo doesn't work like that.


This is interesting. I wrote Tagxedo (http://www.tagxedo.com), which is another kind of constrained packing problem, and there is a bit of similarity in the approaches. This is the first time I've heard of CNC packing and part nesting... may I ask what kind of software is out there and how much it sells for?


The one I compared with is mynesting, which has a pay-per-use model. They also make nestfab, which is $2k+ per seat, I believe.

There are lots of others at various price points, but none that are free, it seems.


Nice. One improvement would be to throw in filler letters with frequencies proportional to their likelihood of appearing in actual text. For example:

http://www.math.cornell.edu/~mec/2003-2004/cryptography/subs...
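A minimal sketch of that idea, assuming approximate English letter frequencies (the table and function name below are illustrative, not from the linked page):

```python
import random

# approximate English letter frequencies (percent)
FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}

def random_filler(n):
    """Return n filler letters, weighted by English letter frequency."""
    letters, weights = zip(*FREQ.items())
    return random.choices(letters, weights=weights, k=n)
```

Filler chosen this way makes the real words much harder to spot than uniform random letters, since rare letters like q and z no longer stand out.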

Though this is a simple problem, if you are picky there are actually other considerations: how well do the words intersect (do you prefer a sparse, tree-like structure, or maximal mesh-like intersection)? What is the optimal dimension of a puzzle given the input? How does the order of words (random vs. longest first) affect the quality of the puzzle? And so on. I have a half-done word search puzzle app (with a twist), so this piques my interest.

But I like this one; it's nice and clean.


I'd prefer the challenge of seeing how many substrings of the actual words you could embed in the word search, without having duplicates of the full strings.

Bit of misdirection for a harder challenge.


I tried out the zooming capability (nice), but it seems your stroke width does not change in response to the zoom level (at least not in lockstep)? So the redness of the lady's dress changes as you zoom. Isn't that a problem?


True. I'm on the fence here.

Keeping "real" thickness slows down rendering by a factor of 1.5 to 2, depending on the browser. Images look better, though. I'm not sure what users want at this stage of the project. Perhaps a toggle?


It seems to be the case.


Another feature request: if a tile is already illegal (too many committed blue tiles in its line of sight, or no way to commit enough blue to satisfy the requirement), flag it immediately. I think the biggest pain is the counting. Otherwise, a nice game!


That's what the hint button is for.

If you start adding automated processing like that, then you're only a couple of steps away from the game just solving itself without you.

The elegance of this implementation is that you are doing all the solving. Unless you specifically ask for help from the hint button.


It is a balance between playability and automation. Even Minesweeper does this (auto-deducing that a large region can be opened up). What I described is only a first-order "convenience" check, to avoid the pain of backtracking a long way because of a miscount.

Regarding the hint button: I don't want to use it, precisely because I don't want to "win" by repeatedly getting hints.


I really like the realtime feedback idea. Similarly, you could always show how many additional dots are needed (e.g. the starting counts are correct and count down as you add visible blue dots in its line of sight): 0 is blank, and negatives could be highlighted as issues. Then you don't have to count existing circles, and errors are clear in real time.
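A minimal sketch of that countdown label, with hypothetical names: `target` is the tile's printed number and `visible_blue` is the count of blue dots currently in its line of sight:

```python
def remaining(target, visible_blue):
    """Countdown label for a numbered tile: how many more blue dots
    it still needs in its line of sight.
    0 -> blank, negative -> flagged as an error."""
    left = target - visible_blue
    if left == 0:
        return ""          # requirement satisfied: show nothing
    if left < 0:
        return f"!{left}"  # over-count: highlight as an issue
    return str(left)       # dots still needed
```

The label updates on every move, so the player never has to recount existing circles.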


I see that this is very similar to Paper by 53 in several aspects... is that what you are trying to compete with? In that case, I wonder about the logic of an upfront price instead of Paper's strategy of giving you something to try out... at $4.99 a pop, it is much harder for someone to take the plunge.


I do take your point. In fact I'm in the process of writing a small update with In App Purchases. With this update, it'll be free.


Neat. I have a related project called Polygonian (http://www.polygonian.com) that does something similar; it runs in the browser and requires Chrome Native Client. Here's the output of a Lincoln portrait using a similar point count:

http://imgur.com/lpKVILV


Seems like it's using gradients? For some reason it looks less... triangular that way.


Yes, a linear gradient within each triangle. It has a non-gradient mode, but it's not turned on. The difference is minor, though: gradient = more "silky" and less triangular; no gradient = a bit flatter.


This is indeed an interesting problem. I don't know what the runtime breakdown is, but if most of the time is spent in the sorting routine, the problem can be solved much faster by not resorting to a generic stable_sort().

I summarize the author's algorithm as follows: take an array of R^3 vectors, and to each vector apply the [+,-,-], [-,+,-], and [-,-,+] transforms. So N vectors become 3N vectors, which is followed by a duplicate-removal step facilitated by the sorting.

The improvement is that the duplicate removal can be accomplished without sorting.

Let's say you have an array of R^3 vectors, already sorted. Now create 3 arrays, each by "shifting" the original array in one of the 3 directions. Call the original A, and the derived arrays B0, B1, B2. Note that B0, B1, and B2 are each sorted, since every element within an array is adjusted in the same direction. Then all you need is a 3-way merge, which takes linear time.

I bet by doing this, the runtime could be significantly reduced. My guess is that it runs in the range of 10-30 seconds (instead of 335 seconds).
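A sketch of the merge step in Python, assuming (as argued above) that each transform preserves sorted order when mapped over the input; the toy transforms in the example are simple shifts standing in for the real [+,-,-]-style transforms:

```python
import heapq

def merge_dedup(a, transforms):
    """a: a sorted list of comparable items.
    transforms: functions that each preserve sort order when mapped
    over `a`.  Returns the merged union of all transformed streams
    without duplicates, in linear time per stream instead of an
    O(n log n) global sort."""
    streams = [map(t, a) for t in transforms]
    out = []
    for v in heapq.merge(*streams):   # k-way merge of sorted streams
        if not out or v != out[-1]:   # adjacent duplicates collapse
            out.append(v)
    return out

# toy example with order-preserving shifts:
print(merge_dedup([1, 2, 3], [lambda x: x + 1, lambda x: x + 2]))
# -> [2, 3, 4, 5]
```

Since duplicates in a merged sorted stream are always adjacent, a single pass removes them; no hashing or global sort is needed.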

