
Is there any good article/paper that describes how it actually works or is implemented, not just in high-level, hand-wavy terms?



Yeah, Hacker Factor's multi-post critiques are where I first saw it analyzed. For reference, they run the popular fotoforensics.com image analysis site.

They also have a scathing critique (e.g. [1]) of the Adobe-led C2PA digital provenance signing effort, having themselves been part of various groups seeking solutions to the provenance problem.

[1] https://www.hackerfactor.com/blog/index.php?/archives/1013-C...


thanks!


There tends to be more information under the search term "perceptual hashing"
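
If it helps to make the idea concrete, here's a minimal "difference hash" (dHash) in Python, a generic toy example (assumes Pillow), not the actual algorithm discussed here, which isn't public:

  # Toy "difference hash" (dHash), one of the simplest perceptual hashes.
  # Generic illustration only, assumes Pillow is installed.
  from PIL import Image

  def dhash(path, hash_size=8):
      # Downscale to (hash_size+1) x hash_size grayscale so fine detail is discarded.
      img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
      pixels = list(img.getdata())
      bits = []
      for row in range(hash_size):
          for col in range(hash_size):
              left = pixels[row * (hash_size + 1) + col]
              right = pixels[row * (hash_size + 1) + col + 1]
              bits.append(1 if left > right else 0)
      # Similar-looking images produce hashes with a small Hamming distance.
      return sum(bit << i for i, bit in enumerate(bits))

Real systems are fancier (DCT-based, more robust to crops and rotations), but the "hash of appearance, compared by distance" idea is the same.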


The secrecy of the inner tech is intentional.


great project!


Check out sioyek, it’s great and can open epubs like normal pdfs:

https://sioyek.info/ https://github.com/ahrm/sioyek


Sioyek uses the MuPDF engine, which supports EPUB: https://mupdf.com/


I stumbled over it but didn't install it because the website doesn't mention the ebook functionality.

“Sioyek is a PDF viewer with a focus on technical books and research papers”


Nightmarish with a tablet.


This was a great read, thanks a lot! On a side note, does anyone have a good guess what tool/software they used to create the visualisations for the matrix multiplications or the memory outline?


excalidraw <3


Summarised by https://xkcd.com/2494


"[...] modern neural network (NN) architectures have complex designs with many components [...]"

I find the Transformer architecture actually very simple compared to previous models like LSTMs or other recurrent models. You could argue that their vision counterparts like ViT are conceptually maybe even simpler than ConvNets?

Also, can someone explain why they are so keen to remove the skip connections? At least when it comes to coding, nothing is simpler than adding a skip connection and computationally the effect should be marginal?
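
Something like this (a generic PyTorch-style sketch, not from the paper) is all it takes:

  # Minimal residual block; the skip connection is just the "x +" in forward().
  # Generic PyTorch sketch, not the paper's code.
  import torch.nn as nn

  class ResidualBlock(nn.Module):
      def __init__(self, dim):
          super().__init__()
          self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

      def forward(self, x):
          return x + self.body(x)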


Skip connections increase the live range of one intermediate result across the whole part of the network that is skipped: the tensor at the start of a skip connection must be kept in memory while unrelated computation happens, which increases pressure on the memory hierarchy (either the L2 or scratchpad memory).

This is especially true for inference with vision transformers, for example, where it decreases the batch size you can use before hitting the L2 capacity wall.
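
In pseudocode terms (a generic sketch, not any particular framework's memory planner):

  # Why a skip connection extends the live range of x (illustrative pseudocode).
  def block_with_skip(x, layers):
      y = x
      for layer in layers:
          y = layer(y)   # x must stay resident (L2/scratchpad) all this time...
      return x + y       # ...because it is only consumed here, at the final add

  def block_without_skip(x, layers):
      y = x
      for layer in layers:
          y = layer(y)   # x's buffer can be reclaimed after the first layer
      return y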


Okay, I see that for inference. But for training it shouldn't matter because I need to hold on to all my activations for my backwards pass anyways? But yeah, fair point!


also removing skip connections leads to a rougher loss landscape, hence it should be harder to find the optimal weights.


Yes, there are very good theoretical reasons for skip connections. If your initial matrix M is noise centered at 0, then 1+M is a noisy identity operation, while 0+M is a noisy deletion... It's better to do nothing if you don't know what to do, and avoid destroying information.

I appreciate the sibling comment's perspective that memory pressure is a problem, but that can be mitigated by using fewer/longer skip connections across blocks of layers.
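
A toy numerical illustration of the identity-vs-deletion point (my own example, assumes NumPy):

  # At init, weights are roughly zero-centered noise M.
  # With a skip connection the block computes ~ (I + M)x, without it ~ Mx.
  import numpy as np

  rng = np.random.default_rng(0)
  d = 512
  x = rng.standard_normal(d)
  M = 0.001 * rng.standard_normal((d, d))  # small zero-centered noise

  with_skip = x + M @ x      # noisy identity: x passes through almost unchanged
  without_skip = M @ x       # noisy deletion: the input is essentially destroyed

  print(np.linalg.norm(with_skip - x) / np.linalg.norm(x))     # ~0.02
  print(np.linalg.norm(without_skip - x) / np.linalg.norm(x))  # ~1.0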


hey, off topic, but can you explain (or link a post that explains) what the benefits of the alias -> function definition are over just defining the function directly? Thanks!


I am puzzled as well, why not define the function and call it wiki?


I'm just used to keeping all my aliases together so they're easy to find with the alias command, and I like having the ones that take arguments alongside the others.

Aside from that, no benefit really that I can think of.

So yes, to be clear to anyone else you can just put:

  wiki(){ w3m -F "https://en.wikipedia.org/w/index.php?search=${1}&title=Special:Search&ns0=1"; }
in your .bashrc - and if you're me you just forget how you defined it and you have to cat it every once in a while :)

Oh, and you might be entertained by this silly alias adapted from an IOCC entry...

  $ cstdin
  printf("Hello World\n");
  Hello World

  alias cstdin='gcc -pedantic -ansi -Wall -o ~/.stdin ~/.stdin.c -lm && ~/.stdin'


  $ cat ~/.stdin.c
  #include <stdio.h>
  <snip many headers>
  int main(int argc, char **argv)
  {
  #include </dev/tty>
  return 0;
  }
I'm sure it would also make way more sense as a dedicated script. I have a C++ one in there too.


ah I see, cool thanks!


Very cool! However, I often feel like the process of generating the question/answer for an Anki card is an important part of the learning process, because it forces you to think deeply about the material and reflect on it. So I think there is a trade-off involved.


Absolutely you lose a lot of effectiveness when you automate card creation. But it's still better than thinking, "Oh, I should make a card about this so I don't forget this important thing..." and never doing it.

For me the biggest risk is being tempted to make way too many cards (because it's so fast and easy now!), ending up with way too many reviews, and declaring Anki bankruptcy and uninstalling the app after a few months. I may have done this more than once...


imo if you don't make the card then either the word isn't important, or it comes up again and you finally do make the card. It's not a big deal.

I'm about 10 years on my Anki deck and have reached the same conclusion as you about keeping deck size down. A single card has a huge time investment if you add up the reviews. Even more if it's a bad card and you fail it a lot. As the words you learn get more and more niche, it's important to weigh up whether a card is worth making or keeping. I actively delete cards that make me feel 'meh' when I see them, or that I fail a lot, so I don't lose motivation.


These days I have a policy of just suspending a card once its interval hits six months, so the deck size has a max cap and can eventually go to zero if I stop adding for a long enough period. Long enough to bootstrap niche words and hopefully maintain them through reading.


Robert Sapolsky's Human Behavioural Biology [0]. It's like a (good) Netflix series you can binge watch!

[0] - https://www.youtube.com/watch?v=NNnIGh9g6fA&list=PL150326949...

