Singularity (now called Apptainer - https://apptainer.org/) is a container system generally used in HPC environments. It has some nice features, like packaging your container as a single .sif file, automatically mounting your home directory, using the same user inside and outside the container, and the ability to import images from the Docker registry.
The approach is less about isolation and more about “packaging up” an entire environment in an easy-to-use way.
Useful to explain non-determinism to students. I saw a similar idea before at https://github.com/aeporreca/nondeterminism, which uses fork() to (inefficiently) explore all possible guesses concurrently.
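For anyone curious, the fork() trick can be sketched in a few lines of Python. This is my own minimal reconstruction of the idea, not the linked library's actual API, and subset-sum is just a convenient NP-style example:

```python
import os

def guess():
    """Nondeterministically "guess" a boolean by forking the process."""
    pid = os.fork()
    if pid == 0:
        return True                      # child universe explores True
    _, status = os.waitpid(pid, 0)       # parent waits on the True branch
    if os.WEXITSTATUS(status) == 0:
        os._exit(0)                      # child accepted: propagate it up
    return False                         # child rejected: explore False

def run(computation, *args):
    """Return True iff some branch of the computation accepts."""
    pid = os.fork()
    if pid == 0:
        computation(*args)               # must end in os._exit
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) == 0

def subset_sum(nums, target):
    # guess, for each number, whether it belongs to the subset
    total = sum(n for n in nums if guess())
    os._exit(0 if total == target else 1)

print(run(subset_sum, [3, 5, 8, 13], 16))  # True (3 + 13 = 16)
print(run(subset_sum, [3, 5, 8], 2))       # False (no subset sums to 2)
```

Each guess() forks, one process per branch of the computation tree, and acceptance bubbles up through the waiting parents. Inefficient, as noted, but it makes the "try all guesses" intuition very concrete.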
It doesn't help that CS students will potentially end up hearing the term "nondeterminism" used to mean different things in different contexts. The linked repo uses it the way they'll probably encounter it when learning about Turing machines, but in less formal contexts it also gets used a lot to describe things like "Heisenbugs", where running something more than once doesn't necessarily produce the same result.
For me, the idea of nondeterminism was built around a computation tree whose branches correspond to possible states of the computation.
In the context of deterministic vs. nondeterministic finite automata, a DFA has a single linear computation path from the root to an accept/reject state, while an NFA branches whenever multiple transitions (including epsilon transitions) are possible, continuing each potential computation (the "guesses") in so-called "parallel universes" until one branch hits an accept state.
This definition follows Introduction to the Theory of Computation, 3rd edition, by Sipser (would recommend).
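The parallel-universe picture can also be simulated directly by tracking the set of states all branches occupy at once (the same trick behind NFA-to-DFA conversion). A toy sketch with a made-up NFA that accepts binary strings ending in "01"; epsilon transitions are omitted for brevity:

```python
# Transition relation of a tiny NFA over {0, 1} that accepts strings
# ending in "01". States: q0 (start), q1, q2 (accept).
delta = {
    ("q0", "0"): {"q0", "q1"},   # on "0", nondeterministically stay or advance
    ("q0", "1"): {"q0"},
    ("q1", "1"): {"q2"},
}

def accepts(s):
    # Each element of `current` is one "parallel universe".
    current = {"q0"}
    for ch in s:
        current = {t for st in current for t in delta.get((st, ch), set())}
    return "q2" in current       # accept iff some universe reached q2

print(accepts("1101"))  # True
print(accepts("10"))    # False
```

Branches that get stuck simply drop out of the set; the string is accepted as soon as any surviving universe ends in q2.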
I wanted to boot my home TrueNAS box off USB flash, but I also know how flaky USB flash can be, so I use a pair of drives in a ZFS mirror.
I would not recommend this setup; any real drive would be better. I just hated wasting drive bays on the non-critical boot/system partition. In my defense, despite burning through many cheap USB drives I have not lost the boot yet; I do, however, keep a new spare drive taped to the unit.
I had grand plans to make a giant USB ZFS volume from tradeshow thumb drives in bulk.
It lasted a couple of evenings while I bumped into apparently every Linux USB driver/chipset issue possible trying to drive a couple hundred drives via cheapo AliExpress USB hubs.
It was fun, but even more useless than I originally thought it would be. It was amusing to get >1 GB/s for a minute or two across 256 crappy thumb drives, though!
From 2009, what I think was a famous video in which some kids in the UK put together a 6-terabyte RAID with 24 256-gigabyte Samsung drives: https://youtu.be/26enkCzkJHQ
NEAT and neuroevolution in general are interesting approaches. I also suggest checking out techniques like DENSER [1], which can be used to evolve deep networks (by applying the evolutionary part to the network structure rather than the weights).
Genetic Programming (GP), however, has not evolved into NEAT (which itself is not very recent, having been published in 2002); rather, neuroevolution has become one of the topics that are part of evolutionary computation (EC). For example, GECCO [2], one of the largest yearly conferences on evolutionary computation, was held just last month with both neuroevolution and GP tracks. It is true, however, that the success of neural techniques has had an effect on the community; some effects are the ongoing discussion of the role of EC and, for example, more space given to hybrid works (see, for instance, the joint track on evolutionary machine learning [3] at the evostar event).
Related to the original post, places where some recent research on GP can be found are the proceedings of GECCO (GP track), EuroGP (part of evostar), PPSN (Parallel Problem Solving from Nature), and IEEE CEC (IEEE Congress on Evolutionary Computation), and journals like Genetic Programming and Evolvable Machines (GPEM), Swarm and Evolutionary Computation (SWEVO), and IEEE Transactions on Evolutionary Computation (IEEE TEVC). The list is not exhaustive, but those are some well-known venues.
For a less "daunting" starting point, some recent techniques are being added to the SRBench benchmark suite [4], with links to both the code and the paper describing the technique.
[1] Assunção, F., Lourenço, N., Machado, P., & Ribeiro, B. (2019, March). Fast DENSER: Efficient deep neuroevolution. In European Conference on Genetic Programming (pp. 197-212). Cham: Springer International Publishing.
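As a toy illustration of the simplest flavor of neuroevolution, evolving the weights of a fixed-topology network (so not what NEAT or DENSER do, which also evolve the structure), here is a sketch that fits XOR with a keep-the-elites-and-mutate loop; the topology and all hyperparameters are made up:

```python
import math
import random

random.seed(0)

def forward(w, x1, x2):
    # fixed topology: 2 inputs -> 2 tanh hidden units -> 1 tanh output
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # negative squared error over the four XOR cases (higher is better)
    return -sum((forward(w, a, b) - y) ** 2 for (a, b), y in XOR)

# evolve: keep the 10 best genomes, refill with Gaussian-mutated copies
pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [[g + random.gauss(0, 0.2) for g in random.choice(parents)]
                     for _ in range(40)]

best = max(pop, key=fitness)
print(round(fitness(best), 3))  # close to 0 means XOR has been learned
```

No gradients anywhere: selection plus mutation does all the work, which is the core idea the structure-evolving methods build on.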
That is exactly what I would also like to do. The HN comments are usually as interesting as the link in the post. For example, people point out other resources, or the author of the linked post comments with more details.
I still have no clear solution to this. I currently use org-mode, but 1) it does not work well on mobile and 2) even on the desktop I should find the time to automate it more (with automatic archiving, better capture templates, etc.). While 2) can be solved, 1) is still a problem.
Does it also have a way to deal with sets of "related" links? For example, I might want to save and archive a webpage and its HN discussion together, or a collection of multiple blog posts as a single entity. Currently I use org-mode but, even with org-capture, it is not frictionless, and the mobile integration is lacking.
In Italy it is one of the three metrics used. The others are the number of published papers and the h-index.
This is the table of thresholds for associate professorship ("II Fascia") and full professorship ("I Fascia"):
http://abilitazione.miur.it/public/documenti/2018/Tabelle_Va...
"Numero articoli" is "number of papers", "Numero citazioni" is "number of citations", and "Indice H" is "h-index". The thresholds are different for each research area (e.g., "INFORMATICA" is "computer science").
The topic is actually a little bit more complex, since looking at the metrics is only one step of the process.
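For reference, the h-index mentioned above is the largest h such that the author has at least h papers with at least h citations each; a quick sketch of the computation:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank          # the paper at this rank still clears the bar
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))   # 3
```

Note how the single 25-citation paper in the second list barely moves the index, which is exactly the kind of behavior these threshold tables rely on.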
My Raspberry Pi 3 works as a CUPS print server connected to a laser printer, as a Pi-hole DNS server to filter ads, and as an SSH entry point (with dynamic DNS). I would use a Pi Zero, but Ghostscript is not too happy when printing large documents on a single-core processor with 512 MB of RAM. I still have to find the time to set up a backup server on it (and decide which software to use).