
Yeah, I submitted this project to HN once, about two years ago. Since then quite a few new features have been added and some improvements made. So it's time to expand the user base a bit :)

The cicada shell still has some issues, but overall it's quite usable. I hope it can be useful for people who are looking for simplicity and speed in their daily use of the shell.


It's fine to repost after that much time has gone by. This is in the FAQ: https://news.ycombinator.com/newsfaq.html.


But the link back to the earlier post is also nice to have, thanks.


> This is very memory intensive.

Only for those not familiar with awk.

It makes a lot of sense once you understand how awk works (as the article explains).


That doesn't make any sense. Whether it is memory intensive depends on awk, not on how familiar the person is with it.

So is it memory intensive or not?


The example AWK script will build an array of every unique line of text in the file.

If the file is large, and mostly unique, then assume that a substantial portion of the file will be loaded into memory.

If this is larger than the amount of RAM, portions of the active array will be paged out to swap, and the drive will thrash as each newly read line forces a complete rescan of the array.

This is very handy for files that fit in available RAM (and zram may help greatly), but it does not scale.
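
For reference, the kind of dedup one-liner being discussed is presumably something like the following (my guess at the script in question, with input.txt as a placeholder file name, not a quote from the article):

    # Print only the first occurrence of each line.
    # seen[$0]++ is 0 (false) the first time a line appears, so the
    # pattern !seen[$0]++ matches exactly once per unique line; as a
    # side effect, every unique line is stored as a key in `seen`,
    # which is why memory grows with the number of distinct lines.
    awk '!seen[$0]++' input.txt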


I don't know how awk (or this particular implementation) works, but it could be done such that comparing lines is only necessary when there is a hash collision. Also, finding all prior lines having a given hash need not require a complete rescan of the set of prior lines - e.g. for each hash, keep a list of the offsets of each corresponding prior line. Furthermore, if that 'list' is an array sorted by the lines' text, then whenever you find the current line is unique, you also know where in the array to insert its offset to keep that array sorted - or use a trie or suffix tree.


Sure, you only need to compare when there's a hash collision, but you still need to keep all the lines in memory for later comparison.


Sure (though they could be in a compressed form, such as a suffix tree), but that wasn't the issue I was addressing.


AWK was the first "scripting" language to implement associative arrays, a feature its authors say they took from SNOBOL4.

Since then, Perl and PHP have also implemented associative arrays. All three can loop over the string indices of such an array and get back the original values, which a hash function alone cannot do.
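
For example, in awk (a small sketch of my own; the array name and keys are made up for illustration):

    awk 'BEGIN {
        count["apple"] = 3; count["pear"] = 5
        # for (k in count) iterates over the string indices,
        # so both the keys and their stored values are recoverable.
        for (k in count) print k, count[k]
    }'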


I think they're talking about your machine's memory, not human memory.


The man page of (n)awk [0][1] is surprisingly short and readable.

[0] `man awk` on mac

[1] online version https://www.mankier.com/1/nawk

[2] gawk's man page works great as a reference https://www.mankier.com/1/gawk


You need xonsh - http://xon.sh/ :)


You're right, I consider cicada an "old generation" shell. I intend to keep it simple (for speed, etc.). As the README says, it won't introduce functions or other complex stuff, but I can still add features for my own needs.

For a modern shell, please check out xonsh: http://xon.sh/ - powered by Python - it's super cool!


Yes, I haven't provided pre-built binaries yet. Maybe I should...


Check out snapcraft - it can build Rust binaries!


Seems like an implementation of the book Code: https://www.amazon.com/dp/B00JDMPOK2/ref=dp-kindle-redirect?...


GIMP gets stuck on Mac at times. Hope this version is better. Thanks for sharing!

