If you're interested in what life in ancient Greece or Rome was like, this book is for you. The author does a very good job of not going into too much detail and boring the reader, while still telling you all the important events and changes that took place.
It is a very multi-dimensional issue. While all these languages belong to the same family, each takes some time to learn and has its own specific strengths. It is probably very helpful to think about what matters most to you, and pick what matches your needs best for the first language.
Here are some axes of distinction:
1. supported programming styles
2. support for concurrency
3. performance of generated code and parallelism
4. floating-point performance
5. level of standardization
6. closeness to the system and capability to call into C functions, or functions with C ABI
7. suitability for scripting and stand-alone programs
8. GUI programming
9. Comprehensiveness and beginner-friendliness of documentation
10. REPL Programming
11. Libraries
12. Licenses
Here is what I know about the languages you named:
(I also mention ABCL = Armed Bear Common Lisp a few times, just to illustrate that you can also run Common Lisp on the Java platform.)
1. supported paradigms - Clojure is very opinionated in supporting and demanding a purely functional style. It uses purely functional data structures. Racket and Scheme support a functional style well, but allow for seamless imperative code. Common Lisp is agnostic: one can program in a purely functional way, and there are libraries with purely functional data structures, but it requires much more discipline.
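To illustrate the point about Common Lisp being agnostic, here is a minimal sketch (plain standard CL, no libraries) contrasting a non-destructive, functional-style operation with its destructive counterpart; the language lets you choose either, and nothing enforces one over the other:

```lisp
;; Functional style: REMOVE-IF returns a fresh list, the input is untouched.
(defparameter *xs* (list 1 2 3 4 5))
(remove-if #'evenp *xs*)   ; => (1 3 5), *xs* is still (1 2 3 4 5)

;; Imperative style: DELETE-IF may destructively reuse the input's cons
;; cells; afterwards *XS* must be considered invalid unless reassigned.
(setf *xs* (delete-if #'evenp *xs*))   ; => (1 3 5)
```

This is exactly where the "discipline" comes in: the functional and destructive variants sit side by side in the standard library.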
2. Clojure has the best support for concurrency and server-like tasks; it is made for that. Racket also supports green threads, apart from its places-based parallelism.
3. Racket, Chez, Common Lisp and so on support OS-level threads and parallelism. Chez and Common Lisp stand out as they generate the most performant code - when it comes to raw computing power, single-threaded SBCL code can be faster than multi-threaded Clojure code. SBCL, for example, also allows you to add compiler hints which generate unsafe code with better optimizations (for example, indicating that integer values will only be in a specific range).
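As a sketch of the compiler-hint point: in SBCL you can declare value types and trade safety for speed, and the compiler will generate specialized code (the exact speedup is of course program-dependent; the function below is just an illustrative example):

```lisp
(defun sum-range (a b)
  ;; Promise the compiler these are small non-negative integers and ask
  ;; for aggressive optimization; with SAFETY 0 the runtime checks that
  ;; would normally guard these assumptions are removed.
  (declare (type (integer 0 1000000) a b)
           (optimize (speed 3) (safety 0)))
  (let ((acc 0))
    (declare (type fixnum acc))
    (loop for i from a to b do (incf acc i))
    acc))
```

With declarations like these, SBCL can use raw machine integers instead of generic arithmetic; compiling with SPEED 3 also makes the compiler report the spots where it could not optimize.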
4. SBCL has by far the best floating-point performance, I think Chez comes after that, and I'd expect Racket to improve further here. Racket has also been reviewed very favourably for scientific applications (https://khinsen.wordpress.com/2014/05/10/exploring-racket/ - Konrad Hinsen, the author, was an early contributor to Numerical Python).
5. Of all these, Common Lisp is the most standardized. Racket and Clojure, especially, are defined by their implementations.
6. Clojure and ABCL allow you to call into Java. In turn, all of Common Lisp, Guile, Racket, and Chez make it easy to call into C. There is a performance difference, I think, between most Schemes and SBCL: calling into a C function has some extra cost because the stack is handled differently (I think it is because of continuations). I also tried to call into Rust functions from Racket, and that works very nicely.
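For the C side, here is a minimal sketch using CFFI, a commonly used portable FFI layer for Common Lisp (assumes CFFI is loaded, e.g. via Quicklisp; the `:unsigned-long` return type stands in for `size_t` on typical 64-bit platforms):

```lisp
;; Bind the C library's strlen as a Lisp function named C-STRLEN.
(cffi:defcfun ("strlen" c-strlen) :unsigned-long
  (s :string))

(c-strlen "hello")   ; => 5
```

CFFI handles the string conversion automatically here; for more complex signatures you describe structs and pointers the same declarative way.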
7. Clojure's startup time is simply too slow for scripting. It is also somewhat hampered by running on the JVM. There is babashka, which is interesting but not yet mature. Conversely, Common Lisp and Racket are well suited to scripting. It is also possible to compile SBCL and Racket programs into a single executable. I think SBCL supports this case best; to run Racket programs, an extra runtime library or a standard Racket installation is needed. Guile is also well suited to scripting and is closer to the OS than some other Lisps.
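For SBCL, the single-executable case looks roughly like this (`sb-ext:save-lisp-and-die` is SBCL-specific; the binary embeds the whole Lisp image, so it is large but self-contained):

```lisp
(defun main ()
  (format t "Hello from a standalone SBCL binary~%"))

;; Dump the current image as an executable named "hello" whose entry
;; point is MAIN; the running Lisp process exits after dumping.
(sb-ext:save-lisp-and-die "hello" :executable t :toplevel #'main)
```

Running `./hello` afterwards starts straight in `main`, with none of the JVM-style startup cost.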
8. Racket has very nice support for GUI programming, with a platform-independent functional API. Clojure and ABCL can also call into Swing and JavaFX code written in Java. I have not tried the rest, but it looks relatively painful and brittle in comparison.
9. Racket has very comprehensive, high-quality documentation that is good for beginners. Clojure is also comprehensively documented, but might assume a bit more experience. Common Lisp is, as an open system, more eclectic, and this might lead beginners especially to underestimate how mature and good it is. It is very under-hyped compared to Clojure. But there is the Common Lisp Cookbook on the web, which is really good. I can also warmly recommend the books "Practical Common Lisp" by Peter Seibel and "Common Lisp Recipes" by (corrected!) Edmund Weitz. If you want true understanding, and are going to write robust, industry-grade software, they are really worth every cent.
10. REPL Programming: Lisps and Schemes such as Racket have a subtle difference which affects the style of programming. In Lisp, everything is dynamic, and you can re-load and evaluate parts of the program while it is running. This makes for a great development environment. In contrast, Racket, for example, has a clear distinction between compile time and run time, which gives certain safety and correctness guarantees, but requires that the program be re-loaded more often during development.
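A small sketch of what this feels like on the Lisp side: you can redefine a function in a running image, and every caller immediately sees the new definition, without restarting or re-loading anything (the functions here are just toy examples):

```lisp
(defun price (x) (* x 100))
(defun total (xs) (reduce #'+ (mapcar #'price xs)))

(total '(1 2 3))        ; => 600

;; Redefine PRICE at the REPL while the program is "live"...
(defun price (x) (* x 90))

;; ...and TOTAL picks up the change without being touched or recompiled.
(total '(1 2 3))        ; => 540
```

Calls go through the function's name at run time, which is exactly what makes this incremental, image-based style of development work.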
11. Libraries: Racket has a distinct "batteries included" feeling, which makes it nice for beginners. Clojure uses Leiningen, which runs on top of Maven to retrieve libraries automatically; it works very well. Racket has a similar system. Common Lisp is more eclectic again; what is used today is Quicklisp and ASDF (see the Common Lisp Cookbook). There is also reasonable support in OS packages, e.g. in Debian, and in addition GNU Guix is a very interesting solution for packaging Lisp libraries and for polyglot projects.
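In day-to-day use the Quicklisp/ASDF combination is just a couple of forms (assuming Quicklisp is installed as described in the Cookbook; `alexandria` is merely an example of a commonly used utility library):

```lisp
;; Download (if not already cached) and load a library plus all of its
;; dependencies in one step.
(ql:quickload "alexandria")

;; Underneath, ASDF is what defines and loads the "systems"; it can
;; also be called directly for systems already on disk.
(asdf:load-system "alexandria")
```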
12. Licenses: Clojure uses the Eclipse Public License, which is not GPL-compatible and might be an obstacle if you want to publish copyleft libraries and code (I'm not sure about the exact limits here, but the problem is that in a Lisp system there is less of a clear boundary between system or language code and your own code, so it is probably advisable to use a language with a compatible license if you want to publish copyleft code). Racket is Apache 2, which is GPL-compatible. Guile is, as part of the GNU system, GNU LGPL.
In summary, I think Racket is great for beginners, although you could start with any of them. Clojure is fantastic for servers and for learning about functional programming. Guile and Racket are great for scripting. Common Lisp, and especially SBCL, requires a bit more learning effort at the beginning, but it is a really mature and highly performant system that gives you a lot of freedom.
Does anyone have any advice on how to make the most out of books like these? I'm trying to read more textbooks in subjects that I really phoned in during college this year. My current strategy is to make some note cards and take notes (which is kind of tedious); I'm wondering if anyone has advice about their own workflows.
Clojure suffers some because of the reality of today's game dev ecosystem. Just looking at the source of some past attempts to wrap things like LWJGL or OpenGL (hint: none of these tend to look like Clojure anymore) should be enough to answer some of the questions about why this is. There are many other reasons as well that don't come to bite people until they are in the middle of a project. Beyond those reasons, even just dealing with the JVM or the CLR for a game presents significant challenges to a developer.
Unity itself has had to work around challenges too, and the abstractions and development comforts it provides are not free. Even experienced Unity developers who use C# still have to be mindful of how they write their C# (i.e., don't write the prettiest code even if you can, because it will punish you). Of course this is true of any language, and of other platforms such as mobile game dev. The point is that it's especially limiting to have additional layers of abstraction largely beyond your control. Eventually it reaches a critical mass where you're writing this weird meta-language to do what you want, because writing a game with the ecosystem around you forces that instead of letting you write idiomatic code in the language you picked.
The further you get away from the metal for a game, no matter how simple, the more problems you will face. It's nice to use languages like Clojure, Python, Ruby, JavaScript and so on for games, but for serious work they often get in your way. For instance, a common problem the average developer encounters is the game loop vs. frame rate: how do I get enough done during a tick to not grind the game to a halt? Garbage collection, de/allocations, and so much more become your enemy, and you start to feel like you're fighting some kind of magical force trying to slow down your game or make it less predictable, rather than being productive or even optimizing it in sane ways. And yes, predictability is vital to writing a good game, because the last thing a player wants is for your game to do stupid things at inopportune moments, like in the middle of a jump - never mind other concerns like debugging, multi-player, or platform requirements.
Of course there are workarounds for many problems you may face, but as game complexity grows, things tend to scale out of control for most people. Many of these problems cut so deeply into your time, or force so many other sacrifices, that you start to feel like you're largely missing the benefits of working in these alternative abstractions. At some point you just end up breaking all the rules of your language/tools/libraries to get the game to the level you want. Worse, you're working on many problems that are quite far from actually finishing your game. To be clear again: for simple projects, much of what I've mentioned is not a problem.
Getting back to Clojure, I feel it really suffers from the aforementioned issues for non-toy games. This isn't an indictment of Clojure, just a matter of picking the right tools. Immutability, atoms, refs, agents, CSP, sequences, transducers, recursion, and so much more seem like they would allow making a game quicker, easier, and with fewer headaches. But what ends up happening to most people I've seen who try to use these kinds of tools, whether in Clojure, Lisp, Haskell, Elixir, or anything else, is what I mentioned earlier: at a certain threshold of requirements, it all falls apart. At this point, you spend all your time removing all the goodness the language and tools provide. You start writing your own libraries, often down to numbers, matrices, etc., because you have no other choice if you want things to run in a sane, predictable way and to integrate with anything like OpenGL, input libraries, hardware, SDL, and so on. You throw out immutability in huge parts of your game, and you realize that refs, agents, channels, sequences, and more are just making life worse, not better. Pretty soon the entire language is stripped down into something almost unrecognizable, left with only a few core nice things. You then descend into the next layer of hell and start porting things into Java and calling them from there. Even in Java this can happen to a large degree. Add in more unpredictable stuff and abstractions like Unity, Unity plug-ins/add-ons, multi-platform requirements, talking to other libraries you want to use, and so on. The author of the article mentioned simplicity as a selling point, but in non-trivial contexts you will almost certainly throw simplicity out the door. It starts small and snowballs as I described.
For someone building a text adventure or other simple game, you probably don't even need Unity anyway. If you're building a smaller indie game or want to get something done quickly, just use Unity and C#; you'll get it done faster and benefit more from the ecosystem. If you can't or won't learn C#, you shouldn't be programming or making games. I know that sounds cruel, but at some point we all need to acknowledge our skills. A game developer should be able to learn any language and be productive in it quickly. The average game dev may not touch the entire game, but more sophisticated games often use several languages, especially if you count things like shaders and scripting engines as distinct.
If you're just learning or new to game dev, and/or really want to learn Unity, just use it as intended; otherwise you're adding more layers of abstraction and complication that actually make it harder to learn anything and harder to get things done. It may often seem like you've figured something out and using your favorite tool will get things done quicker, but most of the time you'll hit the ugly thresholds I described when you try to combine it with something more sophisticated like Unity. Use things as they are intended. If you want to make a game in Clojure, great - just keep it simple, write your own minimal engine optimized for Clojure or hope someone makes one someday, or go the ClojureScript route to again make something simple.
In summary, Clojure is indeed an awesome language, and you can write a game with it; I just wonder what there is to gain by using it with Unity. In general, I wouldn't recommend layering too many abstractions when building games. If you feel otherwise, I'll refer you to the graveyard of projects that have tried to take X and make it work with Y - it is a huge, sad place. That said, I'd love to write a non-trivial game in Clojure or another functional language one day, somehow.
Can I hijack this to ask how best to 'learn Unix' (or Linux specifically - I confess I'm not actually sure where the line is; my only non-Linux Unix experience is with Mac)?
I'd like to know more about namespaces, file organisation - what 'should' be where - processes and syscalls, etc.
(To be honest man pages just don't suit me for deliberate learning, I treat them the same as --help outputs, whereas for an answer to this question, for example, I'd find a textbook more helpful.)
Maybe I'm in the minority but I prefer keeping my GNU/Linux and Windows installations separate, with each OS on its own drive. My Linux setup is secure, free of proprietary software and under my full control. When I run tcpdump, I'm met with a clean log where every packet is one I recognize. I get to use my favorite window manager (awesomewm) and I don't have to worry about forced updates. My Windows install is quite a different beast - automatic updates, mostly proprietary software and no major customizations other than performance tweaks and what the OS allows. I use it for gaming and media, and it works great. Boot times are very short with SSDs so restarting is not a problem. No compatibility issues, no fussing about with drivers and no need for translation layers like the Windows Subsystem or WINE; just two independent OSs that never let me down.
That said, no hate toward this project. Arch Linux is probably my favorite distro (although I'm on Xubuntu at the moment).
Use Practical Common Lisp [0] as the book to start learning Lisp; Common Lisp is the more production-oriented Lisp.
Download LispWorks Personal Edition [1] if you want to start working with an IDE setup, and use it when working through PCL. LW also has a GUI library, mobile runtimes, and other libraries. Use Quicklisp [2], CL's package manager, to install various third-party libraries.
After working through PCL, you will have a good CL foundation. You can expand your macro (pun unintended) knowledge by working through [3].
Other good resources: PAIP and Land of Lisp. Note that PAIP is more a specific application of CL to solving classical AI problems; despite that, it is still counted among the best programming books out there. Hope this helps, and `Welcome to the Dark Side`.
PS: Except for `Land of Lisp`, everything cited in the resources is free.
Article author here. So, you're right that while AWS does continue to lower prices, they're still not the cheapest game in town. Frankly, they're not even necessarily the most performant game in town.
What they are really competing on is breadth and depth of service. The article goes into a lot of those services, but, as one example, if you launch an instance in EC2 you can allow it to access secured buckets in S3 without any need to store keys/passwords on the instance itself thanks to IAM roles.
Another example is services like AWS Lambda, which is a hosted way to run a function without any need to manage servers.
The list goes on and on...direct VPN connectivity, Hosted Active Directory, CloudHSM. While I'm biased, my perception is that AWS is pretty far ahead of the pack.
Out of curiosity, why go with AWS when Linode, DigitalOcean, etc. appear to be so much more cost-effective? Is the simplicity of spinning up AWS instances really great enough to counterbalance what appears to be a significantly greater cost? Is it the flexibility of different AWS services?
"Nothing in this world can take the place of Persistence.
Talent will not; nothing is more common than unsuccessful men with talent.
Genius will not; unrewarded genius is almost a proverb.
Education will not; the world is full of educated derelicts.
Persistence and determination alone are omnipotent.
The slogan 'Press On' has solved and always will solve the problems of the human race." ~ C. Coolidge
James Cameron ("Avatar", "Titanic", etc.) used to argue that high frame rate was more important than higher resolution.
If you're not in the first few rows of the theater, he once pointed out, you can't tell if it's 4K anyway. Everyone in the theater benefits from high frame rate. This may be less of an issue now that more people are watching on high-resolution screens at short range.
Cameron likes long pans over beautifully detailed backgrounds. Those will produce annoying strobing at 24FPS if the pan crosses a full frame width in less than about 7 seconds. Slowing down to that rate makes a scene drag.
Now, Cameron wants to go to 4K resolution and 120FPS.[1] Cameron can probably handle that well; he's produced most of the 3D films that don't suck. He's going to give us a really nice visual tour of the Avatar world. For other films, that may not help. "Billy Lynn's Long Halftime Walk" was recorded in 3D, 4K resolution and 120FPS. Reviews were terrible, because it's 1) far too much resolution for close-ups, and 2) too much realism for war scenes. Close-ups are a problem - do you really want to see people's faces at a level of detail useful only to a dermatologist? It also means prop and costume quality has to improve.
The other issue with all this resolution is that it's incompatible with the trend towards shorter shot lengths. There are action films with an average shot length below 1 second. For music videos, that's considered slow; many of those are around 600ms per shot.[2] They're just trying to leave an impression, not show details.
For anyone who's tried to write a real-world RSS feed reader, this format does little to solve the big problems newsfeeds have:
* Badly formed XML? Check. There might be badly formed JSON, but I tend to think it'll be a lot less likely.
* Need to continually poll servers for updates? Miss. Without additions to enable pubsub, or dynamic queries, clients are forced to use HTTP headers to check last updates, then do a delta on the entire feed if there is new or updated content. Also, if you missed 10 updates, and the feed only contains the last 5 items, then you lose information. This is the nature of a document-centric feed meant to be served as a static file. But it's 2017 now, and it's incredibly rare that a feed isn't created dynamically. A new feed spec should incorporate that reality.
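To make the polling point concrete, this is roughly what clients end up doing today, sketched here with the Drakma HTTP client for Common Lisp (assumes Drakma is loaded; the URL and ETag value are hypothetical - the ETag would be whatever the server returned on the previous fetch):

```lisp
(multiple-value-bind (body status)
    (drakma:http-request "https://example.com/feed.json"
                         :additional-headers
                         '(("If-None-Match" . "\"previous-etag\"")))
  (if (= status 304)
      ;; Server says nothing has changed since our last poll.
      :not-modified
      ;; Otherwise we still have to diff the whole feed ourselves to
      ;; find out which items are actually new or updated.
      body))
```

Conditional GET saves bandwidth, but it does nothing about the lost-items problem: if the feed window has rolled past entries you missed, no header can get them back.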
* Complete understanding of modern content types besides blog posts? Miss. The last time I went through a huge list of feeds for testing, I found there were over 50 commonly used namespaces and over 300 unique fields in use. RSS is used for everything from search results to Twitter posts to podcasts... It's hard to describe all the different forms of data it can contain. The reason for this is that the original RSS spec was so minimal (there are only about 5 required fields) that everything else has just been bolted on. JSONFeed makes this same mistake.
* An understanding that separate but equal isn't equal? Miss. The thing that http://activitystrea.ms got right was the realization that copying content into a feed just ends up diluting the original content formatting, so instead it contains only metadata and points to the original source URL rather than trying to contain the content. If JSONFeed wanted to really create a successor to RSS, it would spec out how to send formatting information along with the data. It's not impossible - look at what Google did with AMP: they specified a subset of formatting options so that each article can still contain a unique design, but limited the options to increase efficiency and limit bugs/chaos.
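For comparison, JSON Feed does have one hook in this direction: an item may carry an `external_url` pointing at the original source. A minimal item of that shape might look like this (the URLs and text are made up for illustration); what it still lacks is any spec'd way to carry the source's formatting:

```json
{
  "id": "https://example.com/posts/1",
  "url": "https://example.com/posts/1",
  "external_url": "https://original-site.example/article",
  "title": "Example item",
  "content_text": "Short summary; the real content lives at external_url."
}
```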
This stuff is just off the top of my head. If you're going to make a new feed format in 2017, I'm sorry but copying what came before it and throwing it into JSON just isn't enough.