It’s more like I have to actually think about where the program options are. Or that in Firefox nowadays I have to snipe a tiny spot on the title bar to move the window, instead of clicking some button that doesn’t look like one.
When I first used macOS I was surprised by the consistency of accessing the settings of every program, even third-party ones, with the same shortcut Cmd+,
In my experience people have to first figure out what the hell numpy is and how to get it (venv, conda, pip, uv, uvx, …) because python arrays are shit, and so people fix that wart with an external C library. Then they notice that some other dependency requires a previous python version, but their python is installed globally and other dependencies were installed for that. These are uniquely python-specific problems. Lisp doesn’t have those problems
Did they try using a search engine? But more to the point, if they don't understand what it is, how did they find out it exists?
> how to get it (venv, conda, pip, uv, uvx, …)
uvx is a command from the same program as uv; venv is not a way to obtain packages; and the choice here isn't a real stumbling block.
> because python arrays are shit
I can't say I've seen many people complain about the standard library `array` module; indeed it doesn't seem like many people are aware it exists in the first place.
If you're talking about lists then they serve a completely different purpose. But your use of profanity suggests to me that you don't have any actual concrete criticism here.
> Then they notice that some other dependency requires a previous python version
Where did this other dependency come from in the first place? How did they get here from a starting point of dissatisfaction with the Python standard library?
> but their python is installed globally and other dependencies were installed for that.
... and that's where actual environment management comes in, yes. Sometimes you have to do that. But this has nothing to do with teaching Python. You have necessarily learned quite a bit by the time this is a real concern, and if you were taught properly then you can self-study everything else you need.
> These are uniquely python-specific problems.
No other languages ever require environment management?
That’s not his opinion, that’s the standard technique in systems programming. It’s why there’s software out there that does in fact never crash and shows consistent performance.
If you really care about performance, then you should consider using technologies other than js and python, instead of asking hardware vendors to run their implementations faster.
auto is a historical artifact from porting code from the B language to C, when everything was implicitly int but int did not exist yet. It had absolutely no use afterwards, which is why C++ was free to repurpose it. C23 followed suit because auto is very useful in combination with typeof() in macros, which is a far cry from SFINAE terrorism in C++.
I asked about this in their chat 2 years ago, and the CEO responded directly. If I recall correctly, his response was that you are allowed to use Timescale on managed servers, but you're not supposed to become a Timescale hoster for others.
That may mean that RDS could provide Timescale as an available extension without promoting it directly. However, that is a sufficiently gray legal area that I doubt Amazon wants to wade into it (instead of just making a competing product).
The only way I think one could safely thread the needle here would be for RDS to allow you to bring your own extensions, as then Amazon is definitely not hosting it for you. However, that would compromise the security model and responsibility division that RDS provides.
The worry I have with code like that is that instead of letting the program crash, it often just swallows exceptions. This can lead to a lot of silent errors that one becomes aware of way too late, for example after data has been corrupted for hours or even years.
I don't think using structs to implement vtables in C is abuse, but in this case it's rather pointless. The idiom exists to get dynamic dispatch (e.g. a plugin system), an abstraction boundary (e.g. the same interface for file I/O and memory I/O in unit tests), or ABI-stable vtables in libraries and C++ wrappers.
Instead of `json_parse(&parser_state)` your code calls `parser_state.parse(&parser_state)` in main. Instead of `json_eof(parser_state)` your code calls `parser_state->eof(parser_state)` in the implementation. This is more to type, more irregular (switching from . to ->), and has worse performance.
This interface is also not flexible enough to implement a parser for other file formats such as XML.
So it doesn't make things easier, and you're not actually making use of dynamic dispatch, so I don't see the point in using this technique.
In general I cannot recommend this article for learning C. I'm trying to be lenient about the things the author flags explicitly, although even those hand-wave away too much for a prototype or a short article. For example, it is a bit of a head-scratcher for me to hear that JSON is difficult to parse, since ease of parsing is why JSON won in the first place. The linked article about JSON being a minefield is about underdefined semantics, not parser complexity.
The biggest mistake, sadly hand-waved away as "over-engineering", is the large number of mallocs and frees and the need to track stack vs. heap allocations. This style of C reminds me of what my CS professors taught us before they presented Java's GC and C++ RAII as our saviors. C has its issues, but this is far from professional-grade C code; it is what C looks like when written by programmers coming from higher-level languages with less control over memory allocation (or STL-using C++ programmers, where RAII mixes allocation and initialization).

A tiny bump allocator would simplify all the following code enormously. The freeing function, for example, is very complex and can be avoided completely. Whether memory is managed by the stack or the heap no longer needs to be tracked, and it becomes possible to write code that never allocates at all. Consider this article for comparison: https://www.rfleury.com/p/untangling-lifetimes-the-arena-all...
The json struct mixes two responsibilities: providing data and parsing it. If we assume, as the article does, that all data is available from the start, then there's a lot of complexity in the parser that could have been avoided: the parser is written as if it doesn't know where EOF is, and it doesn't handle that error case correctly (especially in a non-DEBUG build, where ASSERT would be disabled).
Returning EOF in-band in cur is bad practice in C too. I would avoid this whole function by providing higher-level parsing helpers like peek and expect, which provide or check for certain tokens (this is for non-optimized, easy-to-write parsers). The check for the null/false/true atoms should have been a code smell that this could be implemented more simply.
In modern C, null-terminated strings are mostly avoided; string slices are used instead (a begin pointer and a size, or begin and end pointers). There are variables in the code called "slice", but they're C strings. I'd suggest the author rewrite the code with slices and compare just how much more readable and efficient it becomes.
The whole json_value struct can also be avoided by using X-macros to define the structs. That's how one implements a visitor pattern (with serializer and deserializer functions) without language support for reflection.