celrod's comments

> I don't want my web browser or video player to be resized because I open a new program

I've been using niri (a tiling WM) recently. This is their very first design principle: https://github.com/YaLTeR/niri/wiki/Design-Principles Maybe other PaperWM-inspired WMs are similar. niri is the first I've used.

If your windows within a workspace are wider than your screen, you can scroll through them. You also have different workspaces like normal. I'll normally have 1 workspace with a bunch of terminals, and another for browsers and other apps (often another terminal I want to use at the same time as browsing, e.g. if I'm looking things up online).


Don't you often quickly look between files? If so, odds are you're using tiles within tmux, vim, emacs, vscode, or something similar.

I use kakoune, which has a client/server architecture. Each kak instance I open within a project connects to the same server, so it is natural for me to use my WM (niri) to tile my terminals, instead of having something like tmux or the editor do the tiling for me. I don't want to bother with more than one layer of WM, where separate layers don't mix.


How feasible would it be for something like gdb to be able to use a C++ interpreter (whether icpp, or even a souped up `constexpr` interpreter from the compiler) to help with "optimized out" functions?

gdb also doesn't handle overloaded operators well, e.g. `x[i]`.


GDB does have hooks for interpreters to be executed within it, but I haven't managed to make this work. https://sourceware.org/gdb/current/onlinedocs/gdb.html/JIT-I....


It does though? Just compiled a small program that creates a vector, and GDB is perfectly happy accessing it using this syntax. It will even print std::string’s correctly if you cast them to const char* by hand. (Linux x86-64, GDB 14.2.)
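For reference, a sketch of the kind of program I mean (not the exact one; the `int3` is just a convenient hard-coded breakpoint on x86-64):

  #include <string>
  #include <vector>
  
  int main() {
      std::vector<int> v{1, 2, 3};
      std::string s = "hello";
      __asm__("int3");  // stop here, then try e.g. `p v[1]` or `p s.c_str()`
      return v[0] + (int)s.size();
  }

Built with `-g -O0`; that's the setup where `p v[1]` works for me.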


I've defined a few pretty printers, but `operator[]` doesn't work for my user-defined types. Knowing it works for vectors, I'll try and experiment to see if there's something that'll make it work.

  (gdb) p unrolls_[0]
  Could not find operator[].
  (gdb) p unrolls_[(long)0]
  Could not find operator[].
  (gdb) p unrolls_.data_.mem[0]
  $2 = {
`unrolls_[i]` works within C++. This `operator[]` method isn't even templated (although the container type is); the index is hard-coded to be of type `ptrdiff_t`, which is `long` on my platform.

I'm on Linux, gdb 15.1.


> This `operator[]` method isn't even templated (although the container type is)

That might be it. If that operator isn’t actually ever emitted out of line, then GDB will (naturally) have nothing to call. If it helps, with the following program

  template<typename T>
  struct Foo {
      int operator[](long i) { return i * 3; }
  };
  
  Foo<bool> bar;
  template int Foo<bool>::operator[](long); // [*]
  
  int main(void) {
      Foo<int> foo;
      __asm__("int3");
      return foo[19];
  }
compiled at -g -O0 I can both `p foo[19]` and `p bar[19]`, but if I comment out the explicit instantiation marked [*], the latter no longer works. At -g -O2, the former does not work because `foo` no longer actually exists, but the latter does, provided the instantiation is left in.


Can confirm, this works for me in my actual examples, thanks!


> It will even print std::string’s correctly if you cast them to const char* by hand

What does that mean? I think `print str.c_str()` has worked for me in GDB before, but sounds like you did something different.


I was observing that `p (const char *)str` also worked in my experiment, but I’m far from a C++ expert and upon double-checking this seems to have been more of an accident than intended behaviour, because there is no operator const_pointer in basic_string that I can find. Definitely use `p str.c_str()`.


If your std::string was using a short string optimization, that would explain the “accident”.

Some implementations even put char[0] at the first byte in the optimized form.


That explanation doesn't work IMO, unless `str` is a std::string pointer, which is contrary to the syntax GP suggested with `str.c_str()`.

It doesn't seem possible in actual C++ that the cast from non-pointer to pointer would work at all (even if a small string happens to be inlined at offset 0.) Like GP, I looked for a conversion operator, and I don't think it's there. Maybe it is a feature of the gdb parser.


Good point, but if it’s a long string, 2/3 of the most common implementations would make the first word the c_str()-equivalent pointer:

https://devblogs.microsoft.com/oldnewthing/20240510-00/?p=10...


So it's actually printing *(const char **)&s?


The first pointer-sized chunk of the string structure is a pointer to the C-string representation. So the cast works as written.
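A concrete (and deliberately implementation-specific) sketch of what that cast amounts to, assuming libstdc++, where the data pointer happens to be the first member of std::string. Other standard libraries lay the object out differently, so this only mimics what the debugger is doing:

  #include <cassert>
  #include <string>
  
  int main() {
      std::string s = "long enough to spill out of any small-string buffer";
      // Read the first pointer-sized chunk of the object as a const char*,
      // i.e. what `p (const char *)str` effectively did in the debugger.
      // (Formally an aliasing violation; don't do this in real code.)
      const char *p = *reinterpret_cast<const char *const *>(&s);
      assert(p == s.c_str());
  }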


Well, no, because (const char *)str is nonsense, if str is an std::string.


Not to the debugger. If the first 8 bytes of the object referenced by str is a char* the debugger is perfectly capable of using it that way.


this "optimized out" thing is bullshit as hell


Skymont little cores have 4x 128-bit execution. They could quadruple-pump.

But it looks more like they're giving up on people writing code for wide vectors, instead settling for trying to make existing code faster.


Any suggestions for ECC?

Would you suggest going with an ASRock Rack motherboard, even for desktop use, like you used here? https://www.phoronix.com/review/amd-ryzen9-ddr5-ecc

I'm strongly tempted to get a Zen5 CPU, but am unsure of the motherboard.


I haven't yet tested ECC with any Zen 5 desktop CPU. But yes, in general with Zen 4 the ASRock Rack and Supermicro boards have worked out well. In time I will try out ECC on the Ryzen 9000 series.


Zen5 appears to officially support up to DDR5 5600, but unfortunately all of the ASRock Rack or Supermicro boards I looked at only supported DDR5 5200.

I may wait for new Zen5 boards, or maybe take a gamble on something like the Asus ProArt, where I saw comments online indicating that ECC is (unofficially?) supported.

Looking forward to Ryzen 9000 ECC benchmarks.


Or other ASUS mainboards. For now ASUS seems to be the only desktop mainboard manufacturer that officially mentions in the docs support of "ECC and Non-ECC, Un-buffered Memory".


Yes, I see now that while not advertised on seller's websites, Asus's product pages do indeed say that.


Signed integer overflow being undefined has these three consequences for me: 1. It makes my code slightly faster. 2. It makes my code slightly smaller. 3. It makes my code easier to check for correctness, and thus makes it easier to write correct code.

Win, win, win.

Signed integer overflow would be a bug in my code.

As I do not write my own implementations to correctly handle the case of signed integer overflow, the code I am writing will behave in nonsensical ways in the presence of signed integer overflow, regardless of whether or not it is defined. Unless I'm debugging my code or running CI, in which case ubsan is enabled, and the signed overflow instantly traps to point to the problem.

Switching to UB-on-overflow in one of my Julia packages (via `llvmcall`) removed like 5% of branches. I do not want those branches to come back, and I definitely don't want code duplication where I have two copies of that code, one with and one without. The binary code bloat of that package is excessive enough as is.
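To illustrate the kind of win (a generic sketch, not the Julia package above; whether a given compiler actually removes branches or masking depends on the target):

  // Signed index: the compiler may assume `i * 4` never overflows, so the
  // 64-bit address computation can be strength-reduced to a pointer walk.
  // (Assumes x has at least 4*n elements.)
  long sum_every_fourth(const int *x, int n) {
      long s = 0;
      for (int i = 0; i < n; ++i)
          s += x[i * 4];
      return s;
  }
  
  // Unsigned index: `i * 4` must wrap modulo 2^32, which can force extra
  // zero-extension/masking work into the address computation.
  long sum_every_fourth_u(const int *x, unsigned n) {
      long s = 0;
      for (unsigned i = 0; i < n; ++i)
          s += x[i * 4];
      return s;
  }

And `-fsanitize=signed-integer-overflow` (part of ubsan) is what gives the trap-at-the-overflow behavior during debugging and CI.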


Agreed. If anything, I'd like to have an unsigned type with undefined overflow so that I can get these benefits while also guaranteeing that the numbers are never negative where that doesn't make any sense.


That's what Zig did, and they solved the overflow problem by having separate addition and subtraction operators that guarantee the number wraps or saturates on overflow.


I don't think you'd even necessarily need to ignore. Roll it out in phases. You aren't going to have to deliver the final finished solution all at once.

Some elements are inevitably going to end up being de-prioritized, and pushed further into the future. Features that do end up having a lot of demand could remain a priority.

I don't think this is even a case of "ask for forgiveness, not permission" (assuming you do intend to actually work on w/e particular demands if they end up actually continuing to demand it), but a natural product of triage.


> Some elements are inevitably going to end up being de-prioritized, and pushed further into the future. Features that do end up having a lot of demand could remain a priority.

Why, that sounds positively... Agile. In the genuine original sense.


C++20 added `[[no_unique_address]]`, which lets a `std::is_empty` field alias another field, so long as there is only 1 field of that `is_empty` type. https://godbolt.org/z/soczz4c76 That is, example 0 shows 8 bytes, for an `int` plus an empty field. Example 1 shows two empty fields with the `int`, but only 4 bytes thanks to `[[no_unique_address]]`. Example 2 unfortunately is back up to 8 bytes because we have two empty fields of the same type...
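A minimal sketch of the first two cases (not the exact godbolt code; the sizes are what GCC/Clang on x86-64 typically produce, the standard merely permits them):

  struct Empty0 {};
  struct Empty1 {};
  
  struct Example0 {   // typically 8 bytes: the empty member still needs its
      int i;          // own address, and then padding for alignment
      Empty0 e;
  };
  
  struct Example1 {                     // typically 4 bytes: both empty
      int i;                            // members may share an address with
      [[no_unique_address]] Empty0 e0;  // `i`, since their types differ
      [[no_unique_address]] Empty1 e1;
  };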

`[[no_unique_address]]` is far from perfect, and inherited the same limitations that inheriting from an empty base class had (which was the trick you had to use prior to C++20). The "no more than 1 of the same type" limitation actually forced me to keep using CRTP instead of making use of "deducing this" after adopting c++23: a `static_assert` on object size failed, because an object grew larger once an inherited instance, plus an instance inherited by a field, no longer had different template types.

So, I agree that it is annoying and seems totally unnecessary, and has wasted my time; a heavy cost for a "feature" (empty objects having addresses) I have never wanted. But, I still make a lot of use of empty objects in C++ without increasing the size of any of my non-empty objects.

C++20 concepts are nice for writing generic code, but (from what I have seen, not experienced) Rust traits look nice, too.


It's probably mean of me to say "empty type" to C++ people because, of course, just as std::move doesn't move, std::is_empty doesn't detect empty types. It can't, because C++ doesn't have any.

You may need to sit down. An empty type has no values. Not one value, like the unit type which C++ does a poor job of as you explain, but no values. None at all.

Because it has no values we will never be called upon to store one, we can't call functions which take one as a parameter, operations whose result is an empty type must diverge (ie control flow escapes, we never get to use the value because there isn't one). Code paths which are predicated on the value of an empty type are dead and can be pruned. And so on.

Rust uses this all over the place. C++ can't express it.


Help me out here.

What is this empty type for? Could you provide an old man with a nice concrete example of this in action? I've used empty types in C++ to mark the end of recursive templates - which I used implement typelists before variadic templates were available.

But then you mention being unable to call functions which take an empty type as a parameter. At which point I cease to understand the purpose.


I don't know that I'll be able to convince you but I'll give a couple of examples.

What is the type of the expression "return x"? Rust says that's !, pronounced Never, an empty type. This expression never has a value; control flow diverges.

So this means we can use simple type arithmetic to decide that a branch which returns contributes nothing to the type of the expression - it has no possible value. This isn't a special case, it's just type arithmetic.

OK, let's introduce another. Rust has a suite of conversion traits: From, Into, TryFrom and TryInto. They're chained, so if I implement From<Goose> for Doodad, everybody gets the three other implied conversions. But the Try conversions are potentially fallible, hence the word Try. So they have an error type. Generic code handling the error type of a potentially failing conversion will thus be written, even if in some cases the conversion undertaken chains back to my From<Goose> code. But wait, that conversion can't fail! Sure enough, the chained TryFrom and TryInto produced will have the error type Infallible, which is an empty type.

So the compiler can trim all the error handling code, it depends upon this value which we know can't exist, therefore it never executes.


Got it.

Which of course is equivalent to the statement "I have begun the process of understanding, but do not yet know what I do not know". My old High School teacher used to complain that I claimed understanding long before I actually reached it.

Anyway, thank you, and that seems a clever concept. I can't help but think that it's solving a problem that the language itself created - though that is doubtless an artifact of my as-yet limited understanding.

So "From" has to return something that might be an error, in some way. Just so that the Try... variants can be generated. And generic callers have to write something to handle that error - though presumably concrete callers do not because of the empty type.


> So "From" has to return something that might be an error, in some way. Just so that the Try... variants can be generated

Not quite. From can't fail, but TryFrom for example could fail.

Let's try a couple of very concrete examples. From<u16> for i32 exists: turning any 16-bit unsigned integer into a 32-bit signed integer works easily. As a result of the "chain" I mentioned, Rust will also accept TryInto<i32> for u16. This also can't fail - and it's going to run the identical code - but TryInto has an associated Error type, and this must be filled out; it's filled out as Infallible. The compiler can see that Infallible is empty, so where somebody wrote error handling code for a generic TryInto<i32>, if the actual type was u16 that Error type will be Infallible, and therefore the code using it is dead.

Now, compare converting signed 16-bit integers to unsigned. This can clearly fail: -10 is a perfectly good signed 16-bit integer, but it's out of range for unsigned. So From<i16> for u16 does not exist. But TryInto<u16> for i16 does exist - and this one really does have an error type; the conversion can and does fail, with a "TryFromIntError" type apparently, which I expect has some diagnostics inside it.


Thanks for the clarification.


void is an empty type in C++. It's less useful than it could be, but it does exist.


void isn't a type. If you try to use it as a type you'll be told "incomplete type".

People who want void to be a type in C++ (proponents of "regular void") mostly want it to be a unit type. If they're really ambitious they want it to have zero size. Generally a few committee meetings will knock that out of them.


Multiple accumulators increases accuracy. See pairwise summation, for example.

SIMD sums are going to typically be much more accurate than a naive sum.
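A rough scalar sketch of the multiple-accumulator effect (this is what a SIMD sum does implicitly, one accumulator per lane; it is not full pairwise summation, but rounding error typically shrinks relative to a single running total):

  #include <cstddef>
  
  float sum_multi_acc(const float *x, std::size_t n) {
      float acc[4] = {0.0f, 0.0f, 0.0f, 0.0f};
      std::size_t i = 0;
      for (; i + 4 <= n; i += 4)           // four independent running sums
          for (std::size_t j = 0; j < 4; ++j)
              acc[j] += x[i + j];
      for (; i < n; ++i)                   // remainder
          acc[0] += x[i];
      return (acc[0] + acc[1]) + (acc[2] + acc[3]);  // tree-style combine
  }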


Not necessarily either. It's not particularly hard to create a vector where in-order addition is the most accurate way to sum its terms. All you need is a sequence where the next term is close to the sum of all prior ones.

There just isn't a one-size-fit-all solution to be had here.


> There just isn't a one-size-fit-all solution to be had here.

But there is: https://news.ycombinator.com/item?id=40867842


That's a one-size-fits-some solution. "Not more than 2x slower than native floats" is pretty slow for cases where you don't need the extra precision.

It might be the case that it is the best solution if you do need the extra precision. But that's just one case, not all.


C++23 added `allocate_at_least`: https://en.cppreference.com/w/cpp/memory/allocator_traits/al...

I'm not sure if any standard libraries have an implementation that takes advantage of the "at least" yet.
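A sketch of how a container's growth path could use it (the `grow` helper here is hypothetical, just to show the C++23 interface):

  #include <cstddef>
  #include <memory>
  #include <utility>
  
  // Ask the allocator for at least `n` objects' worth of storage and keep
  // whatever it actually handed back as the new capacity, instead of
  // silently discarding the slack.
  template <class T, class Alloc = std::allocator<T>>
  std::pair<T*, std::size_t> grow(Alloc alloc, std::size_t n) {
      auto r = std::allocator_traits<Alloc>::allocate_at_least(alloc, n);
      return {r.ptr, r.count};  // r.count >= n
  }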

