Hacker News | ryao's comments

How? OLED is still the gold standard, while MiniLED is a rebrand of full-array local dimming LCDs with a somewhat higher zone count than past models had. MicroLED is the true replacement for both. I do not see how MiniLED is on track to outpace anything.


I'm just referring to smaller and smaller LEDs; I don't really care about the marketing terms. Whatever you want to call sufficiently small LEDs, how they are marketed isn't relevant to me. Per-pixel LEDs are the goal.


Micro-LED monitors are going for 5 figures right now; that's definitely not a consumer or even enthusiast market by 2028.


Apple has an architectural license that lets them build their own ARM cores:

https://www.electronicsweekly.com/news/business/finance/arm-...

It is very unlikely Apple uses anything from ARM’s core designs, since that would require paying an additional license fee and Apple was able to design superior cores using its architectural license.


Yep, Apple was a significant early investor in ARM. https://appleinsider.com/articles/23/09/05/apple-arm-have-be...


That could be because CrowdStrike is not inside the XNU kernel anymore:

https://www.crowdstrike.com/en-us/blog/crowdstrike-supports-...

They happily implement a userland version on macOS, but then claimed that being in the kernel is absolutely necessary on Windows after they disabled all Windows machines using it.


Chrome has been very conservative about enabling hardware acceleration features on Linux. Look under chrome://gpu to see a list. It is possible to force them via command-line flags. That said, this is only part of the story.

There are different kinds of transistors that can be used when making chips: slow but efficient ones, and fast but leaky ones. Getting an efficient design is a balancing act where you limit the fast transistors to the most performance-critical areas. AMD historically has used these high-performance leaky transistors liberally, which enabled it to reach some of the highest clock frequencies in the industry. Apple, on the other hand, designed for power efficiency first, so its use of such transistors was far more conservative. Rather than using faster transistors, Apple restricted itself to slower ones but used more of them, resulting in wider core designs that have higher IPC and matched the performance of some of the best AMD designs while using less power.

AMD recently adopted some of Apple's restraint when designing the Zen 5c variant of its architecture, but it is still a modification of a design meant to use leaky transistors extensively for high clock speeds:

https://www.tomshardware.com/pc-components/cpus/amd-dishes-m...

The resulting clock speeds of the M4 and the Ryzen AI 340 are surprisingly similar, with the M4 at 4.4GHz and the Ryzen AI 340 at 4.8GHz. That said, the same chip is used in the Ryzen AI 350 that reaches 5.0GHz.

There is also the memory used. Apple uses LPDDR5X on the M4, which runs at lower voltages and trades some latency for a big savings in power. It is also soldered on or close to the CPU/SoC, which reduces the power needed to transmit data to and from the CPU. AMD uses either LPDDR5X or DDR5. I have not kept track of the exact difference in power usage between DDR versions and their LP variants, but I expect the LP variants to use half the power or less. Memory in many machines can draw 5W or more just at idle, so cutting memory power usage can make a big impact.

Additionally, x86 has a decode penalty compared to other architectures. It is often stated that this is negligible, but those statements began during the P4 era, when a single core used ~100W and a ~1W power draw for the decoder really was negligible. Fast forward to today: x86 is more complex than ever, people want cores that use 1W or less, and the decode penalty is more relevant. ARM, with fixed-length instructions and a fraction of the instruction count, uses less power to decode its instructions, since its decoder is simpler. To those who feel compelled to reply with the mantra that this is negligible: please reread what I wrote about it being negligible when cores use 100W each and about the instruction set being more complex now. Say the instruction decoder uses 250mW for x86 and 50mW for ARM. That 200mW difference is not negligible when you want sub-1W core power usage; it is at least 20% of the power available to the core. It does become negligible when your cores each draw 10W, as in AMD's desktops.

Apple also has taken the design choice of designing its own NAND flash controller and integrating it into its SoC, which provides further power savings by eliminating some of the power overhead associated with an external NAND flash controller. Being integrated into the SoC means that there is no need to waste power on enabling the signals to travel very far, which gives energy savings, versus more standard designs that assume a long distance over a PCB needs to be supported.

Finally, Apple implemented an innovation for timer coalescing in Mavericks that made a fairly big impact:

https://www.imore.com/mavericks-preview-timer-coalescing

On Linux, coalescing is achieved by adding a default 50 microsecond slack to traditional Unix timers. This can be changed, but I have never seen anyone actually do that:

https://man7.org/linux/man-pages/man2/pr_set_timerslack.2con...

That was done to retroactively support coalescing in UNIX/Linux APIs that did not support it (which was all of them). However, Apple made its own event-handling API, Grand Central Dispatch, which exposes coalescing in a very obvious way via the leeway parameter while leaving the UNIX/BSD APIs untouched, and it is now the preferred way of doing event handling on macOS:

https://developer.apple.com/documentation/dispatch/1385606-d...

Thus, a developer of a background service on macOS that can tolerate long delays can easily set the leeway to multiple seconds, which essentially guarantees the timer will be coalesced with some other timer. A developer of a similar service on Linux could do the same, but probably will not, since the timer slack is something the developer needs to go out of his way to modify, rather than something in his face like the leeway parameter in Apple's API. I did check how this works on Windows: it supports a similar per-timer delay via SetCoalescableTimer(), but the developer would need to opt in by using it in place of SetTimer(), and it is not clear there is much incentive to do so.

To circle back to Chrome: it uses libevent, which uses the BSD kqueue on macOS. As far as I know, kqueue does not take advantage of timer coalescing on macOS, so the Mavericks changes would not benefit Chrome very much; the improvements that do benefit Chrome are elsewhere. However, I thought the timer coalescing work was worth mentioning, given that it applies to many other things on macOS.


I am under the impression that George Marsaglia's work showed that there are no good values for this class of PRNGs. That is why he devised so many other classes of PRNGs.


Quoting the start of the paper:

> While MCGs and LCGs have some known defects, they can be used in combination with other pseudorandom number generators (PRNGs) or passed through some output function that might lessen such defects. Due to their speed and simplicity, as well as a substantial accrued body of mathematical analysis, they have been for a long time the PRNGs of choice in programming languages.

EDIT: And going further, they call out Marsaglia's work in particular, it seems.


In particular Marsaglia's KISS generator combines LCGs with other generators. https://en.m.wikipedia.org/wiki/KISS_(algorithm)


You don’t need a speed test website to see this problem. Just run a ping while doing a big download that saturates your connection, and bufferbloat will appear unless there is some active queue management to keep the ping packets from waiting in the queue behind the download packets. This happens any time there is a fast-to-slow transition in the internet and the slow link cannot keep up. To prevent packet loss, packets are buffered, which works well for short spikes, but prolonged activity results in a noticeable backlog, and if buffers are allowed to be sufficiently big, you can get arbitrarily long delays, which are visible in ping times.

The worst that I have ever seen was about 30 seconds when visiting a foreign country where bufferbloat was occurring at peering links. The bufferbloat in peering links is likely visible from western countries if you ping residential IPs in developing countries and monitor the ping times over days. Some parts of the day will have very high ping times while others will not. The high ping times will be the buffer bloat.

In most western countries, the bufferbloat typically occurs at people’s home internet connections. As in all cases of bufferbloat, the solution is to be willing to drop packets when the connection is saturated. If you limit the bandwidth to just below what the connection can handle, you can do active queue management to solve the problem.

That said, I suggest you stop posting replies. Your crusade against the idea of buffer bloat makes you look bad to anyone with enough networking knowledge to understand what bufferbloat is. I also strongly suspect I wrote an explanation that you will take zero time to understand and rather than take my advice, you will post another reply to continue your crusade. :/


You are 100% correct.

It is not yet a "solved" problem, but 10-15 years of work have started to make a dent and have produced better tools to both observe and act on the problem.

This is seen everywhere, from the inclusion of CAKE ( https://man7.org/linux/man-pages/man8/tc-cake.8.html ) in some CPE / home routers to the use of fq_codel ( https://man7.org/linux/man-pages/man8/tc-fq_codel.8.html ) in routers along the way.

Other ISPs have to go even farther, because "content" might be 80-120ms away, and the ability to tune certain parameters more or less aggressively can have a large impact on overall customer Quality of Experience. If there are any LEO hops along the way, problems with TCP and delayed signaling as a byproduct can also make throughput tank while latency spikes.

DPDK and VPP have contributed to a lot of new networking devices to help observe and act on traffic.

Every time you go from a big pipe to a small pipe (higher data rate to lower data rate) connection, you will see this issue at varying levels.


Do you have links to information on what DPDK and VPP are doing in the area of bufferbloat? I have not kept up with them since I cannot use them in my day to day life, but I would love to update myself on the subject.

By the way, when I wrote in another comment that bufferbloat was solved, I meant it in the same way that IPv4 exhaustion is solved by IPv6. We have the ability to deploy solutions that largely fix things, but whether we do is another matter. You are right to say that the past 10-15 years have started to make a dent in the problem. I had not meant to suggest otherwise.


Take a look at any of the QoE type vendors out there right now.

https://www.bequant.com/


Thanks for the reply and for confirming what I had already said earlier in regards to detecting telltale signs of bufferbloat. In case you weren't aware, a controlled experiment exhibiting bufferbloat doesn't translate to users being materially affected.

> The worst that I have ever seen was about 30 seconds when visiting a foreign country where bufferbloat was occurring at peering links. The bufferbloat in peering links is likely visible from western countries if you ping residential IPs in developing countries and monitor the ping times over days. Some parts of the day will have very high ping times while others will not. The high ping times will be the buffer bloat.

Out of curiosity, did you have full observability of these peering links, or is this a hypothesis? I could think of a few scenarios where alternative explanations could explain what you're seeing.

> In most western countries, the bufferbloat typically occurs at people’s home internet connections.

Says who? How is this measured? Do we have actual numbers on people experiencing real bufferbloat issues that are affecting their service?

> That said, I suggest you stop posting replies. Your crusade against the idea of buffer bloat makes you look bad to anyone with enough networking knowledge to understand what bufferbloat is. I also strongly suspect I wrote an explanation that you will take zero time to understand and rather than take my advice, you will post another reply out of ignorance. :/

Look, I will cordially suggest a more tenable approach: consider disengaging from this thread, your vacuous and vapid post hasn't really brought anything to the table.

Edit: Seems I can't reply to the child comment, so I'll just say, you should've used your own advice and not reply. There's nothing of substance and you're still continuing with your daft misinterpretation of my take. I'll leave it at that.


Overanalysis for the sake of denying the existence of whatever you want is cliche. It does not matter how complete the information on a subject is, since you will just post more pointless questions, whose relevance is specious, for the sake of claiming there are non-existent issues in understanding. The last time I saw this used involved a very loquacious guy who denied Darwin’s theory of evolution. It can also be used to claim the world is flat.

I was being generous by advising you to stop posting, since the more you post asinine things, the worse you look. In the past, I have taken the liberty to do amateur psychoanalysis of people who post bizarre things online based on a psychology class I took in college. If I keep responding, it will only be to get you to post more so that I can work out what is wrong with you for my own curiosity. I am probably not the only one thinking this.


> I was being generous by advising you to stop posting, since the more you post asinine things, the worse you look. In the past, I have taken the liberty to do amateur psychoanalysis of people who post bizarre things online based on a psychology class I took in college. If I keep responding, it will only be to get you to post more so that I can work out what is wrong with you for my own curiosity. I am probably not the only one thinking this.

Look, let's call this what it is: gatekeeping. Furthermore, you deflect and avoid answering a real question. I don't think you actually understood the crux of what I'm saying and instead resorted to ad hominems and gatekeeping, but seeing as it went over your head, I will pose the question: does bufferbloat have more than a marginal effect on the Internet experience of end users in real-world conditions (not in a controlled experiment), and does it affect a significant population as of today in the 2020s, as opposed to circa 2010? I'm saying no to both. A good way to gauge whether it is still relevant is to look for publications in networking conferences and journals, or even discussions on the *NOG lists, and really it's just not there. I know there's obsession over CoDel etc., and I used to follow the late Dave Taht's evangelising about the issue, but put simply the numbers don't add up - anyways, a simpler solution would simply be to prioritise ICMP and UDP flows over TCP. Anyways, this is not your imagined crusade against bufferbloat, it's just a pragmatic assessment. I'll leave it at that; rather than deflect and attack, consider applying some emotional intelligence.


The problem of buffer bloat is one of many issues that affect internet users. When I visited China and pings to my VPN in the US jumped from 200ms to 30 seconds depending on the time of day, bufferbloat was severely affecting me. That could only be described as bufferbloat, since the packets were suffering from store and forward overhead to an excessive degree and my pings were able to measure it across times of day.

Historically and likely still in the present day (but not in my household as we use AQM now), whenever one person in a household does a large download, internet latencies shoot up for everyone in the household, which is also bufferbloat. Having to wait hundreds of ms per round trip brings us back to the 56k dialup days and the performance impact on interactive traffic is horrific. It is enough to make VoIP unusable. As others have told you, there can be other issues at the same time, but bufferbloat makes the issues worse. I cannot speak for others on the extent to which they are afflicted by buffer bloat, but adopting AQM had a night and day difference in performance of the internet connection in my house, since I often do big data transfers that previously would slow down basic web browsing for everyone in my house, myself included.

As for your conjecture that extant problems are visible in recent journal publications: journals have a selection bias. The idea that a problem’s existence is indicated by the degree to which people publish papers on it is fallacious, since papers need to not just provide something new, but also be interesting to those running the journal (i.e. make them think the papers would elevate the status of their journal and increase readership, provided it is not a junk journal that will publish literally anything). On top of that, the work needs to be funded. Bufferbloat, which is largely considered a solved problem and which predominantly affects the less affluent these days, is not something that will get much attention in journals since nobody in academia seeks funding for something that they do not think they can improve or publish.

Finally, I did not use any ad hominem remarks toward you, as my remarks had focused entirely on what you wrote. I did write that any further replies would likely be done to get you to keep talking so I can play my old game of “figure out what is wrong with someone posting bizarre things on the internet”. About 30% of the population is mentally ill and thus when someone is posting bizarre things online, it is often the result of mental illness. Figuring out which mental illness is often the only reason responding to bizarre posts is worthwhile (as it is both an intellectual challenge and a public service). This contradicts your remarks suggesting that there is no point to my replies, to use my words rather than yours. It is not an ad hominem remark to say that I am likely to do this analysis. Posting the results of the analysis would be, but it would be grounded in fact and would likely be done to suggest professional help for X condition, if my amateur analysis identifies a condition that could benefit from professional help. Honestly, I think the world would be a better place if more people who studied psychology (even 1 class like I did) played armchair psychologist when others persist in a pattern of bizarre remarks and refer those who need professional help to trained professionals.


> When I visited China and pings to my VPN in the US jumped from 200ms to 30 seconds depending on the time of day, bufferbloat was severely affecting me. That could only be described as bufferbloat, since the packets were suffering from store and forward overhead to an excessive degree and my pings were able to measure it across times of day.

I think you suffer from tunnel vision here, particularly if you ascribe the issue to your ISP, which would have magnitudes more capacity than subscriber links, even if oversubscribed. For bufferbloat to be an issue in that regard, they'd have to be a choke point, in which case there are actual serious problems at that point. I'd expect that, being China, there's a lot more going on anyway with the GFW and poor routing.

> Bufferbloat, which is largely considered a solved problem and which predominantly affects the less affluent these days, is not something that will get much attention in journals since nobody in academia seeks funding for something that they do not think they can improve or publish.

I feel this is the point you missed when having a conniption; I didn't say bufferbloat didn't exist, I said it was overhyped. I would like you to reflect on these two views and see how they differ significantly, as it clearly went over your head on your crusade to crucify me.

> About 30% of the population is mentally ill and thus when someone is posting bizarre things online, it is often the result of mental illness. Figuring out which mental illness is often the only reason responding to bizarre posts is worthwhile

Sure, happy to say I suffer mental illness. I have been morbid less than 0.5% of the time in the last 15 years; otherwise it is in remission. What's actually bizarre is thinking there is one type of mental illness that makes people somehow unhinged or unable to carry a debate because you simply do not like what they say. Your one psychology class, which you use as a crutch to spout nonsense, shows you're clearly out of your depth here, and you should seriously stop; it's also abundantly clear you use it as a guise to gatekeep and condescend. If you can't debate or argue in good faith, I would put forth the advice you keep trying to palm off: do not continue posting, disengage.

But if I didn't sway you with the above, stop and at least read the HN guidelines; this isn't like the other social media platforms you're used to.


I do not think mental illness is the cause of people posting wrong remarks, but when they post remarks that strike me as particularly bizarre, as yours have here, I think consideration of mental illness itself is worthwhile, productive and interesting. I have been very kind from the start, when I informed you that you were making yourself look bad. Being curious about what causes you to persist in such behavior is also a form of kindness, especially when such curiosity leads to a recommendation of professional help, although those who need help are not always willing to get it.

I will point out that you were not interested in debating in good faith, as your sole purpose is to make the point that people should treat buffer bloat as a theoretical issue that does not actually affect people, with the explanation for every incident of buffer bloat always being something else. Everything you wrote has been consistent with that. You even applied a pattern of overanalysis to the discussion that can be used to claim the contrary to anything. Your apparent need to deny the existence of a well known issue is bizarre. You would be much happier right now if you had dropped the subject.

By the way, peering links between the greater China area and the rest of the world are notoriously bad due to limited capacity. China in particular has three tier 1 networks that all refuse to upgrade peering links between each other in a timely manner, and the peering links between them and the rest of the world are often just as bad as the peering links inside of China.

In order to get a decent connection there in 2016, I had to forcibly do my own routing through VMs in data centers in Shanghai, Tokyo and others to control the links used, after much trial and error, yet there were still periods where the links were horrendous and, surprisingly, the issue was not always exclusive to China according to my testing between VMs in various data centers. I was visiting family for a month, yet needed a decent connection to work remotely, so I spent an entire week non-stop studying the connectivity and experimenting with ways of improving my VPN connectivity.

I also had proof that the GFW was not where the problems occurred, since my tcpdump pcap files taken at VMs in Shanghai and Tokyo showed the packets traversing the GFW. I also reproduced similar performance issues using VMs in other countries, such as Japan and Singapore, where there is no GFW, as part of attempts to identify paths over which I could forcibly route my traffic via iptables rules. It is obvious to me that you will continue to deny bufferbloat was a problem and instead blame something else, yet as others have already explained to you, bufferbloat makes problems worse. After a delay of a few hundred ms, my VoIP session did not care if the packets were delivered or not, since at that point the packets were useless, yet enormous buffers would delay them and other traffic well past a sane expiry instead of dropping them, to the detriment of all trying to use the peering links.


> I do not think mental illness is the cause of people posting wrong remarks, but when remarks strike me as particularly bizarre, as yours have here, I think consideration of mental illness itself is worthwhile, productive and interesting. I have been very kind from the start, when I informed you that you were making yourself look bad. Being curious about what causes you to persist in such behavior is also a form of kindness, especially when it leads to a recommendation of professional help, although those who need help are not always willing to get it.

It has nothing to do with the topic and doesn't add substantive discussion to it. The bizarre part is you keep bringing up mental illness without actually specifying one. You should just admit to yourself that you use it to skirt the rules, condescend and gatekeep. Please read the HN guidelines and keep that kind of rhetoric to other social media platforms where it is condoned.

> I should point out that you were not interested in debating in good faith, as your sole purpose is to make the point that people should treat buffer bloat as a theoretical issue that does not actually affect people

I've been debating in good faith, but when you repeatedly misinterpret what I'm saying and continue to argue for an imagined argument, I give up.

> I should point out that you were not interested in debating in good faith, as your sole purpose is to make the point that people should treat buffer bloat as a theoretical issue that does not actually affect people, with the explanation for every incident of buffer bloat always being something else as if buffer bloat were not a factor. You were applying a pattern of overanalysis to the discussion that can be used to claim the contrary to anything. I really do not understand your apparent need to deny the existence of a well known issue, but it is bizarre in a bad way. You would be much happier right now if you had dropped the subject.

I'm actually fine; I just don't take well to pointless tangents, ad hominems, and curmudgeonly behaviour, which is really all you're doing. It took you several posts to get to the content of what I was asking. As for overanalysis; is it though? A lot of people do not know how to analyse a problem in the first place, ignore variables, and jump to conclusions due to confirmation bias. You're taking this personally because you think I'm underestimating your ability to diagnose the problem, but that's not what I'm getting at; central to my argument is that there is a variety of problems on the Internet and it is hard to actually distill an accurate diagnosis. Experiencing latency or jitter is a tale as old as time, and the overarching argument I've been pointing out is that we've improved Internet infrastructure to the point that bufferbloat just isn't relevant for the most part (as it is hard to manifest and trigger), yet it still gets hyped as an issue. You even conceded this point earlier.

> It is obvious to me that you will deny bufferbloat was a problem and instead blame the peering links

I didn't deny it, I just said that you don't have enough information to argue its case, especially with the lofty argument that it is occurring at the ISP as opposed to your CPE. Let alone your ISP in China which immediately raises red flags with all the other issues that are at play.

Anyways, I'm kind of done, you're imagining a villain you must slay, resorted to gatekeeping, ad hominems, pointless tangents, etc. I'll leave you to it.


Multiple people have tried to explain buffer bloat to you and you have repeatedly denied it affects people in reality. The discussion with you has been little different from people saying that a number of things are explained by the Earth being round only to be told, by you in this analogy, that we have failed to consider many other things and thus cannot really know that the earth is round.

This is the most bizarre choice of “hill to die on” that I have ever seen.


Nvidia has the Tegra line, but the market is not interested in it outside of game consoles.


Or Qualcomm used their monopoly to keep Nvidia out of phones.


Before the Switch, basically every company made one or two Tegra products, only to never use Nvidia again. Tegra was late and bad.


Qualcomm wouldn’t have had to try to do that. Tegra was basically a side project for Nvidia that they barely cared about until the Switch came along.


Tegra chips were the launch platform for Android tablets. They were also popular in automotive.


Yeah and then NVIDIA got bored and didn’t iterate at the same rate as everyone else.


The quotation is more impactful in the original Latin: Quis custodiet ipsos custodes?


custodes[.]ai would be a great startup name


Actually, Custodes would have nothing to do with abominable intelligence </warhammer 40k>


...What if we called it a "Machine Spirit"?


They did do more than just chemicals for film, but then decided to spin off the business so that they could focus on the film business:

https://en.wikipedia.org/wiki/Eastman_Chemical_Company

One really must admire their determination to throw all of the lifeboats overboard to go down with the film ship. Every time they have a chance to save themselves by moving away from a single-minded focus on film, they discard the opportunity. They just needed to mimic Nikon and Canon to stay in business, but refused to do even that.

The only way I see Kodak having a happy ending would be if Eastman Chemical purchases their former parent company to put it under sane management.


That's way too simplified. There was no "just" mimicking Nikon and Canon; every other company that tried that either died or pivoted away from the attempt.


It makes sense to spin off a successful subsidiary into a separate company and bankrupt the unsuccessful parent company. ECC's net income is four times Nikon's and one third of Canon's, so it fits right in the middle. Seems like a happy ending for Kodak stockholders, beyond the sentimental value of continuity of brand.


Survivorship bias. There's no shortage of dead companies that did try to pivot, and, as it turns out, it's harder to become successful in a new field than just saying 'We're trying something new, boys!'


If you want a good example of a site with a theme switcher:

https://www.csszengarden.com/pages/alldesigns/

