Hacker News | cageface's comments

Only in the US. Apple Maps is still far behind Google Maps in most of the rest of the world.


It was great when I was visiting China. I only tried it because Google Maps was unavailable (nothing Google worked at all except push notifications, because those come from Apple), and was pleasantly surprised. Every train station and mall had detailed maps.

Apple Maps is also pretty good here in Canada, although I would never trust it to tell me with any accuracy what a given business's hours are, or whether it even still exists. Google is much better at that.


I think the term you're looking for is virtue signaling.


Yes, that. It was a typo. Thanks.


You can really see this when trying to build apps with Swift & SwiftUI. The language and the framework seem to be optimized for nice terse WWDC demos but both fall apart pretty quickly when you start to do any heavy UI lifting with them. And I think that's starting to bleed into their own native UI now too. The lousy macOS settings app is a good example.

Unfortunately there don't seem to be any good alternatives to Apple. Windows is even worse.


Yes, I was a fairly early SwiftUI guinea pig, when I'd mistakenly assumed it was solid because of how Apple was pushing it, and your "WWDC demo" is spot-on.

The DSL could've been better (while still syncing between code and direct-manipulation GUI painter). And the interaction model seemed like it wasn't to be trusted, and was probably buggy (and others confirmed bugs). The lack of documentation on some entitlements APIs being demoed as launched left me shouting very bad words on multiple days (which is not something I normally do) before I made everything work.

I could feel this, and ended up wrapping all my UI with a carefully hand-implemented hierarchical statechart, so that the app would work in the field for our needs, the first time, and every time. Normally, for consumer-grade work, I would just use the abstractions of the interface toolkit, and not have to formally model it separately.

Don't get me started on what a transparently incompetent load of poo some of the Apple developer Web sites were, for complying with the additional burdens that Apple places on developers, just because it can. Obvious rampant data consistency problems, poor HCI design, and just plain unreliable behavior. I think I heard that at least some of that had been outsourced, to one of those consulting firms that everyone knows isn't up to doing anything competently, but that somehow gets contracts anyway.


All of these modern "declarative" frameworks seem to be optimized for hello world kinds of apps. Jetpack Compose, too.

And SwiftUI is just being too smart sometimes: https://tonsky.me/blog/swiftui/

Not sure about other people, but for me, my UI framework making its own heuristic decisions about how to lay out and style my views is the last thing I want. It robs me of the certainty that my UI will look and work the way I intend. And this is why, as an Android developer, I still build my apps with decade-old tried and true technologies.


Yeah that view builder syntax is a perfect example of optimizing for the wrong thing. It makes for nice short examples but in real apps your compile times explode trying to untangle these crazy generics and the compiler very often just throws up its hands and tells you to figure it out. This means you just start commenting out bits of code until you find by trial and error what it doesn't like.

That this is shipping in the native UI framework for a trillion dollar tech company is astonishing.


Except those technologies are now deprecated and you don't know when they might be removed. Jetpack Compose is now the vendor-favored way to build apps, so best practice is to use that.


I don't care what "best practices" are. Seemingly everyone sticks to them, yet here we are, discussing how software quality throughout the industry has taken a dip.

> Except those technologies are now deprecated and you don't know when they might be removed.

Views and activities and XML layout will never be removed, of that I'm certain. After all, Compose does use views in the end. That's the only way to build UIs that the system itself understands. And, unlike SwiftUI, Compose itself isn't even part of the system; it's a thing you put inside your app.

I don't care about deprecations. Google has discredited itself for me and its abuse of the @Deprecated annotation is one of the reasons. The one thing that's very unfortunate is that all tools unquestionably trust that the person who puts @Deprecated in the code they maintain knows what they're doing, and nothing allows you to selectively un-deprecate specific classes or packages; you can only ignore all deprecations in your class/method/statement.

And, by the way, I also ignore the existence of Kotlin. I still write Java, though it's Java 17. The one time I had to deal with Kotlin code (at a hackathon), it felt like coding through molasses.


Their new wifi network selector is laggy as fuck. The old one was perfectly fine. This is just like windows reimplementing basic UIs in their UI-framework-of-the-year.


KDE 6.3 is pretty great these days.


Windows is only worse if you don't consider the freedom of choosing the hardware it runs on and its ability to be modified to run as you see fit.

Apple is really losing the plot, because they need their software to be good to sell their hardware. Microsoft doesn't even have to care that much, because no relevant alternative is coming out any time soon (as the various Linux failures have shown), but at least you don't have to give them a lot of money (in fact, as close to zero as possible if you really want to).


Ironically, Linux with KDE is very good at being pixel-perfect, responsive, and enjoyable to work with.

What a time to be alive.


"in the beginning was the command line"


I recently came to this realization in a large typescript codebase. It's really important to understand who owns data and who has the right to modify it. Having tools to manage this and make it explicit built into the language is so helpful for code correctness and is especially beneficial for maintaining code you didn't write yourself.
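
To make the idea concrete, here is a minimal TypeScript sketch (hypothetical names, not from the actual codebase) of the kind of explicitness I mean: the signatures document which functions may mutate a value and which may only read it.

    // Hypothetical example: the signature spells out who may modify the data.
    interface Account {
      id: string;
      balance: number;
    }

    // Read-only access: callers can pass an Account knowing it won't be changed.
    function formatBalance(account: Readonly<Account>): string {
      return `${account.id}: ${account.balance.toFixed(2)}`;
    }

    // Mutation is confined to the code that actually owns the data.
    function applyDeposit(account: Account, amount: number): void {
      account.balance += amount;
    }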


Another great way of handling this, if you cannot switch out the language, is to adopt a more functional approach and keep mutations in one place (or as few places as possible). So instead of having all the X services/adapters/whatever pass data around that they mutate along the way (the typical "hiding implementation details in objects/classes"), have all of those just do transformations on data and return new data, then have one thing that can actually mutate state.

Even if you cannot go as extreme as isolating the mutation into just one place, heavily reducing the amount of mutation makes that particular problem a lot easier to handle in larger codebases.
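
A minimal sketch of that shape in TypeScript (hypothetical names): the transformation steps only return new data, and a single "commit" step is the one place allowed to mutate anything.

    interface Order {
      readonly items: readonly { readonly sku: string; readonly price: number }[];
      readonly discount: number;
    }

    // Pure transformations: take data in, return new data, never mutate.
    const addItem = (order: Order, sku: string, price: number): Order => ({
      ...order,
      items: [...order.items, { sku, price }],
    });

    const applyDiscount = (order: Order, discount: number): Order => ({
      ...order,
      discount,
    });

    // The one place that is allowed to mutate: persisting the final result.
    const orderStore = new Map<string, Order>();
    function commitOrder(id: string, order: Order): void {
      orderStore.set(id, order); // mutation isolated here
    }

    // Usage: compose the pure steps, then commit once.
    const draft: Order = { items: [], discount: 0 };
    commitOrder("order-1", applyDiscount(addItem(draft, "sku-42", 9.99), 0.1));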


Right, this is what I tried to do, but unfortunately trying to mark everything immutable in TypeScript leads to some very unergonomic type signatures. Hopefully this can improve in the future.


Could you show an example of what you mean? Not sure how not mutating data would lead to more unergonomic type signatures; I'm sure an example would help me understand. Although it wouldn't surprise me if TypeScript makes things harder.


You are polluting every variable signature with `readonly`. This also can create cascading effects where making one function accept only readonly variables forces you to declare readonly elsewhere as well. Quite similar, in a way, to Rust.
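
Something like this, for instance (a small TypeScript sketch with made-up names): once the data is declared readonly at the source, every signature it flows through has to follow suit.

    // Once the data is declared readonly at the source...
    const prices: readonly number[] = [1, 2, 3];

    // ...every function it passes through must accept readonly too,
    // or the call no longer compiles.
    function total(values: readonly number[]): number {
      return values.reduce((sum, v) => sum + v, 0);
    }

    function report(values: readonly number[]): string {
      // If this parameter were a plain number[], report(prices)
      // would be a compile error, so the annotation spreads up the call chain.
      return `total: ${total(values)}`;
    }

    console.log(report(prices));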


If I have a complex structure MyStruct that I make recursively readonly, it doesn't show up in the IDE as DeepReadOnly<MyStruct> or something like that. It shows up as a huge tree of nested readonly declarations, so the original type is highly obscured.
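
For reference, a recursive readonly helper along those lines might look like the following (a hypothetical sketch; TypeScript doesn't ship a DeepReadOnly built in), and the complaint is that editor tooltips tend to show the fully expanded result rather than the alias name.

    // Hypothetical recursive readonly helper; not part of TypeScript's standard lib.
    type DeepReadOnly<T> = T extends object
      ? { readonly [K in keyof T]: DeepReadOnly<T[K]> }
      : T;

    interface MyStruct {
      name: string;
      children: { id: number; tags: string[] }[];
    }

    // Hovering over `frozen` in many editors shows the expanded tree of
    // nested readonly properties instead of the compact DeepReadOnly<MyStruct>.
    declare const frozen: DeepReadOnly<MyStruct>;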


When I was in graduate school the chances of getting a tenured position weren't anywhere close to 1 in 3. Where are you getting that number?


Roughly speaking, there are 10-20 PhDs for every faculty position. But not every PhD wants a faculty position, even in principle.

Many want to do research in the industry, or in public research labs. Many do a PhD because it opens doors in other careers, such as medicine or education. Some PhDs are hobby projects people do in retirement. Some are side projects for people who want to study something relevant to their main job (those are quite common in social sciences).

Then there are those who actually want a career in academia. But many of them are not trying seriously, because they restrict their job search to a single city / region / country. The 1 in 3 chance is for those who are flexible enough and committed enough to accept the realities of the academic job market.


Most of my classmates would have been very interested in an academic career if they thought their chances were even one in ten and this was in a top tier program. And they were all totally willing to relocate too.


The best estimate I can find is that about 3.5 million people in the US have a PhD or another research doctorate. That includes non-immigrants who study or work in the US. According to AAUP statistics, there are ~200k full-time equivalent tenured or tenure-track faculty in what they consider doctoral institutions. Such positions are the typical but not the only option for a research career in academia. That works out to roughly 17 doctorate holders per position (3.5M / 200k), which is well within the parameters I used for my estimate.

It's important to understand that in this context, willingness to relocate means willingness to spend your life outside your home country. Even in a large country like the US, there are often structural reasons why universities are not interested in hiring someone like you when you are in the job market.

For example, maybe a field such as ML starts getting popular. Universities respond by hiring new faculty, who in turn hire new PhD students. Almost a decade later, when those students have graduated and are in the job market, the demand may have stabilized. Universities already have plenty of faculty in that field and have little interest in hiring more.

Which means that if you chose a popular field, your chances of getting hired may be below the average. If you want to stay in the academia, your best bet may be moving to a country that didn't experience a similar hiring frenzy and is now trying to catch up.


Yup. It's often much worse.

Although that person may be lumping in the research professorships and various ass. dean positions.


You could add mine to that list: https://www.plastaq.com/minimoon


I'd be curious to try it but I don't understand from the site whether it is mobile only. It claims that there is a utility to sync with desktop but then it doesn't run on desktop?


It's both desktop and mobile. If you're browsing on mobile it will show you screenshots of the mobile app.

The sync app is a separate, free app used just for serving files to the mobile app for syncing, though.


You can get all the software quality you want if you're willing to pay for it.

Users have now been taught that $10 is a lot to pay for an app and the result is a lot of buggy, slow software.


The problem is that we generally no longer have a good sense of what good software is valued at; it USED to be around $300-500, and with companies being incentivized to go subscription-based, who knows intuitively what that is anymore.


We’ve also taught users that extremely expensive software like SAP and Blackboard is crap too (at least from an end user’s perspective).


This is the inevitable result of decades of feature creep in software that tries to be too general and meet every enterprise edge case.


Yep, but the end user doesn’t care.

Those big software packages are sold to admins anyway.


I work in a two man team making software that is 500-1000 times faster than the competition and we sell it at ~40% of their price. Granted, this is in a niche market but I would be very careful in stating price/costs are the entire picture here. Most developers, even if you suddenly made performance a priority (not even top priority, mind you), wouldn't know how to actually achieve much of anything.

Realistically, only about 5% or so of my former colleagues could take on performance as a priority, even if you only asked them to avoid outright wasteful things and minimize slowness rather than optimize, because their entire careers have been spent optimizing only for programmer satisfaction (and no, this does not intrinsically mean "simplicity"; they are orthogonal).


If you really think what you're doing can be easily generalized then you're leaving a lot of money on the table by not doing it.


Generalized? Probably not. Replicated to a much higher degree than people think? I think so. It wouldn't matter much to me personally, outside of the ability to get hired, because I have no desire to create some massive product and make that my business. My business is producing better, faster and cheaper software for people and turning that into an opportunity to pay for things I want to do, like making games.

Disclaimer: Take everything below with a grain of salt. I think you're right that if this was an easy road to take, people would already be doing it in droves... But, I also think that most people lack the skill and wisdom to execute the below, which is perhaps a cynical view of things, but it's the one I have nonetheless.

The reason I think most software can be faster, better and cheaper is this:

1. Most software is developed with too many people, this is a massive drag on productivity and costs.

2. Developers are generally overpaid and US developers especially so, this compounds for terrible results with #1. This is particularly bad since most developers are really only gluing libraries together and are unable to write those libraries themselves, because they've never had to actually write their own things.

3. Most software is developed as if dependencies have no cost, when they present some of the highest cost-over-time vectors. Dependencies are technical debt more than anything else; you're borrowing against the future understanding of your system which impacts development speed, maintenance and understanding the characteristics of your final product. Not only that; many dependencies are so cumbersome that the work associated with integrating them even in the beginning is actually more costly than simply making the thing you needed.

4. Most software is built with ideas that are detrimental to understanding, development speed and maintenance: Both OOP and FP are overused and treated as guiding lights in development, which leads to poor results over time. I say this as someone who has worked with "functional" languages and FP as a style for over 10 years. Just about the only useful part of the FP playbook is to consider making functions pure because that's nice. FP as a style is not as bad for understanding as classic OOP is, mind you, but it's generally terrible for performance and even the best of the best environments for it are awful in terms of performance. FP code of the more "extreme" kind (Haskell, my dearest) is also (unfortunately) sometimes very detrimental to understanding.


I think I'd need a lot more than one claim with no evidence that this is true, but I appreciate the thoughtful response all the same.

Outside of really edge-case stuff like real-time, low-level systems software, optimizing performance is not that hard, and I've worked with many engineers over my long career who can do it. They just rarely have the incentive. In a few particular areas where it's critical and users are willing to pay, software can still command a premium price. Ableton Live is a good example.


I don't think either of us needs to convince the other of anything, I'm mostly outlining this here so that maybe someone is actually inspired to think critically about some of these things, especially the workforce bits and the cost of dependencies.

> Outside of really edge-case stuff like real-time, low-level systems software, optimizing performance is not that hard, and I've worked with many engineers over my long career who can do it. They just rarely have the incentive. In a few particular areas where it's critical and users are willing to pay, software can still command a premium price. Ableton Live is a good example.

This seems like a generous take to me, but I suppose it's usually better for the soul to assume the best when it comes to people. Companies with explicit performance requirements will of course self-select for people capable of actually considering performance (or die), but I don't take that to mean that the rest of the software development workforce is actually able to, because I've seen so, so many examples of the exact opposite.


I guess I just see so many of these "the whole industry except for me is doing it wrong" kinds of posts but almost never see anybody really back it up with anything concrete that I've grown extremely skeptical.


The next update to my player is going to support Opus. It would be nice if it was better supported by the OS though.

https://www.plastaq.com/minimoon


I think SwiftUI is to blame. In order to make a nice terse DSL for WWDC talks they really contorted the language design.


It takes a minute to build from scratch or to update when running "build_runner watch"? My app is over 40k lines and watch updates almost instantaneously.

