As someone who grew up in Dubai, the tourism industry everywhere breaks the soul of the place. Dubai, especially the places where people actually live and set up a livelihood, is a place like any other.
What a terrible day to be literate and be able to read into the ugly belly of self-righteousness disguised as morality.
I don't think the author was talking shit about Dubai, she was sticking up for the people who are being exploited there.
Obviously there's humanity and real lives everywhere, she's just advocating for those lives to be valued. With, ya know, human rights protected by the government.
The problem is not factuality. The problem is assuming that the entire world should now and forever be judged through the moral lens of post-war America.
Dubai seems like the peak of inequality and ostentatious luxury even to those of us who don't live in America (I assume you mean "the US", not the Americas).
There are genuinely bad places in the world even factoring out "American exceptionalism/egocentrism". Dubai seems to be one of them.
I'm going to assume, based on the author's name and the source's name, that this is all very "western". We have gotten very good at exporting all the misery elsewhere while we medicate and entertain ourselves and pretend that the savagery and the suffering do not exist.
The take here is very much Brave New World, going to visit the savages, and complaining about it. And then contrasting it with the "normal" sterilized life that the author leads.
It is neither self-righteousness nor morality: it is an abject lack of self-awareness and a sheltered existence.
Other commenters are wrong. Live-cell imaging can be done with older single-molecule localization microscopy using techniques like PAINT. The fluorophore is usually added strategically so that binding-unbinding events produce bursts of excitation. Algorithms can then infer the identity of single fluorophores from their excitation pattern/strength and predict whether two detections are distinct fluorophore molecules or the same molecule moving over multiple frames of image acquisition.
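A toy sketch of that last inference step (not any real SMLM package; the radius threshold and coordinates are made up): localizations within a small jump distance across consecutive frames are greedily linked as the same molecule, anything farther apart is treated as a new binding event.

```python
import math

def link_localizations(frames, max_jump=0.05):
    """Greedy frame-to-frame linking of localizations.

    frames: list of frames, each a list of (x, y) localizations.
    A localization within max_jump of a track's last position in the
    previous frame is treated as the same molecule still bound/moving;
    anything farther away starts a new track (a distinct binding event).
    """
    tracks = []   # each track: list of (frame_index, x, y)
    active = []   # tracks that were extended in the previous frame
    for t, locs in enumerate(frames):
        next_active = []
        for (x, y) in locs:
            best = None
            for tr in active:
                _, px, py = tr[-1]
                if math.hypot(x - px, y - py) <= max_jump:
                    best = tr
                    break
            if best is not None:
                best.append((t, x, y))     # same molecule, moved a little
                active.remove(best)
                next_active.append(best)
            else:
                tr = [(t, x, y)]           # new binding event
                tracks.append(tr)
                next_active.append(tr)
        active = next_active
    return tracks

# Two far-apart localizations -> two molecules; a slow drift -> one track.
frames = [[(0.0, 0.0), (1.0, 1.0)],
          [(0.01, 0.0)],
          [(0.02, 0.01)]]
print(len(link_localizations(frames)))  # 2 tracks
```

Real pipelines weight this by intensity and blinking statistics rather than distance alone, but the same-vs-distinct question reduces to this kind of linking.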
1. Stefan Hell has been theorizing about how to do super-res microscopy since the mid-90s, so the article saying it was sci-fi "20 years ago" is off by about 10 years.
2. Stefan Hell has recently given the world another new technique, MINFLUX, which seems to be his best gift to super-res researchers so far. :)
If you ask most neuroscientists they’d say the same. Only a small subset of us would cite the literature that the brain’s caloric neuronal activity is ~10-15% unaccounted for by the amount of glucose neurons have access to. It’s a niche within a niche. And debated by the majority.
Yup. An accomplished scientist friend of mine looked up a topic in which he’s an expert and was deeply unimpressed - outdated, inaccurate, incomplete, misleading info (perhaps because much relevant research is paywalled). LLMs are amazing but not all-knowing.
I used to think software developer would be the final thing that gets automated, because we'd be the ones making the specific new AI for each task. These days I think it's more likely that spy craft is the "final profession": there's nothing to learn from except trial and error, since all the info (that isn't leaked) is siloed even within any given intelligence organisation. So AIs in that field not only don't get (much) public info about the real skills, they don't even get the classified info. What they will get instead is all the fiction written about spies, which will be about as helpful to real spies as Mr. Scott's Guide to the Enterprise is to the design and construction of the Boeing Starliner.
Bingo! A file sent out creates a specific paper trail and accountability for all parties. If I want to make sure there’s a record of me sending documentation to someone, I’m not relying on giving them write permission to the critical piece of text… this isn’t even about distrust but about clarity for all parties in the file transfer.
Access logs + maintaining backups + version control + relying on the hope that no one’s cat runs on their backspace key during the session where access control says they logged in … that’s the stack you’re recommending vs sharing a file in an email and saying “have a nice day”.
Never. The Apple bet, the North Star, is that personal computing is both the present and the future. The minute an exception gets carved out, like “personal computing but not in Europe” then Apple enters a death spiral. They’ll deal with each blow that comes their way because it will come for everyone else in the market too, but they’ll still be in the lead.
I have disorganized thoughts about this, but it's not just a debate about vertical isolation vs not.
1. The size of Apple/Alphabet/Samsung makes it difficult to enter the market (see: factories having ridiculous MOQs for small-batch phone manufacturing), pushing everyone else out.
2. The size of the smartphone market makes it impossible not to have to deal with one of the above companies for certification, market penetration, or such. This makes them kingmakers. If a company somehow manages to become Facebook, Netflix, or Amazon, then the phone companies slide them a secret deal under the table. Everyone else gets a market-limiting set of terms that makes sure "tech" stays one of the "top" industries.
Combined, with no entry allowed, and with forces exerted outwards, we see broad social structures orienting /around/ how we use our phones, rather than the other way around, and that includes ad-monetized-absolutely-everything.
Phones and social media, today, are where TVs and broadcasts were in the 1950s/60s. Ubiquity and centralizing forces. If someone told us in the 1950s a TV manufacturer was exerting pressure on our forms of information distribution and was choosing which voices get a seat at the table, we'd rightly call that archaic and wonder why people would accept a technology provider as a market-shaping force. But today we accept it nonetheless. I refuse to believe the argument that the world's largest company can't figure out how to build a secure pipeline without making plenty of my decisions on my behalf...
Exactly. All the free market logic assumes that barriers to entry are low. They are incredibly high and the market is naturally prone to converging on a single solution. There's basically room for two smartphone ecosystems. Microsoft/Nokia couldn't sustain a third. Android-adjacent things like Amazon Fire and Tizen have little market share.
> factories having ridiculous MOQs for small-batch phone manufacturing
Ironically in the contract manufacturing area the market is actually efficient. Small batches just cost more as an intrinsic fact about manufacturing. I guarantee you could get a quote for any quantity of manufacturing above 1, you just wouldn't like it.
Laws need to be revised to make it easier to remix off the shelf components.
I argue, default compulsory license fees should be a feature of copyright and patent. A 'reasonable' cap to the maximum it costs to reuse an existing device / idea. (Also that it should be a LOT tougher to patent things, maybe 1 patentable thing per expert examiner's work week, which would be the cost of filing for a patent. That only individuals should be able to own a patent. That companies could create 'prior art' with academic detail releases.)
I don't think it's so much patent or copyright as iPhone sits on a huge stack of technology which is proprietary by contract as well. It's very, very vertically integrated.
New Android startups appear now and then. The sort of thing that's achievable with a few tens of millions of dollars of funding. But Android as a whole represents a huge pile of work, sitting on top of Google Play Services and the Play Store, as we can see by the relative non-success of Amazon Fire.
(I was actually involved in the development of a phone-like handset device that was built around phone SOMs from Sierra Wireless. The first minimum order for assembly was ten units, scaling to a thousand after alpha test.)
Remixing off the shelf isn't really the issue if you want to compete; it's the fact that you won't even be able to buy the competitive components at all, because they won't sell them to you even if you ask and have the money.
That’s covered by the compulsory license part of the comment. Say Apple has a patent on the iPhone, Apple must license the iPhone itself to me at a reasonable fee. Then I can go sell jPhones all day to compete.
I don't think you could compete with the exact same product... the advantage of scale would still give Apple more than sufficient edge.
However, if you happened to like, e.g., Apple's screen, or camera, or another component that was better than what else could be selected, it could be part of a design that competed on other merits. Maybe the touch digitizer is just that much better, so it might make sense on some models of Android, or on some libre phone more closely based on Linux or BSD. Or some company makes an iPhone-like device but to gov-spec standards for a given country. (In my mind I'm thinking US Gov, but IP laws tend to be international too, so maybe Germany wants its own secure phone.)
Who are you demanding from if there's no incentive to invest? Either the option is there, in which case the incentive clearly must have been there for someone, or it's not, in which case the incentive is the same for wanting to have it in the first place.
Hmm. Depends what you're referring to - iPhone is indeed a closed system, by contractual arrangement, but Android isn't, and it absolutely is feasible for third party Android devices to exist. Occasionally someone does a new phone startup.
Not especially a matter of patent, just good old fashioned contract exclusivity.
> the market is naturally prone to converging on a single solution
Not only is it "naturally prone" to it (with things like bulk efficiencies), it is also economically prone to it. A free market with no monopolies drives profit towards zero. No company wants this, so the logical response is to become a monopoly (or as close as possible) by putting up barriers to entry and competition.
Note: There's room for more smartphone ecosystems, but not mainstream ones. There are a few nonmainstream phones out there, from Linux phones (Pine, Librem, MNT I think now?) to more openish Android phones (Fairphone) to completely different platforms (that I'm pretty sure exist but I don't remember any of).
> If someone told us in the 1950s a TV manufacturer was exerting pressure on our forms of information distribution and was choosing which voices get a seat at the table, we'd rightly call that archaic and wonder why people would accept a technology provider as a market-shaping force. But today we accept it nonetheless.
A smartphone from Google or Apple is also pretty much required for certain government apps, banking/financial services, and so forth. I wouldn't call it a stretch to say that in the future it would be mandatory to have these duopoly controlled devices on your person at all times, like how you need to carry an ID card.
Many of those apps don't work on rooted phones or custom ROMs without workarounds and doing so is a TOS violation in many cases as well. Also imagine what it would be like if your Google or Apple account got banned by accident with no human support to sort it out.
That's an excellent point. I use Android LineageOS with no google apps. The amount of bullshit that I, a literal computer science PhD, have to put up with to somewhat avoid the more pernicious parts of the monopoly, is insane. Critical and even mandatory parts of my life (banking, government services) require me to engage with google in one way or another.
I actually put up with Lineage for many years before it got so bad that I had to switch to an iPhone. Before 2020 many apps still worked fine with it. All my computers are Linux and I self host everything, but I just couldn't risk an account lockout or a broken bank app.
Honestly I wish there was a legal requirement for those services to provide full access via a relatively open platform (like a web site), not a mobile app.
The funny thing is, I think goals and constraints are both tools that should serve the user. Constraints that don't help the user achieve goals they would have otherwise accomplished, goals that are meaningful and important to them, are useless constraints.
I think figuring out the constraints one likes to work with can act as a great filter once someone knows what kind of success, goals, values and life they want to inhabit. Otherwise, it's as arbitrary as goal setting.
For me, I parroted other people's cool-sounding goals for a lot of my life, achieving varying degrees of success and happiness. Only in retrospect can I look at my favourite success and failure stories and consider which constraints, if I held them earlier, would have helped me narrow down to those favourite storylines from the get-go. Those constraints, I keep near and dear to my heart and attention in my daily life.
I don't think there's a way to set a meaningful constraint before practicing setting goals first. Walk before you run, etc. etc.
Not if (a) it misses that a line of research was refuted 1-2 years ago, (b) the experiments it recommends (RNA-Seq) are a limited resource that requires a whole lab to be set up to act on efficiently, and (c) the result of the work is the upregulation of a gene, which could mean just about anything.
Genetic regulation can at best tell us about the _involvement_ of a gene, but nothing about why. Some examples of why a gene might be involved: it's a compensation mechanism (good!), it modulates the timing of the actual critical processes (discovery-worthy but treatment-path neutral), it is causative of a disease (treatment potential found), etc...
We don't need pipelines for faster scientific thinking ... especially if the result is that experts will have to re-validate each finding. Most experts are in any case truly limited by access to models or access to materials. I certainly don't have a shortage of "good" ideas, and no machine will convince me they're wrong without doing the actual experiments. ;)
This is, I think, what I've been struggling to get across to people: while some domains have problems that you can test entirely in code, there are a lot more where the bottleneck is so resource-constrained in the physical world that an experiment-free researcher has no value.
There's practically negative utility in detecting more archeological sites in South America, for example: we already know about far more than we could hope to excavate. The ideas aren't the bottleneck.
There's always been an element of this in AI: RL is amazing if you have some way to get ground truth for your problem, and a giant headache if you don't. And so on. But I seem to have trouble convincing people that sometimes the digital is insufficient.
This is a great framing - would you please expound on it a bit? Software is almost exclusively gated by the "thinking" step, except for very large language models, so it would be helpful to understand the gates ("access to models or access to materials") in more detail.
@OP: I'm wondering whether, beyond just sorting, filtering could be added? I would want to find books that are both highly rated and have high numbers of reviews.
I am going to look into building out a more extensive filtering system where you could combine filters. Trying to decide on a balance of complexity and ease of use.
The personal use case I aim to solve is getting accurate lists of the top-rated books for a given genre, and perhaps from before a given year. I've been listening to lots of audiobooks recently, yet I only have so much time, so I'm aiming to only listen to the best.
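The combined filter-then-sort could be sketched like this (field names and thresholds are hypothetical, not from the actual site): keep only books above a rating and review-count floor, optionally restrict by year, then rank by rating with review count as the tiebreaker.

```python
books = [
    {"title": "A", "rating": 4.6, "reviews": 12000, "year": 1998},
    {"title": "B", "rating": 4.8, "reviews": 150,   "year": 2021},
    {"title": "C", "rating": 4.4, "reviews": 90000, "year": 2005},
]

def top_books(books, min_rating=4.5, min_reviews=1000, before_year=None):
    # Filter: both a rating floor and a review-count floor, so a 5.0
    # with three reviews doesn't beat a well-established 4.6.
    hits = [b for b in books
            if b["rating"] >= min_rating
            and b["reviews"] >= min_reviews
            and (before_year is None or b["year"] < before_year)]
    # Sort: rating first, review count breaks ties.
    return sorted(hits, key=lambda b: (b["rating"], b["reviews"]),
                  reverse=True)

print([b["title"] for b in top_books(books)])  # ['A']
```

Exposing just those three knobs (min rating, min reviews, cutoff year) might be a reasonable middle ground between a single sort and a full filter builder.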