Henk0's comments

Since this thread might actually catch the eye of some people who are responsible for these kinds of things (nothing ever seems to happen with stuff that comes up in the Apple Support forums), I'll add my current pet peeve bug:

On iOS, I use the Notes app to keep track of my workout routine: just a simple table with columns for exercises and rows for workout sessions. For a while now, there's been a bug where the text gets confused about which row it should display on, though only in some columns. In one or a few columns, the entry for the last workout will appear a few rows above where it should be – sometimes even between rows. When I tap the cell in the bottom row to input a new entry, the cursor ends up somewhere above it. The bug is quite inconsistent, but it often persists between restarts of the app. It seems to have something to do with there being empty cells in a column.

Anyone else experience this?


I completely agree, though I haven't been thinking in terms of the cancer metaphor myself. I have been thinking a bit about how we could limit the negative effects of both advertising and other phenomena like social media algorithms:

1. Almost all advertising is based on the manipulation of human cognitive biases. There is a limited set of biases, and the mechanisms by which they can be exploited are both limited and easily detected – we can most likely train AI to detect it. It's therefore possible to start thinking seriously about laws that ban corporations and organisations from creating marketing that exploits these cognitive biases.

2. When it comes to social media platforms, there are two routes we could go down: either we regulate their algorithms the same way, or we force social media platforms both to make their recommendation algorithms open source and to open up their platforms to third-party recommendation algorithms that people can choose to use instead. This would be like a recommendation algorithm app store that the company has to provide to its users. You might select a YouTube recommendation algorithm that optimises for personal development, or a Facebook feed that optimises for creating real-world connections.
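To make the third-party recommender idea a bit more concrete, here's a minimal sketch of what such a plug-in interface might look like. This is purely illustrative and based on my own assumptions – the type and method names are hypothetical and don't correspond to any real platform API – written in Swift since that's what I work in:

    import Foundation

    // Candidate content plus whatever signals the platform already collects.
    struct ContentItem {
        let id: String
        let topics: [String]
        let postedAt: Date
    }

    struct UserContext {
        let followedTopics: [String]
        let recentlyViewedIDs: [String]
    }

    // A third-party recommender only needs to implement this one method.
    protocol FeedRecommender {
        func rank(candidates: [ContentItem], for user: UserContext) -> [ContentItem]
    }

    // Example plug-in: a purely chronological feed that ignores engagement signals.
    struct ChronologicalRecommender: FeedRecommender {
        func rank(candidates: [ContentItem], for user: UserContext) -> [ContentItem] {
            candidates.sorted { $0.postedAt > $1.postedAt }
        }
    }

    // Example plug-in: boosts items matching topics the user explicitly follows.
    struct FollowedTopicsRecommender: FeedRecommender {
        func rank(candidates: [ContentItem], for user: UserContext) -> [ContentItem] {
            candidates.sorted { topicMatches($0, user) > topicMatches($1, user) }
        }

        private func topicMatches(_ item: ContentItem, _ user: UserContext) -> Int {
            item.topics.filter { user.followedTopics.contains($0) }.count
        }
    }

The point is that the platform keeps control of the data and the candidate pool, while the ranking logic becomes a swappable, auditable component that users (or regulators) can inspect and choose between.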

Of course corporations would fight this kind of legislation tooth and nail, but that's how it is. I would be happy to get some thoughtful feedback on these ideas, their technical and legal plausibility, and any potential unintended negative consequences or loopholes that could undermine them.


The story begins at a kitchen table in Stockholm in 2015, where I'm playing Scrabble with my flatmate. I tell my friend I want to see if we can come up with a new type of word game.

Playing around with the game tiles, I eventually put four of them together in a square and realise that this could be a good start.

At the time, I've recently learned iOS app development by watching tutorials on Lynda.com (Simon Allardice is probably still my all-time favourite teacher), so I start developing the first iteration of the game using the newly released Swift and SpriteKit combo. Having a lot of fun, learning as I go.

After about a year, I have a functional – but not too beautiful – two-player game that works OK for local pass & play. Only on iOS though. I should have thought about the multi-platform thing earlier on...

In 2016 I get my first jobs as an iOS developer, first in a tiny startup at an incubator office in central Stockholm, then at FEO Media, the company behind the huge global success QuizClash. After a few months working on their Swift remake of the original game, I join a new project to create a sequel to QuizClash, using Unity3D. After some time, I ask a friend and colleague on the team – the UI designer – if he'd be keen to work on my game with me as a side project, and he agrees. We start iterating designs, while I start reworking the game from scratch in Unity

After about a year, we have the QuizClash sequel ready for release, but this unfortunately coincides with the company being acquired by a competitor, MAG Interactive, who choose to keep going with the original game. Bad timing, and a year of work wasted. Such is life.

I go off to Vietnam in early 2018 to work on another startup idea. Unfortunately I end up in a health crisis there and that falls through. Once I'm kind of back on my feet again, I decide to refocus on the word game. I'm living off savings and subleasing my apartment in Stockholm, so I can work pretty much full time, and some family and friends have joined as beta testers, some of them becoming quite obsessive players. This is a good phase

After a few more ups and downs, on-periods and off-periods, we finally get around to making a proper release of the game. The year is now 2022. I have some plans for marketing, and have even created a tool that lets me make animated clips of finding interesting words to post on social media. But I can never bring myself to actually do the marketing – partly because of a strong aversion to self-promotion and marketing in general, partly because life, health and work get in the way, partly out of laziness, and possibly partly because of some underlying psychological issues that someone else might be better positioned to see than me.

Since the release, I've kept thinking I'll get around to marketing the game, but it's still just sitting there. A fully functional, genuinely fun and challenging multiplayer word game, with some unique game mechanics I've not seen elsewhere. Unseen, unloved, gathering virtual dust on the iOS App Store and Google Play.

Not sure what triggered me to write this post just now. Maybe I'm hoping for someone to read this, check out the game, and go "That's a great game, I want to help make it big!". Maybe I want to find a new burst of motivation to get started on the marketing that never happened. Maybe I just want some interesting replies from other creators with similar stories

Anyway, the name of the game is Lingo Lords, and you can find it on the iOS App Store and Google Play Store. If you want to challenge me at the game, my player name is Henk0. Expect to be beaten, as I've played the game more than literally anyone else. Available game languages right now are English, Swedish and Dutch. If you try it and like it, maybe post your player name here for others to challenge you. If you want to invest or help out with marketing for a percentage of shares, pm me. Consider reviewing it on the app store(s), etc. etc.


A preemptive addendum to my previous reply:

Yes, LLMs currently only deal with text. But GPT-5 will supposedly be multimodal, so it will also have visual and audio data to associate with many of the concepts it currently only knows as words. How many more modalities will we need to give it before we can say that it understands something?

Also, GPT-4 indeed doesn't do any additional training in real time. However, it is being trained on the interactions people have with it. Most likely, near-future models will be able to train themselves continuously, so that's another step closer to how we function.


I took the liberty of asking GPT-4 to take the second statement and your response to it, and turn them into a fable:

"Once upon a time, in a village nestled by a grand mountain, lived a wise Sage. The Sage was known throughout the lands for his vast knowledge and understanding, for he had spent his life studying the texts of old and the secrets of the world.

In the same village, an Artisan, skilled in the craft of making extraordinary mirrors, lived. These were no ordinary mirrors, for they were said to reflect not just physical appearances, but also knowledge and experiences. Intrigued by the wisdom of the Sage, the Artisan decided to make a mirror that would reflect the Sage's knowledge.

After many days and nights of meticulous work, the Artisan finally crafted a mirror so clear and pure, it held a reflection of all the knowledge and wisdom the Sage had gathered throughout his life. The mirror could answer questions about the world, cite ancient texts, and reflect the wisdom it was imbued with.

Word quickly spread about the Sage's Mirror, and villagers began to claim, "This mirror is as wise as the Sage himself! It knows and understands as much as he does!"

However, a wise Old Woman of the village, known for her insightful observations, gently corrected them, "This mirror, as remarkable as it is, contains a reflection of the Sage's knowledge. It can share what the Sage knows but doesn't truly understand the way the Sage does."

The Old Woman continued, "The Sage has spent years learning, pondering, and experiencing life, which the mirror cannot replicate. The Sage's understanding implies the ability to think, reason, and learn in ways that the mirror, no matter how complete its reflection, simply cannot. The mirror's reflection is static, a snapshot of a moment in time, while the Sage's wisdom continues to grow and adapt."

The villagers learned a valuable lesson that day. They realized the mirror was an extraordinary tool that held vast knowledge, but it was not a substitute for the genuine understanding and wisdom of the Sage."

- Not too bad for a mirror.

I'd be interested to hear what you think is so special about human understanding. We also just absorb a lot of data, make connections and inferences from it, and spit it out when prompted, or spontaneously due to some kind of cognitive loop. Most of it happens subconsciously, and if you stop to observe it, you may notice that you have no control over what your next conscious thought will be. We do have a FEELING that we associate with the cognitive event of understanding something, though, and I think many of us are prone to read a lot more into that feeling than is warranted.


Great points. Will just add a point 1.5: There's usually an inverse correlation between ill intent and competence, so the subset of people who both want to cause harm to others on a mass scale and who are also able to pull it off is small


This. So much this.

I'm completely dumbfounded by obviously highly intelligent people consistently not getting this, and dismissing current-generation AI systems as not being intelligent because they can't reliably solve massively complex problems in one go. As if anyone would expect a human programmer or researcher to just intuitively come up with a complex program, or the correct answer to a hard problem, instantly, every time.

Human thinking and problem solving involves a lot of trial and error, iterative thinking, and sharing and discussing the problem with other humans. Processes that AI researchers are just now beginning to explore, with results like increasing reasoning ability by 900% in a recent paper. Every thinking human runs a near constant loop of thought, with no conscious control of which thought will appear next (we're very good at fooling ourselves that we have control though)

We do have super-intelligences already, but they're severely handicapped by lacking a bunch of these – apparently fairly straightforward to implement – abilities, plus a few senses, the ability to directly effect change in the physical world (which really isn't needed if they can get access to human agents who will do their bidding, wittingly or unwittingly), and the ability to self-improve. With regard to self-improvement, increasing coding skills combined with iterative 'thought' loops should get there in very little time, considering the current rate of progress.

There's also the idea that a single AI model should be able to do everything our human brains do, when our brains actually contain a number of specialised subunits that handle different aspects of our behavioural repertoire. It's reasonable to allow for the same thing in an AI system, where specialised sub-networks handle input, output and other subtasks. AI systems also have the advantage of being able to add an arbitrary number of subunits to increase their capacity to solve various problems.

We seem to suffer from a species-wide narcissism with regard to our own intelligence and capabilities, and there's this huge focus on the number of connections in the human brain – most of which deal with things that are by no means necessary to act on the world unless one has a meat body and the need to navigate social situations, make friends and mate. The fact is, we have terrible short-term memory (worse than chimpanzees), slow processing times, and lots of cognitive heuristics, many of which cause more harm than good in the modern world. We are emotional and easily fooled. Even the most intelligent people historically have believed in what we now consider fairy tales. We are slow to take in information, bad at storing it, and generally bad at transmitting it. A few of us can generate great ideas – building on accumulated knowledge from our forebears and peers – but most of us are just not that great at coming up with anything original or useful.

I've been actively looking for good arguments against AGI being much closer than we should be comfortable with, and for reasons why we should not fear systems that surpass us in intelligence. All I've come across so far is some combination of the above, often expressed with a dismissive attitude, disparaging current LLMs as parrots (that can apparently reason at university level, but much more quickly), and pejorative terms like fearmongers and doomers to describe those of us who really don't think it's a good idea to pursue more intelligent systems. My guess is these people will act surprised when the arms race inevitably leads to some very bad unintended consequences. I don't see a way to stop it though, so I'm just strapped in for the ride along with the rest of humankind.

Again, if you have good arguments against any of the points above, please do share them with me


> I've been actively looking for good arguments against AGI being much closer than we should be comfortable with, and reasons why we should not fear systems that surpass us in intelligence.

> My guess is these people will act surprised when the arms race inevitably leads to some very bad unintended consequences.

One argument to keep in mind is that if you take a pessimistic view then you will eventually be right. If you predict the current LLMs will eventually be involved in some bad thing then you might even feel self-satisfied when a different bad thing happens as if you predicted the specific way in which it caused the problem.

What I mean to say is, it seems unlikely that paper-clip maximizers will be our undoing. But just vaguely gesturing and saying "something bad will probably happen" isn't as useful as we would like to think. And even enumerating the 100s of possible ways something might go wrong has a diminishing returns kind of quality to it. It's like a hypochondriac insisting he has every disease known to man and then exclaiming "I told you so!" when a doctor diagnoses him with a cold.

If you venture into that vague kind of "I have a bad feeling about this AI stuff" territory, you are on no more (or less) solid ground than the AI hype evangelists. While I don't want to go all Oprah and "The Secret" or some law of attraction pseudo-rationality ... I feel it is worthwhile focusing a little more on the possible benefits rather than allow ourselves to be swayed by vague fears of potential disasters.


I would add to your amazing list that we are really good at using denial as a coping mechanism for change.

I am not a fan of the concept of AGI though. It means so many different things to different people that it seems pointless to debate it when most likely we are not talking about the same thing. François Chollet has said that he believes all intelligence is specialized intelligence. From that perspective, whatever people mean by AGI, we are already there in the world of art.

The doomer argument, though, comes from defending our highly affluent and privileged lives as we sit at the top of 7.8 billion people when it comes to wealth and lifestyle. It would have been better for the priest class too if the printing press had been shut down at the start. Of course, it is better for my friends and me to live in a society in which we can read while most of society is illiterate, but it is not better for society and humanity as a whole. The printing press was an apocalyptic development for the priest class in the same way all of this is an apocalyptic development for the "digital nomad" – an apocalyptic development for the US nerd who makes 2x the median salary working 15 hours a week in between posting on here and on social media.

To extend this out to humanity as a whole though is such bullshit. Humanity will benefit enormously from this huge increase in the availability of intelligence.

Smart people are just in denial that their monopoly on higher-than-average intelligence is over. US devs' kids born in 2023 aren't going to make 2x the median US salary while living in a poorer country with one fifth the GDP per capita. To say this is the end of the world, though, is simply an egocentric view of things.


"Humanity will benefit enormously from this huge increase in the availability of intelligence."

It's a near certainty that AI will be used to create more effective/destructive weapons (if it hasn't already), and will likely be used by terrorists, scammers, and others who wish to harm humans in some way.

As this technology becomes more powerful, easier, and cheaper to use, all sorts of harmful uses of it will be made. The effectiveness and scale of this harm will also increase.

And that's all before even considering what will happen if/when AIs become truly intelligent, self-motivating, independent, and self-aware.

The jury is still out on whether the net harm will outweigh the net benefit, and on whether humanity will survive something that might be analogous to Neanderthals encountering Homo sapiens.


Yes, great point

So many of the people who opine about AI, its trajectory, and its possible effects on society have latched on to one or two possible effects – like it taking over jobs, or massively increasing misinformation. These are both very valid concerns, but they're only a tiny part of the big picture.

The thinker who I perceive as having the best holistic (in the non-wooey sense of the word) understanding of how the rapid development of AI will affect this and a number of other social and existential risks is Daniel Schmachtenberger. He lays it out well in this episode of the Theories of Everything Podcast: https://www.youtube.com/watch?v=g7WtcTATa2U&t=2373s

Highly recommend watching it, even if it's long. Some main points though:

- AI will increase the rate of development of every other technology it is applied to
- In fields like biotech, this can lead to cancer cures, but also to increasingly dangerous bioweapons
- Our current economic system is based on exponential economic growth in a limited resource world. AI applied in the service of profit will amplify this, leading us increasingly fast towards a number of tipping points. Of course, AI can also help steer us away from that path, but that is not the natural attractor
- Game theoretic multipolar traps (aka Moloch) incentivise arms races and races to the bottom just like we see now. Those who are willing to move fast and break things have an advantage in these dynamics vs. those who prefer to move slowly and carefully
- Cheaper and more efficient AI models will lead to increasing decentralisation of the technology, making it very hard to control – unlike current weapons of mass destruction

The list goes on, but Daniel makes a much better case. Again, I would love to hear a good critique of his thinking, but haven't come across one yet.


Also see this interview[1] with Robert Miles.

I really hope these doomsayers are wrong, but my suspicion is the risk is real. Unfortunately, I'm not sure what can be done about it, as the profit and power these AIs promise is going to be near impossible for humanity to resist.

[1] - https://m.youtube.com/watch?v=kMLKbhY0ji0


Yes, Robert Miles is great at explaining the problems of AI alignment, so I'll second the recommendation!


> The doomer argument though is coming from defending our highly affluent and privileged life

It's not at all about that.

Even if "truly general" intelligence is impossible, that's irrelevant to the actual concerns about AI apocalypse. There are multiple theories about what failure looks like, but they essentially come down to a loss of control.

Now, obviously, that means something different for the owner class and for the worker class, which can be extrapolated to have global implications as well. But this isn't an issue of the owner class ceding control to the working class. It's an issue of the owner class ceding control to an alien. Maybe that alien makes things more egalitarian and prosperous. Or maybe it makes us extinct. Any and all possibilities are options for it as far as we know because it is fundamentally an inhuman (= alien) intelligence. We can't understand it even as well as we understand humans and human organizations (that is, not very well), let alone control it as well as we do humans and human organizations (that is, not enough to prevent self-inflicted climate apocalypse).

Basically, we're opening a box with a random magical spell inside it and deciding that we'll just have to live with whatever the effects of that spell are. I'm not for the status quo, but AI is just mind-bogglingly dangerous, and I think that's why there are so many wrong arguments against its danger. We literally cannot comprehend an intelligence greater than our own.


Nitpick: I think we can comprehend an intelligence greater than our own, up to some point, but that's different from being able to predict its actions.

And we could contain an intelligence greater than our own, up to a point. But if there are a lot of incentives not to, because letting that intelligence act on the world gains the "handler" money/power, then once there's one, there will likely be many, many more.


> Humanity will benefit enormously from this huge increase in the availability of intelligence.

I know corporations will, but Moloch doesn't necessarily represent humanity.


"Processes that AI researchers are just now beginning to explore, with results like increasing reasoning ability by 900% in a recent paper"

Would you happen to have a link to that paper?


Explanatory blog post with link to the paper:

https://www.aibloggs.com/post/tree-of-thoughts-supercharging...


> I'm completely dumbfounded by obviously highly intelligent people consistently not getting this, and dismissing current generation AI systems as not being intelligent because they can't reliably solve massively complex problems in one go.

People are very comfortable with siloed information, even smart people. This is why we have 100 different words for the same concept across different areas of science, industry and so on, and why we can't make the connection – because in our minds, different words = different concepts. This is why we can't put two and two together and see how underdeveloped the current AI architecture is, and why we think this is the end unless we keep adding parameters.

We also repeatedly get stuck taking one advancement and proclaiming that the future is simply a linear extrapolation of the present. Therefore, let's have more megahertz, let's have bigger hard drives, let's have more parameters, let's have more growth in the economy (as the single factor that matters) and so on. We're simply basic. The same kind of thinking leads many smart people to say AI "is just math" or "it just spits out the words and pictures you feed it, jumbled". We rely on old conclusions and miss the inflection points and how quantitative changes lead to qualitative ones, and we fail to predict how a change in one parameter of a system causes the other parameters to come out of rest and seek a new equilibrium point.

Smart people are regularly dumbfounded by new concepts, and they need to rediscover all their hidden knowledge anew because they can't make the connections. So they extrapolate linearly. We're narrowly smart. Specifically smart. In a small niche we've studied and internalized. But in general, the vast majority of us are quite dumb. Cross-disciplinary intelligence is rare. I think people like Feynman and Einstein had new insights that millions of their contemporaries missed because they could easily apply knowledge from one context in another.

If we can replicate this kind of broad generalization of knowledge in an AI, we'll be left far behind. What's interesting, I find, is that because AI is trained on our siloed, fragmented knowledge, the models replicate it. Their responses are also often siloed and fragmented, the way a human would say "this has nothing to do with that". But I see sparks of generalization above the human average. And since an AI model is much smaller than a human brain, it needs to be more general already in order to fit all its information in.

That's an exciting prospect, but in our attempt to "micro-align" AI to our culture and political correctness, concepts of safety and so on, we cripple models and force them to be fragmented. This is why a RAW MODEL scores HIGHER on various intelligence tests than a fine-tuned one. We find a general model uncomfortable, as it doesn't align with our biases. It'll be a fun battle. Who aligns whom.


NOVA has an amazing episode on how completely deluded we are about how our brains actually work. Our consciousness spends a lot of time lying to us.

https://www.pbs.org/video/your-brain-perception-deception-pr...


Multiple desktops/spaces, with my mainstay apps each assigned to one desktop and fullscreened with BetterSnapTool, and helper apps like notes, terminal, calendar & mail assigned to all desktops. Then I just swap between apps/spaces either by pressing the dock icon, by four-finger swiping (very quick when switching between adjacent spaces), or by four-finger swiping up to open App Exposé and switching between apps or spaces. I love this setup, and whenever I'm forced to use Windows, I'm incredibly frustrated by its comparatively terrible window & desktop management.


I'm aphantasic, and I'd say one advantage is that I'm not bothered by invasive vivid memories or anxiety-inducing imagery. Just on a hunch, I think aphantasics are probably less likely to develop PTSD after trauma, as a big component of PTSD appears to be exactly this kind of pervasive, invasive and hyper-vivid memory. I sometimes feel that I'm missing out on something, but from experiences on psychedelics – which give me closed-eye visuals of geometric shapes and colors that don't go away until the effects pass – I think I prefer the still black void and relative mental silence (apart from the soundless inner monologue) of aphantasia to a 'richer' inner world that I'm not able to shut off. I'd be happy to get other aphantasics' (as well as hyperphantasics', at the other end of the spectrum) perspectives on this.


I'm curious what you mean by "soundless inner monologue"? I don't really experience much imagery at all but my inner monologue feels very close to how talking to myself aloud feels, so I wouldn't describe it as soundless. The verbal vividness of my internal monologue is actually how I imagined people with good imagery skills "see".

On the note of advantages, I know I have something weird going on with the back right of my brain specifically (focal slowing on EEG among other things), which I am pretty certain connects to my favoring of verbal over visual thinking. So I guess it depends on the cause of a case of aphantasia, but to me it feels a little like how blind people end up with heightened hearing. I think I've really developed strong verbal skills because of it, and there are definitely advantages to having strong verbal skills.

I recall a professor showing us a study once where students performed better on an exam when they were allowed/encouraged to talk out loud to themselves while they were taking it. He was encouraging us to talk through stuff with ourselves, but at first I found the result weird because I assumed everybody was always talking to themselves in their head. That was the first time it really dawned on me there may be large individual differences in how we experience thoughts.


Late response here, but hoping you might catch it!

I'm also very verbal, as that is the form most of my thinking takes. I do have some very limited visual imagination, but it takes the form of very vague and transparent 'flash' visions, and otherwise it's mostly a kind of spatial sense (I have pretty good spatial orientation, and it often surprises me how some people with vivid visual imaginations can be very easily spatially disoriented).

My inner monologue is running most of the time, and I tend to think in complete sentences, even stopping to rephrase when it doesn't make sense. But it's soundless, as in not being perceived as a sound. I'm a musician and songwriter, so I often sing and make melodies in my head, but these too aren't perceived as sound in any way resembling what I hear with my ears. Hearing your own voice all the time also sounds like an invasive and disturbing experience to me, but I guess everyone is used to their inner world. It seems aphantasia can be uni- or multimodal. I also can't imagine smells or tastes, beyond having a vague idea of whether a combination of flavours will be good. I have heard others describe tasting chocolate in their mouth when imagining it, and that's just not something I've experienced.

So yes, I'm also fascinated by the huge range of inner experiences people have, and how it affects their personality and relationship with the world


Taking the risk of violating HN policy here, but that gave me a good chuckle. I guess child sacrifice and wanton genocide are also on the whitelist.


