A good analogy is lifting. We lift to build strength, not because we need that extra strength to lift things in real life; there is plenty of machinery to do that for us. We do it for the sense of accomplishment of hitting our goals when we least expect it, for seeing physical changes, and for the feeling that we are getting healthier, rather than for the utility benefits. If we perceive lifting as a utility, we realize it's futile and meaningless. Instead, if we see it as a routine with positive externalities sprinkled on top, we feel a lot less pressured to do it.
As kelseyfrog commented already, the key is to focus on the action, not the target. Lifting is not just about hitting a number or getting bigger muscles (though those are great extrinsic motivators); it's more an action that we derive growth from. I have internalized the act of working out to the point that those targets are baked into my unconscious. I don't overthink when I'm lifting. My unconscious takes the lead, and I just follow. I enjoy seeing the results show up unexpectedly. It lets me grow without feeling the constant pressure of my conscious mind.
The lifting analogy can be applied to writing and other effortful pursuits. We write for the pleasure of reconciling internal conflicts and restoring order to our chaotic minds. Writing is the lifting of the mind. If we did it only for comparison, there would be no point in lifting, or writing, or many other things we still do after all our technological breakthroughs. Doing what we do is an end in itself, not merely a means to one.
What a coincidence! I think we both commented noting the phenomenon of abundance and its repercussions for us humans at the individual level, especially from a fulfillment and autonomy point of view.
The part on stories reminded me of Neil Postman's arguments in Amusing Ourselves to Death.
> Being told a story is to be infantilised, somewhat: to suspend one’s critical faculties.
One key argument in the book is that TV shows offer significantly less information density and coherence than written mediums. They are optimized to reduce cognitive load, so our ability to think and process information diminishes greatly when there's nothing to think about. This is essentially what storytelling is: piecing together loosely related information to elicit an emotional response. The more harmful aspect is that it gives us an illusion of learning, which the author also articulated with this quote:
> ‘The story wouldn’t be any good if you came back to your normal life completely unchanged, and having learned nothing, or having had no new observation,’ Vogler told me. ‘I think that we are always searching for upgrades, improvements in our behaviour, in our performance, in our relationships with other people.’ Films, he says, offer the opportunity for ‘slight improvement’.
TV shows wrap thin veils of lessons around stories. We feel like we're having fun and learning something at the same time, so why not do more? We consume more and spiral down into a self-reinforcing loop. But that is rarely how learning works in the real world. Learning is challenging; it's meant to confuse you and make you question your preexisting beliefs. We numb ourselves by treating what we watched in flashy media as concrete, substantial knowledge, when the real takeaway is the experience and the emotional response from those dopamine-inducing flash cuts. When we associate learning, and by extension thinking, with emotions, that's when our critical thinking degrades and we become "infantilised."
Good point. This is why truly great stories do have some content beyond entertainment: maybe they indirectly make some plausible "argument", or model a thought experiment.
Tumblr's change in policy on NSFW content was bad enough, but what made it a complete disaster was outsourcing the enforcement of that policy to a crude image classifier. A lot of non-pornographic content got removed when that happened, and a lot of users never bothered contesting those removals (either because it was too much effort, or because they were no longer maintaining their account). So a lot of content on older Tumblr accounts is just gone.
It's starting to feel like a trend: OpenAI keeps integrating features that were previously implemented by GPT-wrapper startups directly into ChatGPT. While these startups have added value by enhancing the user experience, the trajectory is leading towards an ecosystem where those functionalities are built into ChatGPT itself. The future will be challenging for those startups.
> OpenAI CEO Sam Altman has a clear message for startups developing products based on OpenAI's GPTs: They should assume that the models will improve drastically with each new release, rather than relying on the current state of the technology.
> Altman uses GPT-4 as an example: Any company that builds something based solely on GPT-4 is likely to be surpassed by GPT-5 if it is as big a leap over GPT-4 as GPT-4 was over GPT-3. Those companies will be "steamrolled" by OpenAI, he says. "Not because we don't like you, but because we have a mission."
I remember Sam himself saying this. He said something along the lines of: if you are just building something that is effectively a missing feature of ChatGPT or one of their other products, they are going to end up replacing you, so you need to be building something more significant.
That is the conclusion I’ve come to with all of my AI ideas so far. Easy to be replaced by a feature in ChatGPT or Copilot. Hard to create a meaningful moat.
Furthermore, it’s increasingly clear that OpenAI is doing a “bottom-up” challenge to Microsoft and Google. I would not be surprised at all if OpenAI launches an email service to compete with Gmail, imbued with a fine-tuned model that is optimized specifically for working with email. And then a document editor… and a spreadsheet… etc. There is huge money in productivity software. Microsoft 365 generates the bulk of the “cloud” revenue on Microsoft’s P&L. IIRC, worldwide revenue from Google Workspace and Microsoft 365 - and whatever other minnows can survive underneath them - is supposed to reach $40B by 2030. I apologize for not providing the source.
I think it will be less of a replacement and more of a partnership. It will be hard for OpenAI to challenge services like Gmail due to the network effect, and the same goes for Microsoft 365: people are used to that ecosystem. The success of such partnerships hinges on whether Microsoft and Google can develop their in-house models and integrate them into their core products. OpenAI's partnership with Apple was a successful example of this strategy.
A high rate of customer acquisition isn’t the same thing as a network effect.
OpenAI has some network effects baked into its product suite, but it's not even in the same league as Microsoft's bundling strategy (setting aside Google for a moment). Microsoft's history is littered with competitors who had a better product, or a faster-adopted one, and were still snuffed out by Microsoft's superior distribution capabilities.
I’m on the train at the moment, but the big exception is the mobile market, and in hindsight that makes perfect sense: most of the phone market is consumer grade, where bundling productivity software isn’t much of a value proposition.
It’s far more likely that OpenAI remains where it is on the value chain, because it’s easier to capture and integrate AI startups into its products. If it were to compete on productivity software, it would need to differentiate with entirely new modalities of AI-informed user interfaces, or it would be yet another slightly better program that gets blown out of the water by MSFT enterprise distribution.
edit: to be clear, my point is: why would you become a bit player in an established market when you can dominate a new market you created yourself?
I think OpenAI is targeting clear enterprise use cases for what they're building into ChatGPT. Data analysis is a clear enterprise use case. So if a feature helps them sell ChatGPT Enterprise, I think they'll build it, since that's a large revenue driver for them.
I doubt they'll focus on consumer-oriented wrappers like jenni.ai, research helpers, or math tutors, since most of the revenue is in enterprise.
Absolutely. However, as OpenAI forms partnerships and begins to offer ChatGPT as a plugin across various platforms, many of those niche applications will be replaced.
I was looking at Vertex AI agents today; their implementation of agents is just like what 100 other startups have done, no innovation, nothing. Same copy-pasta stuff.
My experience using GPT4-Turbo on math problems can be divided into three cases in terms of the prompt I use:
1. Text only prompt
2. Text + Image with supplemental data
3. Text + Image with redundant data
Case 1 generally performs the best. I also found that reasoning improves if I convert the equations into LaTeX form; the model is less prone to hallucinate when the input data are formulaic and standardized.
Cases 2 and 3 are more unpredictable. With a bit of prompt engineering, they may give the right answer after a few attempts, but most of the time they make simple logical errors that could easily be avoided. I also found that multimodal models tend to misinterpret the problem premise, even when all the information is provided in the text prompt. A rough sketch of the Case 1 setup is below.
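For concreteness, here is a minimal sketch of how a Case 1 prompt can be sent; it assumes the standard openai Python client, and the model name and equation are only illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Case 1: text-only prompt, with the equation written in LaTeX
# rather than pasted as a screenshot or informal notation.
response = client.chat.completions.create(
    model="gpt-4-turbo",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": (
                "Solve for x, showing each step: "
                r"\(3x^2 - 12x + 9 = 0\)"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```

For Cases 2 and 3, the `content` field becomes a list mixing text parts and `image_url` parts; in my experience that is where the misreadings tend to start.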
Time series data are inherently context-sensitive, unlike natural language, which follows predictable grammar patterns. The patterns in time series data vary based on context: flight data often show seasonal trends, while electric signals depend on the type of sensor used. There is also data that appears random, like stock data, though firms like Rentech manage to consistently find underlying alphas. Training on multivariate time series data would be challenging, but I don't see why it couldn't work for specific applications.
Is Rentech the only group that genuinely manages to predict stock price? Seems like the very observation that it’s still possible would be enough motivation for other groups to catch up over such a long period.
Also, the first realistic approximation of Solomonoff induction we achieve is going to be interesting because it will destroy the stock market.
I’m referring to RenTech’s well known Medallion fund, which I believe is now only available internally to longtime employees. Even in the article you linked, it says this fund has “continued to shine”.
If you think about it a little bit, and you've read "Fooled By Randomness", there are 20 other tricks they could be playing here instead of "predicting" the market.
Maybe that would be a good thing. I wouldn't mourn the destruction of the stock market as it's just a giant wealth-gap increasing casino. Trading has nothing to do with underlying value.
>The stock market is just a giant machine that pulls money out of systems.
So you think the multi-trillion dollar stock market, consisting of thousands of global companies, has no use beyond "pulling money out of systems"? Weird.
Agreed, if stock prices were predictable by some technical means, they would be quickly driven to unpredictability by people trading on those technical indicators.
This is that old finance chestnut. Two finance professors are walking down the hall and one of them spots a twenty dollar bill. He goes to pick it up but the other professor stops him and says "no don't bother. If there was twenty dollars there someone would have already picked it up"
Yes, people arbitrage away these anomalies, and make billions doing it.
And like all deep learning forecasting models thus far, it makes for a nice paper but is not worth anyone using for a real problem. Much slower than the classical methods it fails to beat.
That's fair, but they stopped saying it about CV models in 2012. We've been saying this about foundational forecasting models since...2019 at least, probably earlier. But it is a harder problem!
This tool reminds me that the human body functions much like a black box. While physics can be modeled with equations and constraints, biology is inherently probabilistic and unpredictable. We verify the efficacy of a medicine by observing its outcomes: the medicine is the input, and the changes in symptoms are the output. However, we cannot model what happens in between, as we cannot definitively prove that the medicine affects only its intended targets. In many ways, much of what we understand about medicine is based on observing these black-box processes, and this tool helps to model that complexity.
> However, if the radio has tunable components, such as those found in my old radio (indicated by yellow arrows in Figure 2, inset) and in all live cells and organisms, the outcome will not be so promising. Indeed, the radio may not work because several components are not tuned properly, which is not reflected in their appearance or their connections. What is the probability that this radio will be fixed by our biologists? I might be overly pessimistic, but a textbook example of the monkey that can, in principle, type a Burns poem comes to mind. In other words, the radio will not play music unless that lucky chance meets a prepared mind.
I’d say that has always been the case for medicine: when people first used medicines, the intention was never to fully understand what happens, just to save a life or to eliminate or reduce symptoms.
Now that we’ve built explainable systems like computers and software, we try to overlay that framework onto everything, and it might not work.
To quote Alan Watts, humans like to try to square out wiggly systems because we’re not great at understanding wiggles.