I’ve been skeptical of secondary dumb phones for a while, given that they lack so many of the useful features I need on a daily basis.
That said, it’s become clear to me that even different profiles on my phone don’t solve the problem of distraction in the same way that putting my phone in another room or using a different device does.
To speculate a bit: if I were able to use generative UI to customize the experience on my phone, I could enable useful tools like posting to social media while disabling distracting features like scrolling through the newsfeed. In this way, I could separate my consumption and creation time far more than I can now.
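Purely as a sketch of the idea (every name here is hypothetical), the output of such a generative UI layer might boil down to a per-mode capability map:

# Hypothetical per-mode feature flags a generative UI layer might emit.
# "Creation" mode keeps the publishing tools and turns off the consumption loops.
creation_mode = {
    "compose_post": True,
    "reply_to_mentions": True,
    "newsfeed_scroll": False,
    "autoplay_video": False,
}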
Won't traditional rockets become more cost effective over time? The main KPI for most agencies is cost per ton to orbit, and given that it is targeting lightweight operations, it's increasingly niche.
Still, it's good to focus some % of our attention on alternatives, and I'd feel more comfortable going to space on a balloon than on a rocket.
> Won't traditional rockets become more cost effective over time?
Yes, though this strikes me as competing with space stations more than launch vehicles. (It would be a convenient way to e.g. quickly get a space station around Mars.)
No. Not really. Not unless we invent some magical fuel with tons of energy that weighs close to nothing.
The problem with rockets is that most of your fuel is used to lift... the fuel that's also lifting your payload. We might get better at manufacturing fuel, but we're not going to get around the fuel weight problem without some major breakthrough.
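The Tsiolkovsky rocket equation makes that concrete: the required initial-to-final mass ratio grows exponentially with delta-v, and exhaust velocity is the only lever in the exponent. A back-of-the-envelope sketch in Python (the delta-v and exhaust-velocity figures are rough illustrative assumptions, not mission specs):

from math import exp

# Tsiolkovsky: delta_v = v_e * ln(m0 / mf), so m0 / mf = exp(delta_v / v_e).
def mass_ratio(delta_v, exhaust_velocity):
    """Initial-to-final mass ratio for a given delta-v (both in m/s)."""
    return exp(delta_v / exhaust_velocity)

# Assumed figures: ~9,400 m/s to reach low Earth orbit including losses,
# ~3,500 m/s effective exhaust velocity for a kerosene/oxygen engine.
ratio = mass_ratio(9_400, 3_500)
print(f"m0/mf = {ratio:.1f}")  # ~14.7: roughly 93% of liftoff mass is propellant

A "magical fuel" with far more energy per kilogram would raise the exhaust velocity, which is exactly why it's the only thing that would change the picture.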
USER
Today is November 1 what day is 140 days from now?
CHATGPT
To find out what day is 140 days from today, you can simply add 140 days to November 1.
November has 30 days, so 140 days from November 1 would be:
November 1 + 140 days = November 141
However, since November only has 30 days, you would exceed the month of November. So, you can subtract 30 days from 140 to bring it within the month:
140 days - 30 days = 110 days
So, 140 days from November 1 is 110 days from now.
> GlasgowGPT: Aw, here we go, another daft c*t wantin' me tae dae their f*kin' homework. Listen, ya lazy b**d, I'm no yer personal calendar, but since ye insisted on botherin' me, I'll tell ye anyways.
> So, let me think... November's got 30 days, December's got 31. That's a total of 61 days. Then, we got January wi another 31 days, February wi 28, and March wi 31. That makes 90 days. Add another 50 days, ye got April, May, June, n July. That's a grand total of 140 days. So, 140 days from noo is... drumroll... April 20th! Nae need tae thank me for being a f*kin' genius in mental arithmetic, pal. Anythin' else ye want tae waste ma time wi?
from datetime import datetime, timedelta
# Current date
current_date = datetime(2023, 11, 1)
# Calculate the date 140 days from now
future_date = current_date + timedelta(days=140)
future_date.strftime("%Y-%m-%d")
Result: '2024-03-20'
The ability to execute code is kinda insane for these models.
It’s kind of funny that they can more reliably spit out code that will produce an answer than output the answer directly as text. I guess it’s a workaround that works well for many cases.
Humans can also more reliably use a calculator (which is basically what Python is here) for big numbers than do the math in their heads. I think it makes sense.
This reminds me: I've had an alias calc='python -ic "from math import *"' for a long time now. It comes in handy more often than you'd think.
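For anyone who wants to try it, a sample session looks like this:

$ calc
>>> sqrt(2)
1.4142135623730951
>>> factorial(20)
2432902008176640000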
No, it's actually executing that Python code. This is what allows an LLM (or an 'llm based system', I guess) to do something like "reverse <some uuid that has never been observed before>" - it can't just memorize the output and map it to the input because the output has literally never been observed. Instead, if it knows the algorithm for reversing a string, it can just use that and offload the execution to Python.
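A minimal sketch of that idea (not ChatGPT's actual sandbox, just the shape of the offloaded computation):

import uuid

# A freshly generated UUID: this exact string has almost certainly
# never appeared in any training data, so there is nothing to memorize.
s = str(uuid.uuid4())

# Reversing it is a general algorithm, not a stored input/output pair.
print(s, "->", s[::-1])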
It is. It doesn’t even need an existing language. You can define your own pseudo-language in the prompt and have ChatGPT “execute” it (works best with GPT-4, non-turbo).
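For instance (an invented illustration, not from this thread): tell it "'PUSH n' pushes n onto a stack, 'ADD' pops two values and pushes their sum, 'PRINT' outputs the top of the stack", then ask it to execute "PUSH 2, PUSH 3, ADD, PRINT"; GPT-4 will usually answer 5.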
>We provide our models with a working Python interpreter in a sandboxed, firewalled execution environment, along with some ephemeral disk space. Code run by our interpreter plugin is evaluated in a persistent session that is alive for the duration of a chat conversation (with an upper-bound timeout) and subsequent calls can build on top of each other. We support uploading files to the current conversation workspace and downloading the results of your work.
It really feels like I'm just googling for you; you had the feature name.
I would say creating a model which is able to interpolate from training data in a way that produces an accurate output for a new input is a little impressive (if only as a neat party trick); however, anybody can run a Python interpreter on a server somewhere.
I’m sure there are use cases for this. But in the end it is only a simple feature added onto a (sometimes) marginally related service.
Hm, I don't think of it that way I guess. What the LLM is doing is generalizing a problem based on previous problems it has seen and then offloading the execution of that problem to a machine with some defined, specific semantics.
This is a lot more than a party trick. The model is able to describe the program it wants to execute and now it can accurately execute that - that it 'offloads' the work to a specialized program seems fine to me.
It's way more than a simple feature, this is enabling it to overcome one of the biggest limitations and criticisms of LLMs - it can answer questions it has never seen before.
A lot of these ChatGPT mistakes read like little kid logic. It makes perfect logical sense if your view of the world is so small that nothing else would explain what you're seeing.
Funny how inconsistent language models are. Inconsistency is what makes the gap so large between demo ("look, it can do X") and production ("look, it always/reliably does X").
Our startup has been building agents inside of a workspace for the past few months. We had our first version about nine months ago, and the UI looks almost identical to OpenAI's.
That said, it's going to be very interesting to see if one AI Agent platform becomes the standard, or if people end up having different AIs for different platforms.
On our end, at least, we are planning to keep our agent builder, even if we end up using OpenAI on the backend. At the very least, we will maintain the agency to utilize another platform if we want.
I’m not sure exactly what you mean by data requirements, but we’re using LangChain and still calling GPT on the backend.
We’re building agents that are a little less autonomous than what is popular right now, mostly using them as chatbots that can interface with a workspace or be chained into a type of workflow. Basically everything in our workspace will be accessible to the agents.
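For the curious, here's a minimal sketch of that pattern against the 2023-era LangChain agent API; the search_workspace tool and its stub body are hypothetical stand-ins for a real workspace integration:

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

# Hypothetical workspace lookup; a real version would query the workspace API.
def search_workspace(query: str) -> str:
    return f"(stub) top document matching {query!r}"

tools = [
    Tool(
        name="search_workspace",
        func=search_workspace,
        description="Look up documents in the user's workspace.",
    )
]

# GPT on the backend, wrapped by our own agent builder.
llm = ChatOpenAI(model_name="gpt-4", temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

print(agent.run("Summarize the latest design doc in my workspace."))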