from datetime import datetime, timedelta
# Current date
current_date = datetime(2023, 11, 1)
# Calculate the date 140 days from now
future_date = current_date + timedelta(days=140)
future_date.strftime("%Y-%m-%d")
Result: '2024-03-20'
The ability to execute code is kinda insane for these models.
It’s kind of funny that they can more reliably spit out code that will give an answer than actually output the answer as text. I guess it’s a workaround that works well for many cases.
Humans can also more reliably use a calculator (which is basically what Python is here) for big numbers than do it in their heads. I think it makes sense.
This reminds me: I've had an alias calc='python -ic "from math import *"' for a long time now. It comes in handy more often than you'd think.
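For anyone who wants to try it, a session looks like this (the -i flag drops you into the REPL after -c runs the import):

$ calc
>>> sqrt(2)
1.4142135623730951
>>> factorial(20)
2432902008176640000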
No, it's actually executing that Python code. This is what allows an LLM (or an 'LLM-based system', I guess) to do something like "reverse <some uuid that has never been observed before>" - it can't just memorize the output and map it to the input, because the output has literally never been observed. Instead, if it knows the algorithm for reversing a string, it can just use that and offload the execution to Python.
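As a concrete sketch (the uuid below is made up for illustration), the model only has to emit something like:

# A uuid that has never appeared in any training data
s = "3f9a1c2e-7b4d-4e8f-9a6b-0c5d2e1f8a7b"
# Knowing the algorithm (slice with step -1) is enough; Python does the execution
s[::-1]
Result: 'b7a8f1e2d5c0-b6a9-f8e4-d4b7-e2c1a9f3'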
It is. It doesn’t even need an existing language. You can define your own pseudo-language in the prompt and have ChatGPT “execute” it (works best with GPT-4, non-turbo).
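For example (a made-up toy language, not from any real prompt): tell it that PUSH n puts n on a stack and ADD pops two values and pushes their sum, then ask it to run PUSH 2, PUSH 3, ADD. It will trace the stack and answer 5 - no interpreter involved, just the model following the semantics you defined.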
>We provide our models with a working Python interpreter in a sandboxed, firewalled execution environment, along with some ephemeral disk space. Code run by our interpreter plugin is evaluated in a persistent session that is alive for the duration of a chat conversation (with an upper-bound timeout) and subsequent calls can build on top of each other. We support uploading files to the current conversation workspace and downloading the results of your work.
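Concretely, that persistence means a later code run can build on state from an earlier one in the same chat, something like this (illustrative, not the actual sandbox):

# first code-interpreter call in the conversation
totals = [1200, 950, 1800]
# a later call in the same chat still sees `totals`
sum(totals)
Result: 3950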
It really feels like I'm just googling for you; you already had the feature name.
I would say creating a model which is able to interpolate from training data in a way which produces an accurate output for a new input is a little impressive (if only as a neat party trick). However, anybody can run a Python interpreter on a server somewhere.
I’m sure there are use cases for this. But in the end it is only a simple feature bolted onto a (sometimes only marginally) related service.
Hm, I don't think of it that way, I guess. What the LLM is doing is generalizing a problem based on previous problems it has seen, and then offloading the execution of that problem to a machine with defined, specific semantics.
This is a lot more than a party trick. The model is able to describe the program it wants to execute, and now it can execute that program accurately - that it 'offloads' the work to a specialized program seems fine to me.
It's way more than a simple feature: it lets the model overcome one of the biggest limitations and criticisms of LLMs - it can answer questions it has never seen before.
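For instance (numbers picked arbitrarily), the product below almost certainly never appears verbatim in the training data, yet the model can get it exactly right by writing one line and letting Python execute it:

# a multiplication too large to have been memorized as text
123456789 * 987654321
Result: 121932631112635269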