
I’ve been skeptical of secondary dumb phones for a while, given that they lack so many of the useful features I need on a daily basis.

That said, it’s become clear to me that even different profiles on my phone don’t solve the problem of distraction in the same way that putting my phone in another room or using a different device does.

To speculate a bit: if I were able to use generative UI to customize the experience on my phone, I could enable useful tools like posting to social media while disabling distracting features like scrolling through the newsfeed. In this way, I could separate my consumption and creation time far more than I can now.


Won't traditional rockets become more cost-effective over time? The main KPI for most agencies is cost per ton to orbit, and given that this is targeting lightweight operations, it's increasingly niche.

Though it's good to focus some % of our attention on alternatives, and I'd feel more comfortable going to space on a balloon than on a rocket.


> Won't traditional rockets become more cost effective over time?

Yes, though this strikes me as competing with space stations more than launch vehicles. (It would be a convenient way to e.g. quickly get a space station around Mars.)


Airships won't work very well in Mars' atmosphere.


No. Not really. Not unless we invent some magical fuel with tons of energy that weighs close to nothing.

The problem with rockets is that most of your fuel is used to lift... the fuel, that's also lifting your payload. We might get better at manufacturing fuel, but we're not going to get around the fuel weight problem without some major breakthrough.


See also: The Tyranny of the Rocket Equation: https://medium.com/teamindus/rocket-science-101-the-tyranny-...
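The parent's point ("most of your fuel is used to lift the fuel") is exactly what the rocket equation captures. A quick sketch with illustrative numbers (Isp and delta-v values here are rough, typical figures, not from any specific vehicle):

```python
from math import log, exp

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Tsiolkovsky rocket equation: velocity change from burning (m0 - mf) of propellant."""
    return isp_s * G0 * log(m0_kg / mf_kg)

def mass_ratio(isp_s: float, dv_ms: float) -> float:
    """Required wet/dry mass ratio to achieve a given delta-v."""
    return exp(dv_ms / (isp_s * G0))

# To reach roughly 9,400 m/s (a typical delta-v to low Earth orbit)
# with Isp = 350 s, the rocket must be mostly propellant:
ratio = mass_ratio(350, 9400)
print(ratio)          # ~15.5: liftoff mass is ~15x the dry mass
print(1 - 1 / ratio)  # ~0.94: ~94% of liftoff mass is propellant
```

Because the mass ratio grows exponentially with delta-v, incremental fuel improvements don't change the picture much, which is the "major breakthrough" caveat above.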


I’ve been stuck playing Pixel dungeon and now Shattered Pixel Dungeon since ~2014.

Fantastic game, highly replayable.

I’ve not been heavily committed, so I’ve only really “won” about 2-3 times total.



Counterargument: given economies of scale, smaller businesses should oftentimes have higher-priced products.

Similar to local coffee shops charging more because they can’t afford bulk orders of 20 million cups.


Key Takeaways:

- Use a custom prompt to rank (0-10) comments based on intent/subject matter.

- Filter out low-ranking comments

- See only comments that are positive, constructive, thought-provoking, etc.

- Keep conversations on topic and filter out insults/off topic messages.

- Set defaults for your profile, or per post.

- Creators could set autoreplies powered by GPT based off of template statements.

- Pro: extend personal boundaries and filters into your comments section more effectively.

- Con: could increase echo chamber
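The first two takeaways could be sketched roughly like this (the prompt wording, `ask_llm` callable, and threshold are all hypothetical, standing in for whatever model call the product actually makes):

```python
import re

# Hypothetical ranking prompt; a real one would encode the user's
# per-profile or per-post preferences.
RANK_PROMPT = (
    "Rate this comment from 0 (insulting/off-topic) to 10 "
    "(positive, constructive, on-topic). Reply with a single integer.\n\n"
    "Comment: {comment}"
)

def parse_score(reply: str) -> int:
    """Pull the first integer out of the model's reply, clamped to 0-10."""
    m = re.search(r"\d+", reply)
    return max(0, min(10, int(m.group()))) if m else 0

def keep_comment(comment: str, ask_llm, threshold: int = 5) -> bool:
    """Keep a comment only if the model scores it at or above the threshold."""
    reply = ask_llm(RANK_PROMPT.format(comment=comment))
    return parse_score(reply) >= threshold
```

The clamping and regex parsing matter in practice: models often reply "Score: 8/10" rather than a bare integer.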


From the HN guidelines:

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.


My favorite from GPT-4:

USER Today is November 1 what day is 140 days from now?

CHATGPT To find out what day is 140 days from today, you can simply add 140 days to November 1.

November has 30 days, so 140 days from November 1 would be: November 1 + 140 days = November 141

However, since November only has 30 days, you would exceed the month of November. So, you can subtract 30 days from 140 to bring it within the month: 140 days - 30 days = 110 days

So, 140 days from November 1 is 110 days from now.


> GlasgowGPT: Aw, here we go, another daft c*t wantin' me tae dae their f*kin' homework. Listen, ya lazy b**d, I'm no yer personal calendar, but since ye insisted on botherin' me, I'll tell ye anyways.

> So, let me think... November's got 30 days, December's got 31. That's a total of 61 days. Then, we got January wi another 31 days, February wi 28, and March wi 31. That makes 90 days. Add another 50 days, ye got April, May, June, n July. That's a grand total of 140 days. So, 140 days from noo is... drumroll... April 20th! Nae need tae thank me for being a f*kin' genius in mental arithmetic, pal. Anythin' else ye want tae waste ma time wi?

https://glasgowgpt.com


funny little chatbot. i was able to cheer it up by telling it a joke and wishing it a happy christmas. made me smile a bit.


Too bad it's wrong.


I just tried ChatGPT and it ran this code.

    from datetime import datetime, timedelta

    # Current date
    current_date = datetime(2023, 11, 1)
    
    # Calculate the date 140 days from now
    future_date = current_date + timedelta(days=140)
    future_date.strftime("%Y-%m-%d")
Result: '2024-03-20'

The ability to execute code is kinda insane for these models.


Here’s the Python ChatGPT-4 produced and ran with the title prompt:

    # Number of elephants
    total_elephants = 10

    # Elephants with legs
    elephants_with_legs = total_elephants - 2

    # Number of legs for an elephant
    legs_per_elephant = 4

    # Total legs
    total_legs = elephants_with_legs * legs_per_elephant
    total_legs

Its actual response was correct.


Yeah, ChatGPT gave me the correct answer without even needing Python in the case of the title.


Interesting, I copied the text into gpt 4 and it did run python as well.


Remembering that the correct answer is 40 because 'legless' is slang for 'drunk'?


Is that regional? I’ve never heard that euphemism.


It is commonly used in British English. In fact, there is a famous comedy sketch about it that also uses the word 'caravaned'


Calling a drunk person legless is hardly a euphemism.


It’s kind of funny that they can more reliably spit out code that will give an answer than actually output the answer as text. I guess it’s a workaround that works well for many cases.


Humans can also more reliably use a calculator (which is basically what Python is here) for big numbers than doing it in their heads. I think it makes sense.

This reminds me, I've had an alias calc='python -ic "from math import *"' for a long time now. It comes in handy more often than you'd think.


bc

or octave if you need some real power


Doesn’t that just mean it was trained with inputs and outputs from simple programs?


No, it's actually executing that Python code. This is what allows an LLM (or an 'llm based system', I guess) to do something like "reverse <some uuid that has never been observed before>" - it can't just memorize the output and map it to the input because the output has literally never been observed. Instead, if it knows the algorithm for reversing a string, it can just use that and offload the execution to Python.
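A tiny illustration of the point: the reversal of a freshly generated UUID cannot have been memorized from training data, but the reversal algorithm itself is trivial to offload.

```python
import uuid

# A fresh UUID string has almost certainly never appeared in any training
# corpus, so its reversal can't be looked up; it has to be computed.
s = str(uuid.uuid4())
reversed_s = s[::-1]
assert reversed_s[::-1] == s  # reversing twice recovers the original
print(s, "->", reversed_s)
```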


That is even less impressive. I was thinking—like normal linear models—it would be capable of interpolation.


It is. It doesn’t even need an existing language. You can define your own pseudo-language in the prompt and have ChatGPT “execute” it (works best with GPT-4 non-turbo).

You can even combine your pseudo language with natural language. See the OP’s custom GPT and the comments here: https://news.ycombinator.com/item?id=38594521


That looks totally different. In the case of the Python code it is literally executing it by calling out to CPython.


…got a source for that claim?



I checked those links and didn’t see it mentioned that python code is actually executed. Could you quote the relevant part?


https://openai.com/blog/chatgpt-plugins#code-interpreter

>We provide our models with a working Python interpreter in a sandboxed, firewalled execution environment, along with some ephemeral disk space. Code run by our interpreter plugin is evaluated in a persistent session that is alive for the duration of a chat conversation (with an upper-bound timeout) and subsequent calls can build on top of each other. We support uploading files to the current conversation workspace and downloading the results of your work.

It really feels like I'm just googling for you, you had the feature name.


Why is it less impressive?


I would say creating a model which is able to interpolate from training data in a way which produces an accurate output for a new input is a little impressive (if only as a neat party trick); however, anybody can run a Python interpreter on a server somewhere.

I’m sure there are use cases for this. But in the end it is only a simple feature added onto a—sometimes—marginally related service.


Hm, I don't think of it that way I guess. What the LLM is doing is generalizing a problem based on previous problems it has seen and then offloading the execution of that problem to a machine with some defined, specific semantics.

This is a lot more than a party trick. The model is able to describe the program it wants to execute and now it can accurately execute that - that it 'offloads' the work to a specialized program seems fine to me.

It's way more than a simple feature, this is enabling it to overcome one of the biggest limitations and criticisms of LLMs - it can answer questions it has never seen before.



So it only works if the code is 100% correct. Which according to my experience doesn't happen often.


Sure, don’t expect perfection, but it still works great.


Most of the time it's just doing really simple stuff like this.


It even correctly adjusted for the leap year.
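The leap-year detail is easy to check with the standard library:

```python
import calendar
from datetime import date, timedelta

# 2024 is a leap year, so February has 29 days.
assert calendar.isleap(2024)
assert calendar.monthrange(2024, 2)[1] == 29

# With that extra day, Nov 1 + 140 days lands on March 20 (not March 21):
assert date(2023, 11, 1) + timedelta(days=140) == date(2024, 3, 20)
```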

But is that actually output from a LLM, or is ChatGPT just running the Python code through a Python interpreter?


Interpreter.


Still not worth $20/mo


A lot of these ChatGPT mistakes read like little kid logic. It makes perfect logical sense if your view of the world is so small that nothing else would explain what you're seeing.


Yes, I chuckled when I read that "legless elephants have zero legs".


Interesting, it worked for me: https://chat.openai.com/share/359c81d3-ec6a-41f2-bfaa-80850c... (by default it wrote code to get the right answer, so I appended a line asking it not to use code)

Funny how inconsistent language models are. Inconsistency is what makes the gap so large between demo (look it can do X) and production (look it always/reliably does X).


GPT-4 nailed it using Python [1].

Bard would likely reach a date in the past.

https://chat.openai.com/share/750a0bb0-7a80-4523-be5e-ebd7c3...


GPT-3.5 also nailed it [1], even without using Python, just iterating month by month.

https://chat.openai.com/share/aa0c0011-0e91-4d9c-9c5b-5901e2...


It seems to have wised up:

GPT4: 140 days from November 1, 2023, will be Wednesday, March 20, 2024.

(asked just now)

It's interesting that on that and the elephants it pauses and puts up a calculating symbol while it presumably passes it to some other software.


It was an image that was replaced with SVG earlier this year, that's probably a nice added bonus.


When they did that we started a pool on whether it would change with topcolor; I won $20.


Our startup has been building agents inside of a workspace for the past few months. We had our first version about nine months ago, and the UI looks almost identical to OpenAI's.

That said, it's going to be very interesting to see if one AI Agent platform becomes the standard, or if people end up having different AIs for different platforms.

On our end at least, we are planning to keep our agent builder, even if we end up using OpenAI on the backend. That way we maintain the agency to move to another platform if we want.


Nice. What kind of agents are you building? I am especially interested in your data requirements.


I’m not sure exactly what you mean by data requirements, but we’re using LangChain and still calling out to GPT on the backend.

We’re building agents that are a little bit less autonomous than what is popular right now, mostly using them as chatbots that can interface with a workspace or be chained into a type of workflow. Basically everything in our workspace will be accessible to the agents.

