LooerCell's comments on Hacker News

If you want to read more about the topic and about the readiness of the Swiss army, I can recommend a short book by John McPhee called La Place de la Concorde Suisse


Thanks for the feedback! Should be fixed now


Hey, the model is gpt-3.5-turbo from OpenAI; I’m just using their API. Formatting is not supported at the moment, but maybe I’ll try to make it work today


Creator here; I just woke up and it’s funny to see PizzaGPT on the front page of HN.

I had the idea yesterday, when reading about ChatGPT being blocked in Italy. I’m an Italian living abroad so I still have access to it, but my parents and friends living there don’t. I believe AI is a revolutionary tool that should be available to anyone, the same way the internet has been.

The website was coded yesterday in a couple of hours, so it’s full of bugs and potential improvements. But I just wanted to ship something quickly. For the stack I used Nuxt 3, Tailwind and DaisyUI.

Regarding the AI, it’s literally just a wrapper of the OpenAI completions API (gpt-3.5-turbo model) and a chat interface, so it should give the same answers as the free version of ChatGPT.

I am still a bit afraid of the costs if many people start using it, so I’ve added a donate button. You can donate the equivalent of a pizza (pizzaware model?) to help keep it running


> Regarding the AI, it’s literally just a wrapper of the OpenAI completions API (gpt-3.5-turbo model) and a chat interface, so it should give the same answers as the free version of ChatGPT.

Have you tested this? My impression after a couple of tests is that its answers are worse than free ChatGPT (no idea why).

Maybe I just got unlucky with the RNG, though.


The free user interface does something behind the scenes to provide context to ChatGPT, but I haven't figured out what it is


> My name is George

Nice to meet you, George. How can I assist you today?

> What is my name

As an AI language model, I don't have access to your personal information such as your name. Can you please tell me your name?

You need to pass the whole chat history in each request to the Chat Completion API endpoint:

    [{:role "assistant", :content "Ciao, sono PizzaGPT come posso aiutarti?"}
     {:role "user",      :content "My name is George"}
     {:role "assistant", :content "Nice to meet you, George. How can I assist you today?"}
     {:role "user",      :content "What is my name"}]

With that history in the request, the model can reply "Your name is George."
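For concreteness, here's the same flow sketched in Python. The helper names are mine, and only the request body is built here; the actual HTTP call to the Chat Completion endpoint is omitted.

```python
# Sketch: keep chat state by resending the whole history on each turn.
# Roles and body shape follow the OpenAI Chat Completion API; the POST
# itself is left out, this only constructs the payload.

def build_payload(history, user_message, model="gpt-3.5-turbo"):
    """Append the new user turn and return the request body you would
    POST to /v1/chat/completions."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages}

def record_reply(history, user_message, assistant_reply):
    """Extend the history with both sides of the latest exchange."""
    return history + [
        {"role": "user", "content": user_message},
        {"role": "assistant", "content": assistant_reply},
    ]

history = [{"role": "assistant",
            "content": "Ciao, sono PizzaGPT come posso aiutarti?"}]
history = record_reply(history, "My name is George",
                       "Nice to meet you, George. How can I assist you today?")
payload = build_payload(history, "What is my name")
# payload["messages"] now carries the earlier turns, which is what lets
# the model answer "Your name is George."
```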


I think it does something a bit more sophisticated when choosing what to pass as context once the conversation grows beyond the token limit



Yes, I know, but it was a deliberate choice not to send the entire history, to keep the token consumption and costs low


I haven’t really had much time to test the results. It should be possible to tweak the settings (mostly temperature and prompt) to get better results. Also, I think I’m not sending the full history right now, to avoid consuming too many tokens


>Also I think I’m not sending the full history right now, to avoid consuming too many tokens

You can ask it to summarize the dialog, thus compressing the dialog history into a few sentences. That will give you some basic dialog context.
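Another cheap option, if you don't want to spend an extra call on summarization, is to keep only the newest turns that fit a budget. A rough sketch (the word count here is just a stand-in for a real token count from a tokenizer such as tiktoken):

```python
def trim_history(messages, budget=300):
    """Keep the newest messages whose rough 'token' cost (word count)
    fits within the budget, preserving chronological order."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk from the newest turn back
        cost = len(msg["content"].split())
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order
```

The trade-off versus summarization is that truncation drops old context entirely, while a summary keeps a lossy version of it for a few sentences' worth of tokens.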


Does it address the issue that led to ChatGPT being blocked in the first place?


no.


Feedback:

The prompt bar seems to be in front / on top of the (generated) text. Pretty annoying.


Thanks for the feedback! Tried to solve this just now, can you check?


Yep, it seems to be much better now. Thanks for the fast response!


I think if you want the same as ChatGPT you need to use the chat API, not the completions API.


It's confusing because it's actually called the Chat Completion API :) https://platform.openai.com/docs/guides/chat It's the one used by ChatGPT and it covers gpt-3.5-turbo and gpt-4


Recently I started using colima[0], a drop-in replacement for Docker Desktop on Mac, and have seen an increase in performance and battery life. You can use all the normal docker and docker compose commands. It does not have a GUI, but you can use the Docker extension in VS Code to get an overview of running containers.

[0] https://github.com/abiosoft/colima


I replaced Docker Desktop with `colima` a few months ago as well and have been using it daily since. I haven't had any issues; sometimes I just delete and recreate an instance to upgrade the Docker version, which only takes a few minutes.

I like the fact that I decide when I upgrade, not Docker Desktop nagging me every week.


I wrote a quick how-to a while ago for anyone looking to try this out: https://www.swyx.io/running-docker-without-docker-desktop


Looking to convert, but I still can't understand how this is more performant. Docker Desktop has lots of engineering going into performance crossing the host/VM barrier. IIRC lima just pipes over SSH. How could that be faster?


Docker Desktop is an Electron app, which might explain some of the performance and battery differences. The containers don't run in Electron, but that extra copy of Chrome is always running in the background.


It's quite trivial to have both installed and you can easily switch between Colima and Docker, I think it's worth testing it out.


I only use Docker Desktop for one thing - to see if one of my containers has accidentally started itself as amd64 instead of arm64. Sadly Colima doesn't seem to provide a way to do that.


Best part: it's QEMU so you can choose your CPU architecture and run x86_64 containers on ARM Macs


I've been telling my colleagues with M1 Macs to use the

  --platform linux/arm64
argument with Docker Desktop and that seems to be working for them.


x86_64 containers on ARM Macs are extremely slow.


It seems this may get better soon, when they run via Rosetta 2 rather than QEMU(?)

https://github.com/docker/roadmap/issues/384


I found that even with ARM images, Docker containers were already slow as it was.

I also never understood the justification for the added complexity it created, but I also don't have a dedicated ops team at my job to solve my problems.


Podman works the same way.


That works via QEMU userspace emulation inside the VM, so Docker Desktop has it too (you might need to install the QEMU binfmt_misc hooks package)


My personal experience: earlier this year at work, we migrated everyone to colima and I had to support devs with their issues. So many small issues kept popping up; it was definitely not a drop-in replacement for us.

The higher ups eventually let us just buy docker desktop and we are all happier now.


Exact same experience here to the point where I wasn’t sure if we worked together haha


Please help me understand: why is `brew install docker` not sufficient? Why do you also need Colima or Docker Desktop? Is it that a docker _daemon_ needs to be installed, which `docker` doesn't ship with?


macOS does not support running Docker containers (or vice versa, depending on your point of view). Instead you need a VM running Linux; Docker Desktop / Colima runs this VM for you.


I assume `brew install docker` just installs the docker CLI/etc, which can run on non-Linux OSes. However the docker daemon can only run on Linux, so something needs to setup a VM for it.


Homebrew handles this kinda poorly, so people are often confused. `brew install docker` installs the Docker CLI. `brew install --cask docker` installs Docker Desktop, and if you've permanently tapped homebrew-cask you'll get that instead of the Docker CLI.


omg that is terrible


Linux user here. It's not terrible. It's funny.


You are trying to be snarky at macOS users, but homebrew is BSD licensed…


Same. My requirements are very basic, so the switch to colima was basically seamless. I also appreciated being able to avoid Docker Desktop constantly trying to update itself (which is what ultimately motivated me to make the switch).

As a bonus, you can install the Docker CLI (e.g. `brew install --formula docker`) and use that to interact with any containers you start with colima.


Switched to colima a while back after the licensing debacle and have been mostly happy with it. Only real issue has been with some tools making assumptions about the docker socket location, which was easy to fix in the config file.


Thanks for sharing, I was missing something like this. Docker is too enterprisey for my taste. I'm using a VPS with Docker right now, which works well enough, but having no volume mounts is not very nice.


Does anyone know if the networking lets you ping the docker vm directly?

Looks like there's a `colima start --network-address` flag. I just spent the day trying to get a Docker VM I can ping directly.


It’s been a real doozy for me too, doubly so for local k8s in Docker (kind/k3s). I’ve tried a whole lot of variations; it’s hard, and even harder to scale across hundreds of devices.


Well, the `colima start --network-address` flag lets me ping the VM and use `--net=host`. Nice!


Sorry but off topic.

At what point did containerization begin to make sense for your workflow?


Not OP, but if it's helpful, I use containers for all 3rd party services I need to run for development. (e.g. Postgres, Redis, Localstack, etc). This makes it easy to onboard new developers as they just have to run `docker-compose up` and not worry about those. It also allows me to easily use different versions of those services in different projects or even branches of a project.
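A minimal sketch of such a compose file (the image versions and service names are illustrative, not from the original comment):

```yaml
# docker-compose.yml -- third-party services for local development
version: "3.8"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```

Pinning the image tags per project (or per branch) is what makes it easy to run different versions of the same service side by side.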


Same here, it's what I've been recommending as the alternative.


Somehow colima and my corporate VPN (using vpnc) keep deleting each other's routes (colima loses network access when you turn off vpnc); neither podman machine nor Rancher Desktop has this issue.


“Improves battery life by up to 0.5%.” To me this line is quite ridiculous. We all know that Chrome is now the main culprit for battery consumption on any laptop and could be optimized much more.


At the scale Chrome operates at, a 0.5% energy reduction globally is massive. All these small gains add up for individual devices as well.


It's massive in an absolute sense, so it makes sense to have a few people dedicated even to such small improvements.

But it's still tiny in a relative sense.


You mean the total energy saved across all devices? The only context in which anybody could give a fig about that is as it relates to total global energy consumption, of which it is an utterly negligible speck.

Edit: Had a go at putting some numbers on that for fun, and will partially retract my comment (no I don't, see below).

If it takes [1] around 0.01 kWh to charge a smartphone, there are [2] around 6 billion smartphones, each is charged once a day and this saves 0.05 (wrong, see below) of that usage, the saving is on the order of 3 GWh/day, i.e. around 125 MW.

Total global energy consumption is [3] around 20 TW, so the saving is around 0.0006% of it, and this is a very generous estimate (chrome isn't all the consumption of a smartphone, not all phones are in active use and charging once a day, etc).

That said, in absolute terms it's more than I would have guessed - comparable to the electricity consumption of a small town (no, much less, probably less than the output of a single wind turbine - see below).

[1] https://www.quora.com/How-much-in-kwh-does-it-take-to-charge...

[2] https://www.statista.com/statistics/330695/number-of-smartph...

[3] https://www.theworldcounts.com/stories/current_world_energy_...

Edit 2: On reflection, I think this is a big overestimate. I suspect the real saving is on the order of a few MW.

Edit 3: As the reply says, I missed a zero, which brings it down to probably on the order of a MW.
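For the record, the arithmetic with the corrected 0.5% figure can be checked like this (same rough inputs as above, before the additional qualitative discounts about Chrome's share of phone usage):

```python
# Back-of-envelope check of the energy-saving estimate, using the
# corrected 0.5% saving. All inputs are the rough figures cited above.

kwh_per_charge = 0.01          # ~energy to charge one smartphone [1]
phones = 6e9                   # smartphones in use [2]
saving_fraction = 0.005        # 0.5% battery-life improvement
global_consumption_tw = 20     # total world energy use, terawatts [3]

saved_kwh_per_day = kwh_per_charge * phones * saving_fraction
saved_gwh_per_day = saved_kwh_per_day / 1e6
average_mw = saved_kwh_per_day / 24 / 1e3   # kWh/day -> average kW -> MW

fraction_of_global = (average_mw * 1e6) / (global_consumption_tw * 1e12)

print(saved_gwh_per_day)    # ~0.3 GWh/day
print(average_mw)           # ~12.5 MW average
print(fraction_of_global)   # ~6.25e-7, i.e. about 0.00006% of global use
```

So the 10x correction brings the estimate from ~125 MW down to ~12.5 MW, before discounting further for Chrome being only part of a phone's consumption.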


Everything is negligible in comparison to global energy consumption though, because we use energy for so many different things.

If everyone who is in a position to reduce the energy consumption of their small part significantly does their part then it will add up to a significant effort.


I don't disagree at all, but the comment I was responding to was saying that globally it's "massive" which it really isn't. Every little helps, but this is little.


You overestimated by 10x (0.05 should be 0.005)


Ah, well spotted.


Yes, Google (and other global companies) actually cares about that. They work on fleet-level changes to make such differences worth it. It might not matter to you though.


What fleet-level changes are you referring to? The energy saving is certainly a nice thing (environmentally if nothing else) but it's not energy they are paying for. I don't see how it would change Google's operations in any way.


Yes, and that was compared to... a previous version of Chrome. So v94 is 0.5% better on battery than v93. I don't even get why this is news.


I kinda want to see a standard deviation on those measurements.


Genuine question: In what kind of places?


The remote controller is just radio, not Bluetooth. But inside the board there is also a Bluetooth module that connects just to a smartphone app. Data remains in the app. You can then decide to upload it to a web service in case you want to see the data on an interactive map or you want to share it. Anyway, it's just riding data, it's not associated with any of your personal information.

