Why compare to 2023 data? There's 2024 data readily available. https://energyandcleanair.org/december-2024-monthly-analysis...

1.) There's no need to focus on barrels per day; fuel export revenue per day is more important. And from the link, we can see that it was around 1B EUR per day in 2022, versus around 600M EUR per day in Dec. 2024. Not great, but as we will see, it's mainly propped up by China.

2.) The top 4 buyers of Russian fossil fuels in December 2024 were China, Turkey, India, and the EU. China being the top buyer is not surprising, given they are allied in the war - China is cutting off drone supplies to Ukraine while shipping more to Russia.

3.) Fossil fuel shipment departures from Russia have steadily declined, from 80% in Jan 2022 to less than 20% in December 2024.


We went to a lot of trouble to make our magic link implementation work with anti-phishing software, corp link checkers and more. https://github.com/FusionAuth/fusionauth-issues/issues/629 documents some of the struggle.

I think that a link to a page where you enter a one-time code gets around a lot of these issues.
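
For what it's worth, the core of that flow is small. A minimal sketch in Node (the in-memory store and the sendEmail step are hypothetical stand-ins; a real implementation needs persistence, rate limiting, and attempt limits):

    // Issue a short-lived six-digit code and verify it exactly once.
    const crypto = require('crypto');
    const codes = new Map(); // email -> { code, expires }

    function issueCode(email) {
      const code = crypto.randomInt(0, 1000000).toString().padStart(6, '0');
      codes.set(email, { code, expires: Date.now() + 10 * 60 * 1000 });
      // sendEmail(email, `Your login code: ${code}`); // hypothetical mailer
      return code;
    }

    function verifyCode(email, submitted) {
      const entry = codes.get(email);
      codes.delete(email); // one-time: burn the code no matter the outcome
      if (!entry || Date.now() > entry.expires) return false;
      if (submitted.length !== entry.code.length) return false;
      return crypto.timingSafeEqual(Buffer.from(entry.code), Buffer.from(submitted));
    }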


I recently tried to homebrew some anomaly detection for a performance-tracking project and was surprised at the absence of off-the-shelf OSS or paid solutions in this space (that weren't either super basic or way too complex). Lots of fertile ground here!
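
For a sense of what the "super basic" end looks like, a rolling z-score detector is only a few lines (a sketch; the window and threshold would need tuning per metric):

    // Flag points more than k standard deviations from the trailing-window mean.
    function zScoreAnomalies(series, window = 30, k = 3) {
      const anomalies = [];
      for (let i = window; i < series.length; i++) {
        const slice = series.slice(i - window, i);
        const mean = slice.reduce((a, b) => a + b, 0) / window;
        const std = Math.sqrt(slice.reduce((a, b) => a + (b - mean) ** 2, 0) / window) || 1;
        if (Math.abs(series[i] - mean) / std > k) anomalies.push(i);
      }
      return anomalies;
    }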

I write documentation for a living (a different, non-tech kind). The best resources in my opinion are the writing guides of various governments. Gov.uk leads the way, but the Australian government puts out great guides too.

Steve Krug's "Don't make me think" is old but still applies to the modern web.


I was curious as to the security context this runs in:

    curl -i 'https://porcini.us-east.host.bsky.network/xrpc/com.atproto.sync.getBlob?did=did:plc:j22nebhg6aek3kt2mex5ng7e&cid=bafkreic5fmelmhqoqxfjz2siw5ey43ixwlzg5gvv2pkkz7o25ikepv4zeq'
Here are the headers I got back:

    x-powered-by: Express
    access-control-allow-origin: *
    cache-control: private
    vary: Authorization, Accept-Encoding
    ratelimit-limit: 3000
    ratelimit-remaining: 2998
    ratelimit-reset: 1732482126
    ratelimit-policy: 3000;w=300
    content-length: 268
    x-content-type-options: nosniff
    content-security-policy: default-src 'none'; sandbox
    content-type: text/html; charset=utf-8
    date: Sun, 24 Nov 2024 20:57:24 GMT
    strict-transport-security: max-age=63072000
Presumably that ratelimit is against your IP?

"access-control-allow-origin: *" is interesting - it means you can access content hosted in this way using fetch() from JavaScript on any web page on any other domain.

"content-security-policy: default-src 'none'; sandbox" is very restrictive (which is good) - content hosted here won't be able to load additional scripts or images, and the sandbox tag means it can't run JavaScript either: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Co...


I make these headings and write down the answers:

  what is it
  how do i install it
  how do i use it
    any "pro-tips"/useful stuff to know
  how do i get help (optional)
  acknowledgements etc (optional)
Too often I see things skip the "what is it" part.

Everyone I know in the EU who builds something uses Lemonsqueezy, Gumroad or Paddle to invoice their customers: three companies not in the EU, charging an arm and a leg to handle the complicated invoicing for EU makers.

It's kind of strange how the EU manages not only to put its own companies at a disadvantage but also to drive new business to non-EU companies.

Some of my EU friends are even considering offering their services only to US customers, because with those the invoicing is easy. It's called something like "3rd party country revenue" in the EU tax system and has no additional bureaucracy attached to it. Meanwhile, doing business with customers from within the EU is so complicated that none of my EU friends understands it.


> Key extraction is difficult but not impossible.

Refer to the never-ending clown show that is Intel's SGX enclave for examples of this.

https://en.wikipedia.org/wiki/Software_Guard_Extensions#List...


Time to share my gitconfig aliases again :D (I post these from time to time on git threads; some might be useful to folks here.)

  lol = !git --no-pager log --graph --decorate --abbrev-commit --all --date=local -25 --pretty=short
  sw = !git checkout $(git branch -a --format '%(refname:short)' | sed 's~origin/~~' | sort | uniq | fzf)
  lc = !git rev-parse HEAD
  rb = !git for-each-ref --sort=-committerdate refs/heads/ --format='%(refname:short) %(objectname:short) %(committerdate:format:%F)' | column -t
  fza = "!git ls-files -m -o --exclude-standard | fzf -m --print0 | xargs -0 git add"
  gone = "!f() { git fetch --all --prune; git branch -vv | awk '/: gone]/{print $1}' | xargs git branch -D; }; f"
  root = rev-parse --show-toplevel
  oldest-ancestor = !zsh -c 'diff -u <(git rev-list --first-parent "${1:-main}") <(git rev-list --first-parent "${2:-HEAD}") | sed -ne \"s/^ //p\" | head -1' -
  diverges = !sh -c 'git rev-list --boundary $1...$2 | grep "^-" | cut -c2-'
  dlog = "!f() { GIT_EXTERNAL_DIFF=difft git log -p --ext-diff $@; }; f"
Some of these I don't use much, but others I use every day:

"git fza" (aliased to "ga") shows all unstaged files in fzf and you can use space to toggle them, then hitting enter finishes adding/staging them. This is great for selecting some files to stage. I use this one every day, it makes my workflow just a little better :)

"git gone" deletes local branches that don't exist in the remote. I just saw in this thread that git remote prune origin might do the same thing, I need to test that.

"git lol" is a log alias.

"git oldest-ancestor brancha branchb" does what it says.

"git root" is part of an alias "gr" which runs "cd $(git root)". That takes you to the project root, and "cd -" will take you back to your previous location.

"git dlog" shows a detailed commit log.

"git lc" just shows the last commit.

"git rb" shows recent branches. Piping it to "| sort -k3" will sort by date. (I really need to update that!)

"git sw" shows branches in fzf, hit enter on one and you checkout that branch.


Lots of Copies Keeps Stuff Safe

https://www.lockss.org/

This is a brilliant system relying on a randomised consensus protocol. I wanted to do my info sec dissertation on it, but its security model is extremely well thought out. There wasn't anything I felt I could add to it.


You might think that would be it?

    $ node -e "process.stdout.write('@'.repeat(128 * 1024)); process.stdout.write(''); setTimeout(()=>{ process.exit(0); }, 0);" | wc -c
    131072
Sure. But not so fast! You're still just racing and have no guarantees. Increase the pressure and it snaps back:

    $ node -e "process.stdout.write('@'.repeat(1280 * 1024)); process.stdout.write(''); setTimeout(()=>{ process.exit(0); }, 0);" | wc -c
    65536
Meanwhile:

    $ node -e "const { Stream } = require('stream'); s = process.stdout; s.write('@'.repeat(1280 * 1024)); Stream.finished(s, () => { process.exit(0); })" | wc -c
    1310720

I got the ebook when it came out, and it's a relatively nice ramp-up into the world of compiler development.

People interested in this project might also be interested in Cloudflare's WebRTC streaming service¹ as a cloud-hosted solution to the same problem: "Sub-second latency live streaming (using WHIP) and playback (using WHEP) to unlimited concurrent viewers." Using the same OBS WHIP plugin, you can just point to Cloudflare instead. Their target pricing model is $1 per 1000 minutes,² which equates to $0.06 per hour streamed.

¹ https://developers.cloudflare.com/stream/webrtc-beta

² https://blog.cloudflare.com/webrtc-whip-whep-cloudflare-stre...


> If you would like to use an alternative to AWS CodeCommit given this news, we recommend using GitLab, GitHub, or another third party source provider of your choice. We have written a blog which describes how to migrate your repository to one of these other solutions.

I found that blog post: "How to migrate your AWS CodeCommit repository to another Git provider" from 25th July 2024 https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-...

I wonder how long AWS will keep CodeCommit running for the customers who are already using it? I'm guessing many, many years.

Weird that there's no announcement anywhere (that I can find) about CodeCommit ceasing to onboard new customers. Apparently it happened on June 6th but this forum post from July 26th is the only thing that comes up in search.


And then random organizations and businesses wonder why I am not willing to hand over all my personal details on first contact, despite the 'utmost importance' they claim to place on handling my data securely, all this just to be told about their product. They even seem offended, with a "but we've done it this way for many years now", when I refuse and say goodbye as they insist it's "company policy".

Unluckily, so many give zero (or negative) fucks about their potential and existing customers. This includes medical providers sending all of a client's data and medical results over cleartext email, and even declaring, for their own convenience, that "The property and copyright or other intellectual property rights in the contents of any document or images provided to you shall remain our property", for your ultrasound results. Your medical results are their property if you use their services. So they do as they please with what is now their data, not your data, and whether it is protected is not your concern. And people still go there and rate this service 4.8 on Google, which is insane. Of course no one really reads the terms and conditions, not even for sensitive medical services. People do not learn.


There are a few services that do this already, but they are all somewhat lacking; hopefully Meta's paper / solution brings some significant improvements in this space.

The existing ones:

- Meshy https://www.meshy.ai/ one of the first movers in this space, though its quality isn't that great

- Rodin https://hyperhuman.deemos.com/rodin newer but folks are saying this is better

- Luma Labs has a 3D generator https://lumalabs.ai/genie but doesn't seem that popular



I have a Node.js passkey implementation over at AuthC (https://github.com/authcompanion/authcompanion2), a simple user management server. For JavaScript developers, https://github.com/MasterKale/SimpleWebAuthn has been a good way to get started with a PoC before venturing deeper into the WebAuthn (passkeys) spec.
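
To give a flavor of the server side with SimpleWebAuthn (a rough sketch; option names have shifted between major versions, so treat the exact fields as assumptions and check the current docs):

    // First step of the registration ceremony with @simplewebauthn/server.
    const { generateRegistrationOptions } = require('@simplewebauthn/server');

    async function startRegistration(user) {
      const options = await generateRegistrationOptions({
        rpName: 'Example App',   // human-readable relying party name
        rpID: 'example.com',     // must match the domain serving the page
        userName: user.email,
      });
      // Persist options.challenge server-side, then hand options to the browser,
      // which feeds them to navigator.credentials.create().
      return options;
    }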

QLoRA + Axolotl + a good foundation model (Llama/Mistral/etc., usually instruction fine-tuned) + RunPod works great.

A single A100 or H100 with 80GB VRAM can fine-tune 70B open models: at 4-bit, 70B parameters are roughly 35GB of weights, which leaves headroom for the LoRA adapters and activations. (Scaling out to many nodes/GPUs is obviously faster, and much cheaper GPUs work for fine-tuning smaller models.)

The localllama Reddit sub at https://www.reddit.com/r/LocalLLaMA/ is also an awesome community for the GPU poor :)


This appears to suffer from the same mistake as many things in this space: it focuses on making it really easy to run lots of software, but has a very poor story when it comes to keeping the data and time you put in safe across upgrades and issues. The only documented page on backups requires taking the entire system down, and there appears to be no guidance or provision for safely handling software upgrades. This sets people up for the worst kind of self-hosting failure: they get all excited about setting up a bunch of potentially really useful applications, invest time and data into them, then, without proper preparation, get burned really badly when it all comes crashing down in an upgrade or hardware failure. This is how people move back to SaaS and never look back. It's utterly critical to get right, and completely missing here.

Same here. .tk was the only one back then that allowed you to have your own domain name without subdomains. My memory is that:

1. freeserver.com/~username <- This was the first kind of URL you could have, sometimes nested inside another directory (freeserver.com/users/u/~username).

2. username.freeserver.com <- This wasn't that bad, but it didn't look professional. Tripod used to do this.

3. username.fs.com <- A service with a short domain that provided free subdomains. This was similar to 2 but shorter. Some of them allowed you to choose the domain part.

4. username.tk <- Among all the free options, this was the best one by far.

Then we grew up a bit and started paying for domains :')


Opus (and all prior Claude models) still do not correctly answer the Sally question, unfortunately:

Q: Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?

A: Okay, let's organize the information we have:

Sally is a girl. Sally has 3 brothers. Each of Sally's brothers has 2 sisters. Now, let's think about this step by step:

We know that Sally is a girl, so she is a sister to her brothers. Each of Sally's brothers has 2 sisters. One of these sisters is Sally, but who is the other sister? Since Sally has only brothers and no other sisters are mentioned, we can conclude that the other sister for each brother is also Sally.

Therefore, Sally has no sisters other than herself. So, the answer is that Sally has 0 sisters.

https://imgur.com/a/EawcbeL


I use this script because automatically generated subtitles are badly formatted as a transcript (they're only good as subtitles). It works pretty well for archiving videos together with their transcripts and subtitles.

```

#!/bin/zsh

# download as mp4, get normal subtitles

yt-dlp -f mp4 "$@" --write-auto-sub --sub-format best --write-sub

# download subtitles and convert them to transcript

yt-dlp --skip-download --write-subs --write-auto-subs --sub-lang en -k --sub-format ttml --convert-subs srt --exec before_dl:"sed -e '/^[0-9][0-9]:[0-9][0-9]:[0-9][0-9].[0-9][0-9][0-9] --> [0-9][0-9]:[0-9][0-9]:[0-9][0-9].[0-9][0-9][0-9]$/d' -e '/^[[:digit:]]\{1,3\}$/d' -e 's/<[^>]*>//g' -e '/^[[:space:]]*$/d' -i '' %(requested_subtitles.:.filepath)#q" "$@"

```


Remember when Google was cool and not evil and released their book scanner project for free? https://code.google.com/archive/p/linear-book-scanner/

  "Google hereby grants to you a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, transfer, and otherwise run, modify and propagate this design..."

The whole Sketch -> InVision -> Figma pipeline was fast; I've never seen incumbents displaced so quickly.

People really need to understand that Google is a garbage company that can't be relied on for anything. I've spent years as a consultant, and several more in the startup world, and I'd rather host all my content in us-east-1 than rely on google.

* I've had customers get their cloud accounts shut down for a literal three cent billing error,

* I've seen people who use Google for advertising pay thousands of dollars and suddenly stop getting their ads shown,

* I personally ran a website with Google Ads, and the first time I tried to cash out was accused of click fraud and had all the earned money stolen,

* I've had product managers for GCP lecture me that I shouldn't be using "preview" services (despite services being in preview for years), and then tell me I should have used a different service . . . that was also in preview.

One of my more recent startups was actually funded by a Google-based venture firm, and they wanted to know why we refused to use GCP. My answer was that I wanted the startup to actually succeed, and it wasn't worth the risk of dealing with Google's horrible support.


Developers don't write tests if writing tests is hard. Simple as that. If writing tests is hard because you never invested in setting up good test infrastructure with helpful utilities, you fucked up. If writing tests is hard because your architecture is a clusterfuck of mixed responsibilities, you fucked up.

Unless you have specific needs, the only type of UUID you should care about is v4.

v1: MAC address + time + random

v4: completely random

v5: namespace + input, hashed (deterministic: the same input always yields the same UUID)

v7: time + random (distributed sortable ids)
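
In Node, v4 is built in; the rest typically come from the npm uuid package (a sketch; v7 generation needs a recent release of that package):

    const crypto = require('crypto');
    console.log(crypto.randomUUID()); // v4: completely random

    const { v5: uuidv5, v7: uuidv7 } = require('uuid');
    console.log(uuidv5('example.com', uuidv5.DNS)); // v5: same input + namespace -> same UUID
    console.log(uuidv7()); // v7: time-prefixed, so lexically sortable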


Model development on Prophet stopped this year: https://medium.com/@cuongduong_35162/facebook-prophet-in-202...

They recommend checking out these for cutting-edge time series forecasting:

https://neuralprophet.com/

https://nixtla.github.io/statsforecast/


What I've seen in most companies of that size is that the CEO or founder holds the master passwords and everyone else uses IAM or OAuth or equivalent.

AWS recommends actually throwing away the root key after you grant full access to an IAM user. In the rare case you need it, you can recover it via support.

But generally speaking, a password manager with built-in credential sharing is your best bet. In most cases the CEO would own that account, or in a good org, ownership is shared and/or split among a few top execs.

And if you want a model not to follow: sharing the AWS root key with all the devs. That's what we did at reddit when we first started, before any best practices existed (and before IAM existed).

