
No support for symbols, amirite?


It's time to bring in legislation that limits what percentage of total compensation can be comprised of equities. It will cap the TC package and somewhat disincentivize short-term stock price thinking that currently dominates the boards of large corporations.


> It's time to bring in legislation that limits what percentage of total compensation can be comprised of equities. It will cap the TC package

Why? You’d just pay more cash. Or incentive bonds or whatnot.

Just add a marginal tax tier at $10mm or whatever.


More cash automatically means more taxes. It gets reported as salary. Payment in equities is essentially a tax dodge as the only time they're taxed is when they get sold (capital gains).


> More cash automatically means more taxes. It gets reported as salary.

Pay more to cover the taxes. Or, as I suggested, pay in assets that aren't equity. Bonds, for instance. Or, like, buy the executive's side business once a year.

Regulating compensation is a silly way to get around raising taxes on the rich.


Capping CEO TC in any way whatsoever is impossible; they'll always find a loophole. Not to mention it's a feature, not a bug, for the current administration (and probably for any administration).

Better to incentivize companies to give CEOs more money when their employees get more money, and to disincentivize short-term stock price manipulation.


Everyone knows this. Every layperson I talk to is aware that these companies are siphoning their information. When free email was introduced over two decades ago, the behaviour was the same. Everyone knew Microsoft and Google could read your emails. Then, like now, people think it's worth it. It is too useful a tool to have and the price is palatable.

What people don't want to do is sign up for yet another subscription. There's immense subscription fatigue among the general population, especially in tough economic times such as now.


Agreed. Not only do I think it's worth it, I actually like that I can contribute. I'm getting so much good value for free that I think it's fair. It's a win-win situation. The AIs get better and I get better answers.


This is a funny take. I love your optimism, but it's so extremely naive, it should have a name.


It's not naive. The value these AI chatbots provide to me is extremely high.

I've been writing code for many years, but one of the areas I wanted to improve was debugging. I've always printed variables, but last month I decided to start using a debugger instead of logging to the console. For the past few weeks I've only been using breakpoints and the resume-program function, because the step-into, step-over, and step-out functions have always been confusing to me. An hour ago I sent Gemini images of my debugger and explained my problem, and it actually told me what to do: it explained what the step-* functions did and walked me through them step by step (I sent it a new screenshot after each step and asked it to explain what was going on).

I now have a much better understanding of how debuggers work thanks to Gemini.

I'm fine with Google getting my data; the value I just got was immense.
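For anyone else who found the step-* functions confusing, here's a tiny made-up example (my own sketch, not from the Gemini chat) of where they differ:

    def price_with_tax(price, rate):
        tax = price * rate          # "step into" on the call below lands here first
        return price + tax          # "step out" from inside here finishes the function
                                    # and stops back in checkout()

    def checkout():
        subtotal = 10.0 + 20.0
        total = price_with_tax(subtotal, 0.08)   # breakpoint here; "step over" runs the
        rounded = round(total, 2)                # whole call and stops on this next line
        return rounded

    print(checkout())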


I got that from your first post. As in any game-theoretic context, a win-win is only possible in a non-zero-sum game with relatively balanced benefits. It's clear that you can see the value you get, and maybe even quantify it. However, you can't quantify the other side, nor the degree to which its win will affect your win in the relatively short term (a few years, tops).

Two things come to mind

The less relevant one is that, as a coder, once there's a good-enough model (good enough = benefit/cost), your "win" will go to 0. And your contribution to what made that win go to 0 is non-zero, but you're not going to get anything for it.

The more relevant one, longer term, is that you may end up being predictable (a good-enough model of you will exist), and that model will be able to extract value out of you personally forever, again without anything for you to gain.

Both may be argued against, or argued to be unavoidable regardless. But in either case, your "price point" has been chosen arbitrarily, at least from your perspective. I.e. it's not an informed choice on your end. A bit like the Monty Hall problem: you chose a door with little information. The act of sticking to the door you chose is why you're naive.


I'm not sure I follow: Both Apple and Amazon are working on AI as we speak. They're just not following the popular approach of releasing a chatbot in the wild.

Apple is focusing on a privacy-first approach with smaller models that run locally. Amazon is tying its models to an AWS subscription and incentivizing use by offering discounts, making it cheaper to use their models over GPT, Opus, etc.


Apple is not focusing on AI with any real emphasis. The engineers asked for $50B to train a model and Apple instead did stock buybacks. The stock kept underperforming, so they touted Apple Intelligence and a revamped Siri, only for it to fall flat. Siri was underinvested in for many years and should be at least as good as Claude or ChatGPT. 'They're not investing in a chatbot' is a huge miss by Apple, which had a chatbot on everyone's devices and a head start on the whole concept.


> The engineers asked for $50B to train a model and Apple instead did stock buybacks.

It is probably cheaper to simply integrate with OpenAI or Anthropic or whoever might unseat them in the future, than spend $50B on training a model. Not only is it cheaper, but it also gives them the flexibility to ride the wave of popularity, without ceding hardware or software sales.


And in that is the issue, Apple does not believe they could do better than Google, Meta, xAI, Anthropic, or OpenAI. They are paying Google rather than building out their own products. Pre-Tim, Apple was pouring profits back into R&D but now the priority is rewarding shareholders.


It depends on what you think is going to happen with models.

The way I see it, models were always predicated on openness and data-sharing. That, too, will be the competitive downfall of those who poured billions into creating said models.

They won't stay caged up forever. Ultimately the only thing OpenAI has between itself and its competitors is some really strong computers.

Well... anybody can buy strong computers. This is, of course, assuming you don't believe the promise of ever-increasing cognition and eventual AGI, which I don't. The people going fast aren't going to be the winners. The second movers and those after them will be. They get all of the model at 1/100th of the cost.

Ultimately models, right now, for most consumers, are nothing more than novelties. Just look at Google Pixel - I have one, by the way. I can generate a custom image? Neat... I guess. It's a cool novelty. I can edit people out of my pictures? Well... actually Apple had that a couple years ago... and it's not as useful as you would think.

It's hard to see it because we're programmers, but right now, the AI products are really lacking for consumers. They're not good for music, or TV, or movies, or short-form entertainment. ChatGPT is neat for specific use cases like cheating on an essay or vibe coding, but how many people are doing that? Very, very few.

Let me put it this way. Do I think Claude Code is good? Yes. Do I think Claude Code is the next Madonna? No.


> Ultimately the only thing OpenAI has between itself and its competitors is some really strong computers.

There's a lot of difference between OpenAI and, let's say, Facebook (Llama). The difference between them is not only strong computers. There are architectural differences between the models.


From a technical perspective, yes. But from a business perspective they're going to try to cram every model into every possible use case for as long as possible.

It will be a sign of a maturing market when we see vendors actually say "bad for X" and pulling away from general-purpose messaging.

You see it a little bit with the angling for code-specific products, but I think we're nowhere near a differentiated market.


> They are paying Google rather than building out their own products.

This is the real death knell people should focus on. Apple buried their AI R&D to rush complete flops like the Vision Pro out the door. Now that the dust has settled, these hardware ventures were clearly a mistake given the opportunity cost. Apple had more than a decade to sharpen their knives and prepare for war with Nvidia, and now they're missing out on Nvidia's share of the datacenter market. Adding insult to injury, they're probably also ~10 years behind the industry SOTA unless they hire veterans at great expense.

Apple's chronic disdain for unprofitable products, combined with boneheaded ambition, will be the death of them. They cannot obviate real innovation and competition while dropping nothingburger software and hardware products clearly intended to bilk an unconscious userbase.


Apple doesn't need its own model--it can license Google's model; or if terms are unprofitable, license Anthropic's or OpenAI's or Mistral's etc. And eventually it can build its own model.

Think of how important it is for any AI model company to be the go-to model on the iPhone. Google pays Apple billions to be the default search engine on the iPhone.


According to my information, Apple is currently being paid ~0.0 billion dollars for the privilege of being the default AI provider.

Am I supposed to keep waiting until that changes one day?


Exactly my point - OpenAI is giving Apple its model for free. Which saves Apple many $B's in compute to train their own model. You could argue it's being paid in-kind.

Unlike OpenAI, Apple doesn't need to charge a subscription to get AI-based revenue. It just needs to properly integrate it into its products that billions of people are already using, to make those products more useful so people continue buying them. At that point most users don't care what model is powering it - could be GPT, Claude, Mistral etc.


> Not only is it cheaper, but it also gives them the flexibility to ride the wave

And also to hop off without any penalty if/when the wave collapses.


Didn't Apple's research lab release some open source/weights diffusion-based LLM that was blowing away all the benchmarks?

Edit: Yes it exists, seems to be built off qwen2.5 coder. Not sure it proves the point I thought it was, but diffusion LLMs still seem neat


> ‘They’re not investing in a chatbot’ is a huge miss by apple

Why? Because everyone else is doing it (and not making a profit btw)?


Why bungle an AI release named Apple Intelligence that doesn't do what was advertised, then half-ass the integration with OpenAI?

Something about needing an incentive for people to buy a phone that otherwise looks and acts identical to a 5-year-old phone.


> The engineers asked for $50B to train a model and Apple instead did stock buybacks.

Source?


I heard this from an Apple Explained video; I don't know the original source: https://m.youtube.com/watch?v=JUG1PlqAUJk


I’ll train their model for 49.9B


Right, but remember Microsoft was 'working on' mobile also. The issue is that they're working on it the wrong way. Amazon is focused on price and treating it like a commodity. Apple is trying to keep the iPhone at the center of everything. Neither is fully committing to the paradigm shift: they say it is one, but they're not acting like it, because their existing strategy/culture precludes them from doing so.


> The issue is that they're working on it the wrong way.

So is everyone else, to be fair. Chat is a horrible way to interact with computers — and even if we accept that worse is better, its only viable future is to include ads in the responses. That isn't a game Apple is going to want to play. They are a hardware company.

More likely someday we'll get the "iPhone moment" when we realize all previous efforts were misguided. Can Apple rise up then? That remains to be seen, but it will likely be someone unexpected. Look at any successful business venture and the eventual "winner" is usually someone who sat back and watched all the mistakes be made first.


> Chat is a horrible way to interact with computers

Why? We interact with people via chat when possible. It seems pretty clear that's humanity's preferred interaction model.


We begrudgingly accept chat as the lowest common denominator when there is no better option, but it's clear we don't prefer it when better options are available. Just look in any fast food restaurant that has adopted those ordering terminals and see how many are still lining up at the counter to chat with the cashier... In fact, McDonald's found that their sales rose by 30% when they eliminated chatting from the process, so clearly people found it to be a hindrance.

We don't know what is better for this technology yet, so it stands to reason that we reverted to the lowest common denominator again, but there is no reason why we will or will want to stay there. Someone is bound to figure out a better way. Maybe even Apple. That business was built on being late to the party. Although, granted, it remains to be seen if that is something it can continue absent Jobs.


> In fact, McDonald's found that their sales rose by 30% when they eliminated chatting from the process, so clearly people found it to be a hindrance.

That's a good supporting argument, but I don't think McDonald's adequately represents more complex discussions.


What is representative, though, is simple use: All you have to do is use chat to see how awful it is.

It is better than nothing. It is arguably the best we have right now to make use of the technology. But, unless this AI thing is all hype and goes nowhere, smart minds aren't going to sit idle as the technology progresses towards maturity.


I imagine it's like how humans converse: we talk, but sometimes we need diagrams and pictures.

"What burgers do you have?"

(expands to show a set of pictures)

"I'll have the thing with chicken and lettuce"


The problem with UX driven by this kind of interface is latency. Right now, this kind of flow goes more like:

"What burgers do you have?"

(Thinking...) (4 seconds later:)

(expands to show a set of pictures)

"Sigh. I'll have the thing with chicken and lettuce"

(Thinking...) (3 seconds later:)

> "Do you mean the Crispy McChicken TM McSandwich TM?"

"Yes"

(Thinking...) (4 seconds later:)

> "Would you like anything else?"

"No"

(Thinking...) (5 seconds later:)

> "Would you like to supersize that?"

"Is there a human I can speak with? Or perhaps I can just point and grunt to one of the workers behind the counter? Anyone?"

It's just exasperating, and it's not easy to overcome until local inference is cheap and common. Even if you do voice recognition on the kiosk, which probably works well enough these days, there's still the round trip to OpenAI and then the inference time there. And of course, this whole scenario gets even worse and more frustrating anywhere with subpar internet.


Right. We talk when it is the only viable choice in front of us, but as soon as options are available, talk goes out the window pretty quickly. It is not our ideal mode of communication, just the lowest common denominator that works in most situations.

But, now, remember, unlike humans, AI can do things like materialize diagrams and pictures out of "thin air" and can even make them interactive right on the spot. It can also do a whole lot of things that you and I haven't even thought of yet. It is not bound by the same limitations of the human mind and body. It is not human.

For what reason is there to think that chat will remain the primary mode of using this technology? It is the most obvious way to use the technology, so it is unsurprising that it is what we got first, but why would we stop here? Chat works, but it is not good. There are so many unexplored possibilities for something better, and we're just getting started.


I think chat will remain dominant, but we'll go into other modes as needed. There's no more efficient way to communicate "show me the burgers" than saying it - thinking it is possible, but sending thoughts is too far off right now. Then you switch to imagery or hand gestures or whatever else when they're a better way to show something.


> Chat is a horrible way to interact with computers

Chat is like the command line, but with easier syntax. This makes it usable by an order of magnitude more people.

Entertainment tasks lend themselves well to GUI type interfaces. Information retrieval and manipulation tasks will probably be better with chat type interfaces. Command and control are also better with chat or voice (beyond the 4-6 most common controls that can be displayed on a GUI).


> Chat is like the command line, but with easier syntax.

I kinda disagree with this analogy.

The command line is precise, concise, and opaque. If you know the right incantations, you can do some really powerful things really quickly. Some people understand the rules behind it, and so can be incredibly efficient with it. Most don't, though.

Chat with LLMs is fuzzy, slow-and-iterative... and differently opaque. You don't need to know how the system works, but you can probably approach something powerful if you accept a certain amount of saying "close, but don't delete files that end in y".

The "differently-opaque" for LLM chatbots comes in you needing to ultimately trust that the system is going to get it right based on what you said. The command line will do exactly what you told it to, if you know enough to understand what you told it to. The chatbot will do... something that's probably related to what you told it to, and might be what it did last time you asked for the same thing, or might not.

For a lot of people the chatbot experience is undeniably better, or at least lets them attempt things they'd never have even approached with the raw command line.


> Chat is like the command line

Exactly. Nobody really wants to use the command-line as the primary mode of computing, not even the experts who know how to use it well. People will accept it when there is no better tool for the job, but it is not going to become the preferred way to use computers again no matter how much easier it is to use this time. We didn't move away from the command-line simply because it required some specialized knowledge to use.

Chatting with LLMs looks pretty good right now because we haven't yet figured out a better way, but there is no reason to think we won't figure out a better way. Almost certainly people will revert to chat for certain tasks, like people still use the command-line even today, but it won't be the primary mode of computing like the current crop of services are betting on. This technology is much too valuable for it to stay locked in shitty chat clients (and especially shitty chat clients serving advertisements, which is the inevitable future for these businesses betting on chat — they can't keep haemorrhaging money forever and individuals won't pay enough for a software service).


In my experience, Claude Code is a fantastic way to interact with a (limited subset of) my computer. I do not think Claude is too far off from being able to do stuff like read my texts, emails, and calendar and take actions in those apps, which is pretty much what people want Siri to (reliably) do these days.


> Apple trying to keep iPhone at the centre of everything.

Mac, iPad and iPhone, eventually Watch and Vision. Which makes sense since Apple is first and foremost a hardware company.


Well, no: Alexa+ is the first LLM to integrate with the smart home in a big way.

AWS is making strides, but in a different area.


It is a commodity. That's the paradigm shift. There is no moat.


The most uniform pieces come from this onion dicer: https://latacocarts.com/products/onion-dicer

I used to work in fast food and this bad boy has a rate of 0.5 onions/sec and all of the resulting pieces are perfectly uniform squares. If you've ever wondered where the perfectly diced onions garnishing your burger came from, this is it.

It was a pain to clean though, as the blades were exceedingly sharp. Someone would cut their fingers about once a week on those things.


> It was a pain to clean though, as the blades were exceedingly sharp. Someone would cut their fingers about once a week on those things.

Much better than the various food-cutting tools available to consumers, which (apart from knives) are always exceedingly dull IME, to the point of being useless. And the weird shapes make them impossible to sharpen yourself.


Not dishwasher safe?


These commercial tools are often odd-shaped (this one is a foot and a half tall) and not dishwasher friendly. Even if you found a way to somehow fit it into the dishwasher, the jets may not reach the blades.


I have a knife I use only to cut veggies, I never wash it with soap, just rinse it off and put it back in the block.

Veggies aren't meat.

This is the same for my frying pans. Just rinse them. When was the last time you saw someone use soap to clean a bbq?


This is why you're way more likely to get food poisoning at home. Or at least at this guy's home.


You'll never get food poisoning from a frying pan, because you don't use soap. Or have you heard of people getting food poisoning, from not washing their bbq with soap?

Note I didn't say pots. Boiling isn't anywhere near as hot as frying.

The knife? The horrors! I do rinse it and remove all biological matter. Yes, there's still some there. I assure you the wooden block people use is teeming with bacteria, so do you wash the knife before using it?

I wonder. I often pick fruit from trees, sometimes spit on it and then brush it off on my shirt. Do you do the same?

When you get home from the grocery store, do you wash all your veggies with soap? Or do you just use water? What about your fruit? All washed with soap?

If not, I assure you the fruit and vegetables are far worse than the knife, rinsed off.

And yes, I do wash my hands before preparing food -- and just before eating it. Veggies just aren't meat.

If they were, you'd never see someone eat an apple straight from the store or off a tree without washing it with soap first. I mean honestly, the grocery store apple has often traveled thousands of miles in a crate on a ship, been handled by the people putting the food out, by other customers, by you, and been in a bag that isn't sterilized.

I wonder again, how many use soap on that apple?

Of course when I eat an apple all that's left is the stem, so people are picky anyhow.


Wood chopping boards are actually not teeming with bacteria.

https://www.sciencedirect.com/science/article/pii/S0362028X2...


Wood blocks, you put knives in like this:

https://boutiquedelabalayeuse.com/products/bloc-a-couteaux-p...

They're exceptionally popular in many places. It's not like people wash the holes, some are decades old. And at no point did I say they were 'teeming' with bacteria, I used it as an example of a thing not cleaned.

It's not like stainless is teeming with bacteria either, especially when you rinse a knife off. It's far less porous and craggy than those wooden blocks after a decade of use.

The logic is simple; compare these things to other actions. Otherwise it's all show and theatre to make one feel good, like the TSA.


It's not about cleaning, wood has antibacterial properties, it sucks the moisture out of bacteria and kills them. That's true for both wood chopping boards and blocks. There is plenty of literature about it: https://www.sciencedirect.com/science/article/pii/S266676572...

> And at no point did I say they were 'teeming' with bacteria

Is this not a direct quote from your previous comment: "I assure you the wooden block people use, is teeming with bacteria"?


Apparently so, re: teeming.

But the problem isn't just wood, it's also long term dirt accumulation. And this study is absolutely not validation of your point, stating "Despite the many investigations on the topic, the antibacterial activity of wood is far from fully understood", while also saying different species, and hard vs softwood all have different tested effectiveness.

This is also about dry wood, yet I've seen countless people put their knives away wet/damp. Some of these blocks rarely have time to dry.

I've also seen mould growing on soap, damp debris, and these are things which end up in the block's slots... never washed or cleaned.

I'm not saying don't use them. I'm saying it's silly to wash frying pans with soap, or to wash knives used only for vegetables with soap. Not needed.


Just because I can't have my cooking utensils sterile 100% of the time doesn't mean I can't put minimal effort into reducing the risk. I don't want to cook on frying pans covered in rancid oil and dust. The recommendation is to use soap even for cast iron pans.

Maybe some people do, but I also don't put any wet dishes and cutlery away, I have a dish drainer. If I found my soap was growing mould, I'd throw it in the bin, not write it off as a thing that happens and there's no need to worry about it.

> this study is absolutely not validation of your point, stating "Despite the many investigations on the topic, the antibacterial activity of wood is far from fully understood"

This is standard boilerplate present in nearly any paper, scientists never claim that a topic is fully understood and doesn't require any further research.


The point is, you're using TSA logic, and made up issues like dust and rancid oil exhibit that.

I also notice you haven't responded about washing food with soap. Or about BBQing. Please don't tell me you throw away canned food past its best before, too. That would crush my soul further.

If you do, please wait until tomorrow to do so, so I may steel myself for the shock.


I don't know what TSA logic is. Oil absolutely gets gooey and disgusting if left for a few days. It's fine if that's not an issue for you, I shouldn't criticise people's personal taste, but I prefer not to eat that.

On your other questions, I will refer you back to my earlier comment in case you missed it: "Just because I can't have my cooking utensils sterile 100% of the time doesn't mean I can't put minimal effort into reducing the risk."


> I don't know what TSA logic is. Oil absolutely gets gooey and disgusting if left for a few days.

That's an issue when there is enough of it to affect the final meal, not if there are microscopic amounts left on the surface of the pan. Just wiping a pan down is enough if you use it regularly.

Rancid oil is also not going to give you food poisoning like the guy that started this subthread claimed was happening.


This is absurd. Do you also worry about particulates in the air you're breathing all the time?


No. This is not a solution.

While git LFS is just a kludge for now, writing a filter argument during the clone operation is not the long-term solution either.

Git clone is the very first command most people will run when learning how to use git. Emphasized for effect: the very first command.

Will they remember to write the filter? Maybe, if the tutorial to the cool codebase they're trying to access mentions it. Maybe not. What happens if they don't? It may take a long time without any obvious indication. And if they do? The cloned repo might not be compilable/usable since the blobs are missing.

Say they do get it right. Will they understand it? Most likely not. We are exposing the inner workings of git on the very first command they learn. What's a blob? Why do I need to filter on it? Where are blobs stored? It's classic abstraction leakage.
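For concreteness, this is the kind of incantation we're expecting newcomers to know about (the flags are real git options; the repository URL is a placeholder):

    # plain clone: every version of every blob comes down
    git clone https://example.com/big-repo.git

    # "blobless" partial clone: history and trees are downloaded, file contents
    # are fetched lazily for whatever you actually check out
    git clone --filter=blob:none https://example.com/big-repo.git

    # or only skip blobs above a size threshold
    git clone --filter=blob:limit=1m https://example.com/big-repo.git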

This is a solved problem: Rsync does it. Just port the bloody implementation over. It does mean supporting alternative representations or moving away from blobs altogether, which git maintainers seem unwilling to do.


I totally agree. This follows a long tradition of Git "fixing" things by adding a flag that 99% of users won't ever discover. They never fix the defaults.

And yes, you can fix defaults without breaking backwards compatibility.


> They never fix the defaults

Not strictly true. They did change the default push behaviour from "matching" to "simple" in Git 2.0.


So what was the second time the stopped watch was right?

I agree with GP. The git community is very fond of doing checkbox fixes for team problems that aren’t or can’t be set as defaults and so require constant user intervention to work. See also some of the sparse checkout systems and adding notes to commits after the fact. They only work if you turn every pull and push into a flurry of activity. Which means they will never work from your IDE. Those are non fixes that pollute the space for actual fixes.


I’ve used git since its inception. Never once in an “IDE”. Should users that refuse to learn the tool really be the target?

I’m not trying to argue that interface doesn’t matter. I use jq enough to be in that unfortunate category where I despise its interface. But it is difficult for me to imagine being similarly incapable in git.


Developers who insist that tools and techniques are personal rather than a group decision generally get talked about unkindly. We are all in this together and you have to support things you don’t even use. That’s the facts on the ground, and more importantly, that’s the job.


> Should users that refuse to learn the tool really be the target?

Maybe not, but that's not the only group of people that are affected. It also affects beginners and people that don't want to exhaustively read the manual.

Should they be the target? Obviously yes.


> The cloned repo might not be compilable/usable since the blobs are missing.

Only the histories of the blobs are filtered out.


> This is a solved problem: Rsync does it.

Can you explain what the solution is? I don't mean the details of the rsync algorithm, but rather what it would look like from the users' perspective. What files are on your local filesystem when you do a "git clone"?


When you do a shallow clone, no files would be present. However, when doing a full clone you'll get a full copy of each version of each blob, and what is being suggested is to treat each revision as an rsync operation on the last. And the more times you muck with a file (which can happen a lot, both with assets and if you check in your deps to get exact snapshotting of code), the more big-file churn you get.


The overwhelming majority of large assets (images, audio, video) will receive near-zero benefit from using the rsync algorithm. The formats generally have massive byte-level differences even after small “tweaks” to a file.


Video might be strictly out of scope for git; consider that not even YouTube allows 'updating' a video.

This will sound absolutely insane, but maybe the source code for the video should be a script? Then the process of building produces a video which is a release artifact?


This is relatively niche, but that's a thing for anime fan-encodes: some groups publish their vapoursynth scripts, allowing you to produce the same re-encoding (given you have the same source video), e.g.:

* https://github.com/LightArrowsEXE/Encoding-Projects

* https://github.com/Beatrice-Raws/encode-scripts


Hm, the video itself would probably be referenced by an indexable identifier like "Anime X Season 1 Chapter 5", and provisioning of the actual video would be up to the builder to get (probably from some torrent network or from DVD although no one will do that)


> This will sound absolutely insane, but maybe the source code for the video should be a script? Then the process of building produces a video which is a release artifact?

It already kinda is, but that just means you now need access to all the raw footage, and rendering a video file in high quality & good compression takes a long time.

https://en.wikipedia.org/wiki/Edit_decision_list


I see, I think in that case the raw video format would still be the source code along with the EDL. What I'm suggesting is that the raw footage would still be an output from the source code that would be the script and the filming plans.

Silly idea, but it's worth thinking about this stuff in an era where the line between source code and target code is being blurred with prompts.


Which is a problem only if you think of "building" as something that should be instantaneous or take a couple of hours tops.

This is similar to replicability in science: some experiments are immensely expensive to replicate, like the LHC, but they still ARE technically replicable.


That is nowhere near practical for even basic use cases like a website or a mobile app.


Isn't it? In practice it means that the "video" should live outside of the git repo, you could just download it from an external repo, and you always have the script to recreate it if it ever goes down.

For example:

PromotionalDemo.mp4.script

"Make a video 10 seconds long showcasing the video, a voice in off should say 'We can click here if we want to do this, or click there if we want to go there'. 1024*768 resolution. Male voice. Perky attitude"


A lot of video editing includes splicing/deleting some footage, rather than full video rework. rsync, with its rolling hash approach, can work wonders for this use-case.
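A toy sketch of the idea (this is not rsync's actual implementation; the block size and hash are simplified):

    BLOCK = 16

    def weak_hash(block):
        # stand-in for rsync's rolling Adler-style checksum
        return sum(block) % 65521

    def reusable_blocks(old, new):
        # index every fixed-size block of the old file by its weak hash
        old_index = {}
        for i in range(0, len(old) - BLOCK + 1, BLOCK):
            old_index.setdefault(weak_hash(old[i:i + BLOCK]), []).append(i)
        reused, i = 0, 0
        while i + BLOCK <= len(new):
            window = new[i:i + BLOCK]
            if any(old[j:j + BLOCK] == window for j in old_index.get(weak_hash(window), [])):
                reused += 1
                i += BLOCK          # whole block can be copied from the old file
            else:
                i += 1              # literal byte that has to be transmitted
        return reused

    old = b"A" * 64 + b"SCENE" * 16 + b"B" * 64
    new = b"A" * 64 + b"CUT!" + b"SCENE" * 16 + b"B" * 64   # a few bytes spliced in
    print(reusable_blocks(old, new))   # most blocks still match despite the splice

So a splice only costs the bytes around the edit instead of re-sending the whole file.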


Maybe a manual filter isn't the right solution, but this does seem to add a lot of missing pieces.

The first time you try to commit on a new install, git nags you to set your email address and name. I could see something similar happen the first time you clone a repo that hits the default global filter size, with instructions on how to disable it globally.

> The cloned repo might not be compilable/usable since the blobs are missing.

Maybe I misunderstood the article, but isn't the point of the filter to prevent downloading the full history of big files, and instead only check out the required version (like LFS does).

So a filter of 1 byte will always give you a working tree, but trying to check out a prior commit will require a full download of all files.
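If I've read the docs right, the flow looks roughly like this (real flags; the URL and size threshold are placeholders):

    git clone --filter=blob:limit=1k https://example.com/repo.git
    cd repo                  # the working tree for HEAD is complete (big blobs were
                             # fetched on demand during checkout)
    git checkout v1.0        # older large blobs are fetched from the remote only now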


Would it be incorrect to say that most of the bloat relates to historical revisions? If so, maybe an rsync-like behavior starting with the most current version of the files would be the best starting point. (Which is all most people will need anyhow.)


> Would it be incorrect to say that most of the bloat relates to historical revisions?

Based on my experience (YMMV), I think it is incorrect, yes, because any time I've performed a shallow clone of a repository, the saving wasn't as much as one would intuitively imagine (in other words: history is stored very efficiently).


Doing a bit of digging seems to confirm that, considering that git actually does remove a lot of redundant data during the garbage collection phase. It does, however, store complete files (unlike a VCS like Mercurial, which stores deltas), so it nonetheless might still benefit from a download-the-current-snapshot-first approach.


> It does however store complete files (unlike a VCS like mercurial which stores deltas)

The logical model of git is that it stores complete files. The physical model of git is that these complete files are stored as deltas within pack files (except for new files which haven't been packed yet; by default git automatically packs once there are too many of these loose files, and they're always packed in its network protocol when sending or receiving).
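You can see this on any repo with standard git plumbing (output format varies a bit by version):

    git gc                          # pack loose objects
    git count-objects -v            # "count" = loose objects, "in-pack" = packed ones
    # deltified objects show a delta depth and the SHA of their base object
    git verify-pack -v .git/objects/pack/pack-*.idx | head -20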


Yes, the problem really stems from the fact that git "understands" text files but not really anything else, so it can't make a good diff between, say, a JPEG and its updated version; it simply relies on compression for those other formats.

It would be nice to have a VCS that could manage these more effectively but most binary formats don't lend themselves to that, even when it might be an additional layer to an image.

I reckon there's still room for better image and video formats that would work better with VCS.


Exactly. If large files suck in git then that's because the git backend and cloning mechanism sucks for them. Fix that and then let us move on.


That's exactly what these changes do, but they don't become the default because a lot of people only store text in git, so they don't want the downsides of these changes.


What changes? The partial clone stuff doesn't help me, given that I generally want the large files to be checked out. And how does the large object provider stuff work if you're not using a git forge?


It is a solution. The fact beginners might not understand it doesn't really matter, solutions need not perish on that alone. Clone is a command people usually run once while setting up a repository. Maybe the case could be made that this behavior should be the default and that full clones should be opt-in but that's a separate issue.


"Will they remember to write the filter? Maybe, "

Nothing wrong with "forgetting" to write the filter, and then if it's taking more than 10 minutes, write the filter.


What? Why would you want to expose a beginner to waiting 10 minutes unnecessarily? How would they even know what they did wrong, or what a reasonable time to wait is? Ask ChatGPT "why is my git clone taking 10 minutes"?!

Is this really the best we can do in terms of user experience? No. Git needs to step up.


Git is not for beginners in general. And large repos are even less for beginners.

A beginner will follow the instructions in a README: "run git clone" or "run git clone --depth=1".


Because people take advantage of your kindness and leave you feeling used.


Unfortunately it's this attitude which perpetuates those kinds of actions. Of course it never starts off that way, it starts off as just wanting to protect yourself from harm, but you can eventually justify just about anything with the argument that its necessary for your "survival" (not literal survival, of course, but you get the idea).

"If I don't exploit this person's kindness now, I'll fall behind those who do and they'll use that leverage against me" gives you some idea


Actually, everyone starts off kind. That so many people end up that way speaks to the core of the human condition.


>Actually everyone starts off kind

You only need to hang around toddlers or teens for less than a day to realize people do not start off kind.

People start off egocentric, unaware or unable to take into account that the people around them are individuals with wants that conflict with their own. We're also unaware that we are egocentric BUT have a social instinct built into us: if we are surrounded by miserable people, or people angry at us, we don't feel good either.

So we learn that kindness, while sometimes initially painful or less opportunistic, in the long term leads to satisfaction.


Sorry, by "it doesnt start off that way" I didn't mean that people don't start off kind, I meant that people don't start off excusing exploitation


How so? Babies will bite their mothers trying to get food - it's instinctive, but it's not kind. Kindness needs to be taught despite any natural propensity towards it.


I think this is a cynical take-- you can be kind without being a doormat.


It's a very difficult balance to strike imo. People do take niceness and humor as signs that you're not quite as "professional". Of course, other people don't make this mistake, but we don't live in a vacuum - sometimes the jellybrains have control over our promotions.


The difficulty is why it requires intelligence to achieve. It is easy to be mean, and easy to be kind to your own detriment. Being kind while still thriving yourself takes thought.


That's because niceness and humor are often just a mask for being unsure, inconcise, or at worst plain unkind. Being kind is much harder, it requires thoroughly judging the situation, including considering own interests, and then responding in a genuine manner.


I think people are confusing what kindness means here.

It’s not about not protecting yourself against abuse but rather not taking advantage of people.

Being kind doesn’t mean you can’t compete or strategize but rather don’t cheat if you do.

Compassion and acts of charity are kindness.


“Do not mistake my kindness for weakness.” is a handy mantra to help avoid that.


That's a rationalization ... a justification for being unkind. Kind people simply don't say such things.


If you are so smart, why are people taking advantage of you?



It requires one's own mind to feel “taken advantage of” - if one is smart enough to be kind, one must remember to be kind to oneself as well, and not care about what the sad critters get from the leftovers.

Stoicism promotes exactly this virtue of understanding that you are in control of interpreting your own feelings.


On the other hand, your feelings don't exist in a vacuum, disconnected from your external state. If you genuinely feel taken advantage of, no amount of self-delusion is going to make you truly over it, until you acknowledge the source of it, and take steps to protect yourself against it in the future.

Very easy to over-extend stoicism to your own detriment, physically and mentally.


I really hate that you’re downvoted here - it’s a sad truth, too many in this world are here to “get the bag” and will do this to you. Over and over.


Especially people in this forum. Tech is a magnet for these types.


I'm pleased that such a cynical rationalization for not being a good person was downvoted.


It's the sad reality of the society we live in. Money matters the most. Nothing else.

Kind people always get taken advantage of at work. Others take the credit, and then you're left abandoned once there's no more value in you for the company. I guess that's just capitalism.


You need to move into a different industry/society. These things are not ubiquitous.


Agreed. We call those people assholes. We try our best to avoid hiring those people and we weed them out of our company as fast as possible if they're discovered. We also try to have as flat a structure as possible so nobody is taking credit for anyone else's work and ideally many of us are working together so we all share the glory or frustration when something goes well or not.


I do think the flat hierarchy thing is commendable for many reasons.

That said, don't think that just because you (try to) have few bosses, there isn't some form of hierarchy in which people take credit for other people's work.

Sure, maybe there's no boss by title that people suck up to and take credit for stuff to look good to them. But there very definitely will be the "alphas" in the group that everyone looks up to and wants to look good to and the taking credit for stuff will be done to impress those people.

So, if you weed out this kind of stuff successfully well enough, again, I commend you. But I doubt it's as complete as you may want to think. It's just a different looking game of favours and sucking up to with less easily visible (can't just look at title to figure out who to suck up to) lines.

For some people this will be positive as they're good at figuring out who to suck up to in that situation while others may need the title to figure that out. I bet many socially awkward / socially less aware people find it easier to navigate titles they can read in an org chart than sniffing these out of the "sociosphere".


There is no society where this doesn't happen.


Never has a colleague taken credit for the work I've done. On the contrary, often in demos and other presentations they've thanked or acknowledged my support even when they didn't need to if they were the driver. I know the world can be harsh but my work life experience gives me no reason at all to be cynical.


The Kite Runner and The Handmaid's Tale talk about child sexual abuse. I'm not condoning the ban, just pointing out that these two are not the same thing.

Worth adding: making the Bible available to common folk was also hotly contested at the time. The Puritans lost that fight and I suspect they will eventually lose this one too.


Without upgrading the wiring to a thicker gauge? That's not code compliant and is likely to cause a fire.


Sorry just to specify, it was more like a 20 amp I think (I will verify), it wasn't like I was going way higher.

I don't remember whether he ran another wire though. It was 5 years ago. Maybe I should not be spreading this anecdote without complete info.

He was a legit electrician that I've worked with for years, specifically because he doesn't cut corners. So I'm sure he did The Right Thing™.


If this is north america we're talking about, then 14 gauge is the standard for 120V 15A household circuits. By code, 20A requires 12 gauge. You'll notice the difference right away, it's noticeably harder to bend. Normally a house or condo will only have 15A wires running to circuits in the room. It's definitely not a standard upgrade, the 12 gauge wire costs a lot more per foot, no builder will do it unless the owner forks over extra dough.

Unless you performed the upgrade yourself or know for a fact that the wiring was upgraded to 12 gauge, it's very risky to just upgrade the breaker. That's how house fires start. It's worth it to check. If you know which breaker it is, you can see the gauge coming out. It's usually written on the wire.


I was actually under the impression that it is allowed depending on the length of the conductor, but it seems you are right. NEC Table 310.15(B)(16) shows the maximum allowed ampacity of 14 AWG cables is 20 amperes, BUT... there is a footnote that states the following:

> * Unless otherwise specifically permitted elsewhere in this Code, the overcurrent protection for conductor types marked with an asterisk shall not exceed 15 amperes for No. 14 copper, 20 amperes for No. 12 copper, and 30 amperes for No. 10 copper, after any correction factors for ambient temperature and number of conductors have been applied.

I could've sworn there were actually some cases where it was allowed, but apparently not, or if there is, I'm not finding it. Seems like for 14 AWG cable the breaker can only be up to 15 amperes.


There is a chance he did not run new wires if he was able to ascertain that the wire gauge was sufficient to carry 20 amps over the length of the cable. This is a totally valid upgrade though it does obviously require you to be pretty sure you know the length of the entire circuit. If it was Southwire Romex, you can usually tell just by looking at the color of the sheathing on the cable (usually visible in the wallboxes.)


This is facetious. Some protection is better than no protection.

If "every little bit helps" is true for the environment, it's also true for cryptography, and vice versa.


> Some protection is better than no protection.

No, not really.

Algorithms tend to fall pretty squarely in either the “prevent your sibling from reading your diary” or the “prevent the NSA and Mossad from reading your Internet traffic” camps.

Computers get faster every year, so a cipher with a very narrow safety margin will tend to become completely broken rapidly.


That classification has more steps.

Some things must be encrypted well enough so that even if NSA records them now, even 10 years or 20 years later they will not be able to decipher them.

Other things must be encrypted only well enough so that nobody will be able to decipher them close to real time. If the adversaries decipher them by brute force after a week, the data will become useless by that time.

Lightweight cryptography is for this latter use case.


The hard part is that anything designed to be breakable in a week is only ~150x away in strength from being broken in an hour. That means you need to be incredibly confident about how strong your algorithm is. It's much easier to eat a little bit of cost such that you think it's invulnerable for thousands of years because that way you don't need to worry about a factor of 2 here and there.
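Spelled out:

    1 week / 1 hour = (7 × 24) / 1 = 168 ≈ 150, i.e. less than 8 bits of margin (2^8 = 256)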


Right. Strength in a shadow contest like cryptography can, at best, be estimated to within sixteen-bit orders of magnitude (+-65000x). Just because you can't break it doesn't mean somebody else doesn't secretly know a game-changing way to break it. So you keep padding with huge exponential hedges such that if they can shave a dozen bits off the strength of the scheme, it's still secure under finite resources.

Playing close to the margin is super dangerous.


The environment? As in trees?

