wutwut5521's comments

When I did iOS dev I could get by okay with an iPod touch, since I did not need cell or GPS testing. These are $200 new, $150 refurbished for the previous generation. The other route to go is a device farm service; IIRC AWS offers this nowadays.


The Parler situation was shoddy engineering, nothing else. They chose a hosting provider with a completely incompatible ToS for their application. There are thousands of “free speech” providers out there. To make an analogy, they tried to publish porn on their Disney.com profile.

It was somewhat funny to watch them completely blow their one moment in time due to sheer incompetence.


I don't think that's accurate at all. Parler was condemned because of a much-publicized narrative about its role in hosting speech related to the J6 protests, right? And it was shown contemporaneously that there was far more activity coordinating those protests on other platforms (Facebook) than on Parler, yet they suffered no consequences. Nonetheless, Parler was blacklisted - app pulled (which looked like selective enforcement of ToS). When they attempted to set up on other platforms it was difficult to find a home, in part due to similar de jure application of ToS, but also arguably due to a lack of substantive competition in that space. I say de jure application of ToS because the ToS are not applied consistently, and seem discretionary from the outside.

It is not like porn on Disney, but more like (exactly like) talking politics on Facebook. Sometimes it's okay, sometimes it's not. Leaving aside flagrantly bad examples of talking politics, your best predictor of how ToS will be enforced is the alignment of the speech with official institutional messaging. We can see this at work in a very recent example - talking about the COVID lab leak hypothesis would get you banned when the government was messaging (and invested in) a natural origin hypothesis. Now that it is not, such speech will not get you banned. The nature of the speech has not changed.

Have the terms of service delineated these circumstances sufficiently? I would say not at all. We can infer what speech will be banned from acceptance criteria that are only loosely coupled to the actual language of the terms, and more easily explained by (documented) coordination between the government and social media platforms.

I think the "funny" you express may be schadenfreude based on your political alignment, but there are some transcendent and substantive issues of political principles that are worth examining (and I feel, criticizing) regardless of one's blue team/red team alignment. By transcendent, I mean these problems come back to bite both the red team and the blue team in time. This is classic "Cast it into the fire" type stuff.

https://www.youtube.com/watch?v=ajUlhaX9hQI

https://www.youtube.com/watch?v=GI_fvDOUCu8


So, you agree there is a market that violates the ToS of AWS.

Back to the original point, there you go. That is the market Trump's social network targets.


> are we to assume you enjoy seeing tons of 1-to-1 Wordle clones in the app store

The article is about a clone game app being rejected; the author is just complaining that this rule is not enforced with 100% consistency. If I had to guess, Apple hit some sort of clone limit on this concept and started cracking down on new submissions. So author missed the bus.

> and not caring about quality in the slightest

Dunno where this claim comes from, but the author literally spent 1 day on their app lol.


> So author missed the bus.

The author specifically mentions that many of these other clones were approved after Wörd was rejected.


> Dunno where this claim comes from, but the author literally spent 1 day on their app lol.

So you did not read the article. Got it.


I find the site useful and want to pay, but don’t want a browser extension.


Why would you be required to download the extension?


Agree on many of these points, but:

> marijuana > addictive chemicals

is one claim I am not familiar with (usually I hear “psychologically addictive”, like any pleasurable activity), but maybe I am out of date on the latest research. Anyways, swap coffee/caffeine for marijuana and this makes a bit more sense to me.


> is one claim I am not familiar with (usually I hear “psychologically addictive”, like any pleasurable activity), but maybe I am out of date on the latest research.

That's because there is a lot of misinformation spread by stoner types who try to rationalize cannabis usage as "non-addictive". Speaking as a former stoner myself (with nothing against smoking, I just don't partake anymore).

Cannabis use can absolutely cause physical dependence. It can absolutely "cause" (be part of) psychological dependence. It can absolutely be part of a pattern of addiction. The idea that weed is "not addictive" is simply propaganda.

Now, it is true that compared to drugs like benzodiazepines, alcohol, and opioids, cessation of long-term cannabis consumption causes much less severe effects. You don't get seizures from stopping weed abruptly. But there are absolutely still withdrawal symptoms.

TL;DR: It's a myth that cannabis is a magic plant that one can't get addicted to. It's basically a cope put forward primarily by internet stoners.


To clarify the point the parent is making; if BTC goes belly-up, how will you get those “computation/energy assets” returned to the coinholders?

Parent is just pointing out a fundamental difference on how the underlying assets actually relate to the security.


Ever try to convince an exchange or hosted wallet to give you back your money when it thinks otherwise?


The good news is that I don't need to rely on third parties to securely host and manage my wallets. It is still an option, but not a hard requirement for accessing global financial transactions.


These functions do different things though, right?

push() mutates an array in place, while concat() copies the existing array and returns a new array with the other array's elements appended, leaving the original untouched.

    > x=[]; x.push(...[1,2]); console.log(x);
    [1, 2]
    > x=[]; x.concat([1]); console.log(x);
    []
So clearly push should be faster in general in use cases like these, since it does not need to copy.

Edit: grammar and copy paste failures from my console


The issue is that the community 5-10 years ago heavily favored "pure" approaches like Array#concat rather than mutations like Array#push. So a whole new generation of developers was taught to avoid the mutating functions at all costs. "Never use push" is a common mantra in JS circles, and developers favored making a copy of an array even if that array was used nowhere else (where push would've been the appropriate choice). Maybe we're seeing a push back towards performance over purity?


> Maybe we're seeing a push back towards performance over purity?

No, because performance is generally not a huge concern on the front-end. I'm not applying ML strategies to hundreds of thousands of data points, I'm trying to render 10 elements instead of 9. Performance is so rarely a concern that I'd always err on the side of cleaner code over hyper-performant code. This stuff isn't even worth thinking about.


>performance is generally not a huge concern on the front-end

Everyone who has ever used a web app already knows this unfortunately.


You're like the third person to make this snarky comment in this thread. I promise you the JavaScript runtime is not the bottleneck.


Javascript isn't just a frontend language though, is it?

> I'm not applying ML strategies to hundreds of thousands of data points

Maybe you aren't, but the folks over at tensorflow.js [1] certainly are, as is Andrej Karpathy's ConvnetJS [2].

[1] https://www.tensorflow.org/js [2] https://cs.stanford.edu/people/karpathy/convnetjs/demo/class...


Which is why I specified I was talking about the front-end.


There's plenty of demand for and work on frontend JS performance as well, e.g., https://greensock.com/


You're being pedantic -- I'm not saying that performance-critical JS code doesn't exist, I'm saying that the push for readability over performance isn't going anywhere.


> This stuff isn't even worth thinking about.

If more front-end developers have this mindset, I'm beginning to understand how the web (and the desktop, via electron and such) has become the embarrassingly slow and unusable mess that it is nowadays.


I can promise you the JavaScript runtime is not the cause of whatever "slow and unusable mess" you've experienced.


True, but these things add up. It is definitely not a problem at one spot; make those spots 100 and the problem manifests itself very obviously.

I am a fan of expressive and readable code as well and have fully migrated to functional languages in the last 2-3 years. But I don't think in JS we have the luxury to ignore a 945x performance improvement on a very low-level building-block function. It's used in thousands of places.

So you know, I agree with your premise. As a compromise I'd make a utility that reads much better than `Array.push` but still uses it internally (if such a tool does not already exist).
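
Something like this, as a rough sketch (the name `appended` is made up here, not an existing API):

    // Hypothetical helper: reads like the "pure" style at call sites,
    // but mutates the target with push internally to avoid copying it.
    function appended(target, items) {
      // A plain loop avoids the argument-count limits that
      // target.push(...items) can hit on very large inputs.
      for (let i = 0; i < items.length; i++) {
        target.push(items[i]);
      }
      return target;
    }

    // Usage: accumulate chunks without re-copying the accumulator each time.
    const acc = [];
    appended(acc, [1, 2, 3]);
    appended(acc, [4, 5, 6]);
    console.log(acc); // [1, 2, 3, 4, 5, 6]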


Performance might not be a concern for you, but it is your users who should be deciding.


As if my users would ever notice. Some of them are still using IE9.


Immutable.js has good functionality to allow you to batch pure operations if you don’t actually need the intermediate values.
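
Something along these lines, as a sketch from memory of the Immutable.js List API (check the docs for the exact semantics of withMutations):

    // withMutations batches updates against a temporary mutable copy,
    // so you don't pay for an intermediate persistent List per operation.
    const { List } = require('immutable');

    const base = List([1, 2, 3]);

    const result = base.withMutations(list => {
      list.push(4);
      list.push(5);
      list.push(6);
    });

    console.log(result.toJS()); // [1, 2, 3, 4, 5, 6]
    console.log(base.toJS());   // [1, 2, 3] - the original is untouched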


It depends on the implementation. In some cases, where arrays are required to be stored in contiguous memory, appending to an array will result in a copy anyway since the entire contents will have to be moved to a new, larger block of memory.


Typically (I don't know about in Javascript in particular), a vector has both a length and a capacity. The capacity represents total allocated space (always >= the length). If excess capacity is available when you append, that gets used rather than having to copy the array. The capacity grows exponentially (powers of 2 or 1.5 or something). This means that if you take an empty vector and append N elements one at a time, it does O(N) total copying of existing values rather than O(N^2). Another way to put it is that a single append is amortized constant time.

Here's a stackoverflow answer which talks a bit more about this: https://cs.stackexchange.com/a/9382
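
A toy version of that length/capacity bookkeeping in JS, just to illustrate the doubling (real engines handle Array storage internally, so this is purely illustrative):

    // Illustrative growable vector with geometric capacity growth.
    class Vec {
      constructor() {
        this.length = 0;
        this.capacity = 1;
        this.storage = new Array(this.capacity);
      }
      push(value) {
        if (this.length === this.capacity) {
          // Double the capacity and copy once; over N appends the total
          // copying stays O(N), so each append is amortized O(1).
          this.capacity *= 2;
          const bigger = new Array(this.capacity);
          for (let i = 0; i < this.length; i++) bigger[i] = this.storage[i];
          this.storage = bigger;
        }
        this.storage[this.length++] = value;
      }
    }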


edit: oops, nvm


You're forgetting that half the array is never copied. The total number of copies doesn't exceed a linear cN, so it's O(N), not O(N log(N)).

The constant bound on the number of copies depends on the growth factor. With a growth factor of 2, the constant is 2. As long as the growth factor is greater than 1, the number of copies is linear.

You can verify this yourself with some trivial code:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Counts how many element copies the vector makes as it grows.
    struct CountsCopies {
      CountsCopies() {}
      CountsCopies(const CountsCopies&) { ++CopyCount(); }
      CountsCopies& operator=(const CountsCopies&) { ++CopyCount(); return *this; }
      static std::size_t& CopyCount() {
        static std::size_t count = 0;
        return count;
      }
    };

    int main() {
      std::size_t last_copies = 0;
      std::vector<CountsCopies> v;
      for (int i = 0; i < 1000; ++i) {
        const std::size_t copies = CountsCopies::CopyCount();
        if (copies != last_copies) {
          std::cout << "size=" << i << " copies=" << copies
                    << " ratio=" << copies / static_cast<double>(i) << '\n';
          last_copies = copies;
        }
        v.emplace_back();
      }
    }
Output:

    size=2 copies=1 ratio=0.5
    size=3 copies=3 ratio=1
    size=5 copies=7 ratio=1.4
    size=9 copies=15 ratio=1.66667
    size=17 copies=31 ratio=1.82353
    size=33 copies=63 ratio=1.90909
    size=65 copies=127 ratio=1.95385
    size=129 copies=255 ratio=1.97674
    size=257 copies=511 ratio=1.98833
    size=513 copies=1023 ratio=1.99415


No, since the early copies are of smaller arrays, they take far less than O(n) time each. The total time spent copying is 1 + 2 + 4 + ... + 2^m + ... + 2^(floor(log2 n)), which equals 2^(floor(log2 n) + 1) - 1 < 2n, or O(n).


The number of copies required is a geometric series. With a growth factor of 2x, the number of total copies is only 2N - 1. Thus, it is O(N).

It's O(N) for any geometric growth rate, but using 2x makes it easy to see, because every copy is one larger than all previous copies combined. Consider a concrete example for N=9: 1 + 2 + 4 + 8 = 7 + 8.


It really does depend on the implementation. Only the logical address space needs to be contiguous, not the underlying physical memory. A clever enough implementation could work hand-in-hand with a clever enough allocator, so that realloc would just add new logical pages to the end of the underlying buffer when it’s time to double its size without having to copy anything.


Your virtual address space needs room for that; clever allocators can't violate the laws of physics. Also, you can only do this at page-size granularity.

At any rate, it's not like copying will happen anyway during GC.


Is this really done? It seems like this approach would require all arrays to be allocated with at least a page's worth of memory, which seems incredibly wasteful if you have a bunch of small arrays with a handful of floats or something.


“It seems like this approach would require all arrays to be allocated with at least a page's worth of memory”

I don’t know whether it is done in practice, but you can postpone doing that until the array gets ‘big’, for some definition of ‘big’.


Allocating a bunch of small arrays is going to be wasteful (performance wise) anyway. That being said, I'm not sure if this is done frequently or not.


Yes, they are completely different functions with completely different purposes. If you're still doing mutable programming for some god-forsaken reason, using concat to mutate an array is a misuse of the function.


> If you're still doing mutable programming for some god-forsaken reason

I believe one such "god-forsaken reason" was given to us by the title of the link...

> JavaScript Array.push is 945x faster than Array.concat


I thought that the main cause of slowness was the fact that the accumulator array is being copied one time per array to concatenate. That means that the first array is actually being reallocated n times, where n is the number of input arrays. This is not a necessary feature of immutability; it's a problem with this particular use case.

Another big problem is that the article's benchmark is busted [1]. The author thought they were just concatenating two arrays of a fixed length a bunch of times. But what's actually happening is that arr1 is being built up because it is reused for each test case. That means that the concat version is doing A LOT of copying of the data. If you fix the test so that each run concatenates only two 50k arrays, concat is faster [2].

[1]: https://jsperf.com/javascript-array-concat-vs-push/226

[2]: https://jsperf.com/javascript-array-concat-vs-push/228
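
To illustrate the two accumulation patterns being compared (a rough sketch, not the article's actual benchmark code; `chunks` is just an illustrative name):

    // Build some chunks to merge (illustrative sizes).
    const chunks = Array.from({ length: 100 }, () => new Array(1000).fill(0));

    // concat: each iteration copies the whole accumulator plus the new chunk,
    // so the total copying grows quadratically with the number of chunks.
    let acc = [];
    for (const chunk of chunks) {
      acc = acc.concat(chunk);
    }

    // push: the accumulator is mutated in place, so each element is written
    // once (plus occasional amortized regrowth) and total work stays ~linear.
    const acc2 = [];
    for (const chunk of chunks) {
      acc2.push(...chunk);
    }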


Read GP's whole sentence; you're agreeing with them.


> If you're still doing mutable programming for some god-forsaken reason

What is mutable programming? Using mutable objects is something I do everyday. Am I doing things wrong?


> Am I doing things wrong?

No. Mutating state that is shared between different parts of a program is often a bad idea. The functional programming community learned the first half of that, and now goes round preaching the mistaken idea that all mutation is bad.


Not everyone, or even most of the FP community, does that. There is even a famous paper on how lambda is the ultimate imperative.


It is useful in single-threaded programs too. One example is avoiding deep copies everywhere (which are less performant than persistent data structures) so that you can hang on to multiple versions in time of some data.
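
For example, in plain JS you can keep older versions around cheaply by sharing the unchanged parts (a made-up snippet, not tied to any particular library):

    // Structural sharing: only the changed part is copied, so keeping
    // a history of versions stays cheap.
    const v1 = { settings: { theme: 'dark' }, items: [1, 2] };
    const v2 = { ...v1, items: [...v1.items, 3] };
    const history = [v1, v2]; // both versions remain valid and usable

    console.log(v1.settings === v2.settings); // true: unchanged part is shared
    console.log(v1.items === v2.items);       // false: changed part was copied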


I don't think they're referring to multiple threads when they say different parts of the program, but instead that you should be able to reason about mutation locally.

For example, if I pass an argument into a function, it may be 'unexpected' that the argument is mutated - I can not reason about that mutation locally (unless it's very explicit or a known idiom such as push).

However, within a function, avoiding mutation seems pointless as you should have no trouble reasoning about it. At some point you really are just throwing away performance with significantly diminishing benefits.

Shared mutability across threads is definitely a huge pain in the ass though.

In the end I think we're all just trying to reduce the state space we have to manage in our heads when we read and write code, and removing mutability reduces that space.


> However, within a function, avoiding mutation seems pointless as you should have no trouble reasoning about it. At some point you really are just throwing away performance with significantly diminishing benefits

Ok, but here you are doing all the manual work of creating a copy to avoid mutating the arg and returning a new one, and it may be less performant because of the whole copy. Knowing your programming language defaults to this automatically, and does it for you in a performant way, is a big win for reducing cognitive overhead in large programs.


I don't disagree, but the way Erlang (and thus Elixir) does it is that you get a small stack space to contain your immutable data -- and when you unwind the function the stack just gets thrown away instead of involving the GC, which I'd argue is still plenty fast.

I do agree that copying stuff around is generally expensive. I am just unsure how expensive it is in smaller functions that aren't called 10_000 times a second.


Immutability is a common part of functional programming approaches. In Javascript, it's especially common in the React+Redux community. Redux expects you will update data immutably, and React works best when you do immutable updates as well.

Here are some good overviews of why and how to do immutable updates in JS:

https://redux.js.org/faq/immutable-data

https://redux.js.org/recipes/structuring-reducers/immutable-...

https://daveceddia.com/react-redux-immutability-guide/
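
For instance, a typical immutable update in plain JS looks something like this (the state shape here is made up for illustration):

    // Illustrative immutable updates with spread syntax.
    const state = { todos: [{ id: 1, done: false }], filter: 'all' };

    // Add an item: build a new array instead of pushing into the old one.
    const added = {
      ...state,
      todos: [...state.todos, { id: 2, done: false }],
    };

    // Update an item: map to a new array, copying only the changed element.
    const toggled = {
      ...state,
      todos: state.todos.map(t => (t.id === 1 ? { ...t, done: !t.done } : t)),
    };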


The idea of "editing" stuff instead of creating new versions of them.

Whilst mutable programming is faster on write, it is much more difficult to figure out if something has changed. So for any function that only needs to do work when something has changed (e.g. a React component), it is much, much better to use an immutable style of programming, because you only have to see if the reference has changed as opposed to deeply comparing the current and previous objects.
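
Roughly (an illustrative snippet, not from any particular framework):

    // Reference equality vs. deep comparison for change detection.
    const prev = { items: [1, 2, 3] };
    const next = { ...prev, items: [...prev.items, 4] }; // new references

    console.log(prev === next);             // false: O(1) check detects change
    console.log(prev.items === next.items); // false

    // A mutating update like prev.items.push(4) keeps the same reference,
    // so detecting the change would require a deep compare.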


If you are doing deep structure compare to check if an object was "changed", you're doing "mutable programming" wrong.

IMHO, if you're doing deep compare on anything for any reason, it's usually a sign that the data model is on shaky ground.


I disagree that mutable code is inherently faster to write. Like any paradigm you can adopt, it probably feels that way if you start injecting immutability into an existing project, but sooner or later you settle into different design patterns which support it better, and it's not faster or slower to write. Probably faster to debug though.


Until hardware changes away from being an intrinsically imperative machine, immutable approaches will be slower simply because the hardware doesn't really support them.

We saw FP hardware leading to performance boosts kind of happen with GPUs (pixel and vertex shaders are just transformers), but then they went back to imperative again with GPGPU.


It is not much better; it is something different. It's popular because of the way React works, and it's not always the right choice just because it makes it easier to understand how state changes. Using Array push over concat shouldn't make understanding more difficult, and using the slower function for that reason is missing the point.


Only if some hardcore functional purist is around.


concat() does not mutate the array, it creates a new one.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Go back to 2017. Mutable programming is back because of the benefits it offers.


An efficient immutable vector can be concatenated much faster without any reallocation of either array; one would probably outperform .push for bigger arrays in this case.


How is that true? push shouldn't be reallocating the array on each call. You seem to be alluding to a linked list of arrays.


You might be thinking of libsass. sassc is dead simple and solves all of this user's issues.


Correct me if I'm wrong, but doesn't sassc require the user to compile it?


You have to compile it, just like this poster has to compile their Go code. There are usually packages and formulas for it in most standard repos.


You're right, on OSX you can `brew install sassc`.


there's a Debian package :) thanks for the tip :)


just got this working with fswatch and sassc, thought I'd feed back that this is perfect, thanks :)


Heh, I just woke up and read this and thought “well that's enough problem solving for today” :) Glad we could help you avoid the hell that is adding npm to a non-node project!


Another thank you from here! I've had to use node just for sass in the past and this is a much nicer solution.


Wow, the first readme linked is horribly ugly and provides no context above the fold. It looks like the zodiac killer’s rear bumper.

https://github.com/zold-io/zold

