
lol they blocked my account without any reason given. Sure. Practice what you preach.


How are they going to make money?


We've explained this in the blog post here: https://www.prisma.io/blog/prisma-the-complete-orm-inw24qjea...

I think this quote summarizes our plans nicely: "Prisma's vision is to democratize the custom data access layer used by companies like Facebook, Twitter and Airbnb and make it available to development teams and organizations of all sizes."

The open-source ORM we're launching today will of course remain open-source, and we'll keep investing in it since it will be the foundation for the commercial tools that folks will be able to use on top.


How does “democratize the custom data access layer” make money?


I have no idea what this means either. Sounds like a bunch of buzzwords about nothing.


It means that Prisma will provide a data access layer (we call it "application data platform" [1]) similar to the custom data access layers built by big companies [2] (e.g. TAO by Facebook or Strato by Twitter) that enables application developers to better access their databases.

We've explained that in the "Open-source, and beyond" section of the blog post [3].

Does that help? :)

[1] https://imgur.com/O1lwo0v.png

[2] https://imgur.com/Hb9VOWN.png

[3] https://www.prisma.io/blog/prisma-the-complete-orm-inw24qjea...


I think we all appreciate the links but unless I'm missing something, no, it doesn't help -- what's the plan to make money? That blog post doesn't mention anything along those lines at all.


They probably won't; they'll end up being sold to a bigger company, and the tool will be transferred to the "community" for maintenance. See RethinkDB.


I don't think they will.


RevenueCat. The rest are just copycats.


MIT license. Easy to copy and host cheaper. Curious how long this will work before you change it.


I was wondering the exact same thing. I think the MIT license is a really bad choice here, for reasons explained in this article: https://www.lesinskis.com/MIT-licence-community-hostile.html


We have an enterprise edition in the works and will launch it soon. Whatever is MIT now will stay MIT.


I hope the MIT core and its features will continue to be improved, not abandoned. Adding large new features as enterprise-only makes sense.


The community edition remains a core part of our vision and is also the base on which the enterprise features are built. So rest assured that we will keep working on improving it.


screenshotting


Haha! For sure.


It works surprisingly well though; I've been using it since the early betas.


Well, I did have one problem when I upgraded Homebrew on Big Sur. For some reason, `brew link unbound` returned a `/usr/local/sbin is not writable` error. I looked on Ask Different and found this [1] (for an earlier version of OS X). Basically: mkdir the directory (commands below).

[1] https://apple.stackexchange.com/questions/312330/brew-link-u...
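
In case it saves someone a search, the fix there boils down to creating the directory and taking ownership of it (paths and owner assume a default Homebrew setup; adjust to yours):

  sudo mkdir -p /usr/local/sbin               # create the missing directory
  sudo chown -R "$(whoami)" /usr/local/sbin   # make it writable by your user
  brew link unbound                           # retry the link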


That was the same with Catalina when it first came out: the same warning appeared, so I don't think this will be an issue. On Catalina it worked fine regardless of the warning.


I’ve read the reports, but I’m not seeing this. Normal workflow, and already over 11 GB swapped.


I think the point is that memory bandwidth and SSD bandwidth and latency have improved so much that M1-based Macs are really fast at swapping. Combine that with compression (which macOS has done for a long time) and I could imagine iPadOS-like performance.

For normal users (such as people not compiling V8), swapping may be virtually unnoticeable.
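
For what it's worth, you can watch both mechanisms at work with the stock macOS tools:

  sysctl vm.swapusage              # total / used / free swap
  vm_stat | grep -iE 'compr|swap'  # compressor pages, swapins/swapouts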


Macs have had fast SSDs for a while, and as you say, compression has also been around for a long time. The M1 made no difference whatsoever to memory usage. Posts like the above seem to be an attempt to fanfic-justify what is, for some, an actual regression.

For many people 16GB is absolutely enough, and Apple had to start somewhere. But if 16GB wasn't enough for you yesterday, it isn't enough for you now.


The reviews of the M1 MacBook Air and Pro show a dramatic increase in SSD read/write speeds over the 2020 Intel models.


Do you have a link? MKBHD just said it was "slightly faster" than his comparison system, and the numbers for the M1 appear to be around 3 GB/s reads and 2.7 GB/s writes in a disk speed test, which is pretty typical of a last-generation NVMe drive (current PCIe Gen4 drives are hitting 7 GB/s reads and 5 GB/s writes).


> which is pretty typical of a last-generation NVMe drive

Those numbers are about 2-3x as fast as what my 2020 Intel-based MacBook Air gets.


Then it sounds like Apple put some really garbage SSDs in the 2020 Intel-based Air; not sure what you're looking for here?

But 2-3x won't change swap from grinding to a halt to perfectly smooth, either. It's still over 20x slower than RAM.

More significant for swap usage, though, are random reads and read latency, neither of which is going to be particularly impressive on the M1's SSDs. You need something like Optane to make that a compelling case.


It's possible that the M1 has enabled the use of better memory compression algorithms.


I do wonder if it's taking advantage of the cheap CPU power and memory bandwidth to be much more aggressive about memory compression. That could reduce swapping for some workloads, though probably not a V8 build.


What do you mean exactly? It is normal for swapping to happen even when you are using less than the physical memory: the OS trades rarely-used RAM pages for frequently-used disk cache, which improves overall throughput.


Thanks for reporting! Bummer; I think I will have to wait for an M2.


You can use arch -x86_64 and install the Intel versions. Small perf overhead, but not really noticeable.
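
For example, to get an x86_64 shell and work from there (arch ships with Big Sur):

  arch -x86_64 zsh   # start a shell under Rosetta 2
  arch               # inside it, prints the architecture the shell runs as
  # install and run the Intel builds from within this shell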


Does MacOS auto detect that they are x86 binaries and run them through Rosetta automatically after installing?


Yes.
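
You can check which architectures a binary contains with `file` (a standard macOS tool); macOS reads the Mach-O header and routes x86_64-only executables through Rosetta:

  file /bin/ls      # universal binary: lists both x86_64 and arm64 slices
  file ./some-tool  # hypothetical path; an x86_64-only binary runs translated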


Unless you use anything that does JIT compilation (e.g. the JVM): JetBrains IDEs, for example, are really slow under Rosetta.

Another pain point is code that relies heavily on SIMD. AVX/AVX2/AVX-512 instructions are not translated, so in those cases you'll be using slower kernels. My machine learning models are ~10 times slower with libtorch under Rosetta, though they are competitive when libtorch is compiled natively.

(Source: I have used an M1 Air for a week, but decided to return it.)
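
If you're ever unsure whether a given shell is running translated, Apple exposes a sysctl for it:

  sysctl -n sysctl.proc_translated   # 1 = running under Rosetta 2, 0 = native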


Does this mean Python performance is finally competitive with those fancy JIT languages now?


Nope.


> used an M1 Air for a week, but decided to return it.

out of curiosity, why did you return it?


This project has no issue tracker.


Author here: I’m gay, not cis.

But: I agree that women have it even harder - I can only share my own experience though, in the hope it’s interesting to some folks.


"cis white male" is a phrase quoted directly from your piece. I double-checked before posting to make sure I got it right. And double-checked again before replying here.


fyi, "cis" generally means "your gender identity = the gender you were assigned at birth", as opposed to being trans. it's possible to be gay + trans, or gay + cis, etc, but being gay doesn't describe anything about cis vs trans in-and-of-itself.

(thank you for the post, by the way! i thought there were some genuinely neat little tips in there.)


Ah, Hacker News. Never disappoints. Was hoping to inspire some folks.


I think it's a huge step forward that internet folk can discount you now because "all you are" is a gay white man expressing your beliefs.


At least you got the traffic. Never look a gift horse... etc etc!


Traffic from a site that distills your blog post to "cringy" might not be something you want.

