No, because the trick is that you're paying a knowledgeable person to run the benchmark. That person would presumably actively iterate on the benchmarks and try to detect and prevent cheating.
"we are already safe from privacy concerns because we've already regulated facial recognition! Why spend any more money on privacy, isn't it enough? We have more pressing issues to deal with!"
Statements like that make great TV news commentary, but they have little basis in reality. Governance rarely deals in sweeping structural changes. Even things generally thought of as such, like Civil Rights, came from a long series of small reforms leading to bigger reforms.
Except with Google it’s becoming impossible to find links from several years ago, unless I remember exactly the right keywords and maybe the website name...
How is this FOMO? The entire point of bookmarks is “maybe I’ll come back later to this”. Bookmarks let you save content you come across that may not be relevant right now but could be useful later. There is no penalty for saving pages because space is virtually unlimited, and we can only collect so many bookmarks anyway. Searching through bookmarks isn’t a difficult task, and it’s way better than relying on Google.
> The entire point of bookmarks is “maybe I’ll come back later to this”
I think for me there is so much information coming at me every day that I rarely have time to go back to something later. I bookmark a select number of sites for projects/interests I am working on, but I couldn't deal with a backlog of ~5k bookmarks. I just don't have time for managing/triaging/reviewing that.
My comment about FOMO addresses the fear of 'what if I can't find this stuff again'. It's more akin to a hoarding mentality, I suppose. How many people's garages are full of junk they 'think they might come back to later'? It's something I actively eschew, because if I allow that thought process to enter my life I'll be hindered by it, and the ability to let go is important to my happiness and wellbeing. I literally can't sleep at night if I think I might have missed out on archiving something I might need. I find it enlightening to treat the world as ephemeral.
I freely admit different people work/think/live in different ways and perhaps my original comment was a little flippant.
Google (and related services, such as YouTube) allows you to use before:YYYY-MM-DD and after:YYYY-MM-DD to show only results that were created before or after a certain date.
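For example, if you remember reading something around 2012 you can narrow the results to that window (the query terms here are just an illustration):

    rust web framework after:2012-01-01 before:2013-01-01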
I think you're missing the wood for the trees a bit there. The point is that there's so much on Google now - and a lot of it is effectively just crap data like Pinterest posts - that it's becoming increasingly difficult to find specific needles in that haystack. If I know that at some point in the last 10 years I've read an article online that could have been written at any point in the last 30 years, before and after dates don't really help.
You're suggesting spending a lot of time and hassle trying to wrestle with search engines instead of just keeping a bookmark that can be found and accessed very quickly?
2600 has an essay, "The Mysteries of the Hidden Internet", on this phenomenon in their autumn 2019 quarterly. It was quite a fun read! Check it out at Barnes and Noble. Or buy a copy (https://store.2600.com/products/autumn-2019).
We just use hyper directly, with a small amount of glue code to use serde_json and serde_urlencoded for body parsing, and a very simple (and very fast) router of our own creation. This approach also made it very simple for us to introduce std-future/async-await in a gradual way.
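For anyone curious, a minimal sketch of that approach, assuming hyper 0.13 on the tokio runtime. The /users route and CreateUser type are made up for illustration, and the match on (method, path) stands in for the router:

    use hyper::service::{make_service_fn, service_fn};
    use hyper::{Body, Method, Request, Response, Server, StatusCode};
    use serde::Deserialize;

    // Hypothetical request body type, just for illustration.
    #[derive(Deserialize)]
    struct CreateUser {
        name: String,
    }

    // The match below plays the role of the router; ours is a separate
    // component, but the handler shape is the same.
    async fn handle(req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
        match (req.method(), req.uri().path()) {
            (&Method::POST, "/users") => {
                // Buffer the body, then hand it to serde_json.
                let bytes = hyper::body::to_bytes(req.into_body()).await?;
                match serde_json::from_slice::<CreateUser>(&bytes) {
                    Ok(user) => Ok(Response::new(Body::from(format!("created {}", user.name)))),
                    Err(_) => Ok(Response::builder()
                        .status(StatusCode::BAD_REQUEST)
                        .body(Body::from("invalid json"))
                        .unwrap()),
                }
            }
            _ => Ok(Response::builder()
                .status(StatusCode::NOT_FOUND)
                .body(Body::empty())
                .unwrap()),
        }
    }

    #[tokio::main]
    async fn main() {
        let make_svc = make_service_fn(|_conn| async { Ok::<_, hyper::Error>(service_fn(handle)) });
        let addr = ([127, 0, 0, 1], 3000).into();
        Server::bind(&addr).serve(make_svc).await.unwrap();
    }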
I've been in the process of switching to tonic for the last few weeks. It's based on hyper/tower and has async/await support. It's a gRPC server rather than a plain HTTP one, so I've been using it with grpc-gateway to provide an HTTP OpenAPI v2 interface.
It has automated quite a few things I found dull to do in other server frameworks. From the gRPC .proto file I can generate:
- Rust server request/response types and request handler stubs.
- grpc-gateway files and an OpenAPI v2 JSON spec.
- Client-side TypeScript types and request/response handlers from the OpenAPI spec.
So now the process of adding a new endpoint is much less time-consuming and involves writing less repetitive, error-prone code. As mentioned above, I've only been using it for a few weeks, but so far it has been great: easier to use than actix-web, and it feels no less responsive.
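To give a feel for it, here's roughly what the server side looks like once tonic-build has generated the types. This is modeled on tonic's helloworld example; the hello.proto service and message names are placeholders, not my actual API:

    use tonic::{transport::Server, Request, Response, Status};

    // Generated from a hypothetical hello.proto by tonic-build in build.rs.
    pub mod hello {
        tonic::include_proto!("hello");
    }
    use hello::greeter_server::{Greeter, GreeterServer};
    use hello::{HelloReply, HelloRequest};

    #[derive(Default)]
    struct MyGreeter;

    #[tonic::async_trait]
    impl Greeter for MyGreeter {
        // One stub per rpc in the .proto; the request/response types
        // are all generated for you.
        async fn say_hello(
            &self,
            req: Request<HelloRequest>,
        ) -> Result<Response<HelloReply>, Status> {
            let reply = HelloReply {
                message: format!("Hello {}", req.into_inner().name),
            };
            Ok(Response::new(reply))
        }
    }

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        Server::builder()
            .add_service(GreeterServer::new(MyGreeter::default()))
            .serve("127.0.0.1:50051".parse()?)
            .await?;
        Ok(())
    }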
Noice. To me, Gotham makes more sense because it depends on stable, whereas Rocket uses nightly. Anything depending on nightly seems inherently more fragile and un-future-proof (no pun intended).
I've been following this quite closely. Most of the work seems to be done, but development is very stop-start, so it's hard to tell how much longer it will take.
Actix was not at all "shit". People just took issue with a few usages of unsafe and with how the author responded to issues/PRs. I've been using actix for about a year now, and was excited for the 0.2.0 updates.
That said, pulling the code entirely and deleting issues was a rather immature response.
As a Rust user I just don't know what another good alternative is. Warp, maybe? I'm debating just writing HTTP stuff in Golang, due to its robust standard library, instead of picking among several fledgling third-party HTTP libraries.
How about we don’t victim-blame people, and instead decry the unconstitutional seizure of assets.